This project explores adapting TensorFlow's neural machine translation (NMT) model to the text simplification task. It is similar to Neural Text Simplification, which is based on OpenNMT. An interactive demo is served at simpletext.xyz.
- Clone the repository to your local machine recursively:
git clone --recursive https://github.com/captainjtx/SimpleText.git
- Install the required Python packages:
cd SimpleText
pip install -r requirements.txt
- Download the pretrained models into a local directory (./model):
mkdir model
python script/download_models.py
- Run inference with one of the pretrained models (seq2seq, 2-layer LSTM with attention, dropout 0.25, more info). The default input is test/complex.txt and the default output is test/inference.txt (a rough sketch of the underlying inference call is given after these commands):
mkdir test
cat "Science Fantasy is a genre where elements of science fiction and fantasy co-exist." > test/complext.txt
./script/test_attention.sh
less test/inference.txt
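For reference, ./script/test_attention.sh presumably wraps the inference entry point of the tensorflow/nmt submodule. Assuming the submodule sits at nmt/ in the repository root and the downloaded checkpoint lives in ./model (both assumptions, not verified against the script), the underlying call would look roughly like:

python -m nmt.nmt \
    --out_dir=./model \
    --inference_input_file=test/complex.txt \
    --inference_output_file=test/inference.txt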
Our models are trained on the Wikipedia corpus. We performed further data cleaning, keeping only sentence pairs whose simplified side is shorter than the original one (thresholded at 80%). After that, subword tokenization (byte-pair encoding, BPE) was applied to tackle the out-of-vocabulary problem. A Jupyter notebook is provided that walks through the complete preprocessing pipeline, including downloading the dataset, thresholding the sentence reduction, and performing subword segmentation.
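As a rough illustration of the length filter (the notebook is the authoritative reference), the sketch below keeps a sentence pair only when the simplified side is at most 80% as long as the original, assuming the threshold is a token-length ratio; the file names are hypothetical. BPE segmentation (e.g. with subword-nmt) is then applied to both sides of the filtered corpus.

def keep_pair(complex_sent, simple_sent, ratio=0.8):
    # Keep the pair only if the simple side is at most `ratio` times
    # the token length of the complex side.
    return len(simple_sent.split()) <= ratio * len(complex_sent.split())

with open("wiki.complex") as src, open("wiki.simple") as tgt, \
     open("train.complex", "w") as out_src, open("train.simple", "w") as out_tgt:
    for c, s in zip(src, tgt):
        if keep_pair(c, s):
            out_src.write(c)
            out_tgt.write(s)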
- Launch Jupyter Notebook:
jupyter notebook
- Open WikNet_Explore.ipynb and run it step by step.
- Train on the generated dataset using nmt (a hedged sketch of the underlying training command follows below):
./script/train_nmt_attention_bpe.sh
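The exact hyper-parameters live in the script itself, but a typical tensorflow/nmt training invocation for a 2-layer attentional LSTM with dropout 0.25 on BPE-segmented data looks roughly like the following; the data prefixes, vocabulary path, and unit count are placeholders, not the script's actual values:

python -m nmt.nmt \
    --attention=scaled_luong \
    --src=complex --tgt=simple \
    --vocab_prefix=data/vocab.bpe \
    --train_prefix=data/train.bpe \
    --dev_prefix=data/dev.bpe \
    --test_prefix=data/test.bpe \
    --out_dir=./model \
    --num_layers=2 --num_units=512 \
    --dropout=0.25 \
    --metrics=bleu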