Reproducing Hu et al., "Toward Controlled Generation of Text" (ICML 2017) in PyTorch. This work is for the University of Bonn's NLP Lab project in the 2017/2018 winter semester.
We modified the softmax layer of the VAE. Each GRU time step produces a vector whose length equals the word embedding dimension, and we minimize the distance between that output and the pretrained word embedding (GloVe) of the target word. This lets us drop the softmax layer entirely and feed the output embedding directly as the input to the next time step. Refer to the 'nosoftmax' branch for the implementation.
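A minimal sketch of the idea is below. The class name, the projection layer, and the choice of mean-squared error as the distance are our assumptions for illustration; the 'nosoftmax' branch may differ in its details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoSoftmaxDecoder(nn.Module):
    """Sketch: each GRU step emits a vector in embedding space, trained to
    be close to the pretrained GloVe embedding of the target word; the
    emitted embedding is fed back as the next input (no softmax)."""

    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.gru = nn.GRUCell(emb_dim, hidden_dim)
        self.proj = nn.Linear(hidden_dim, emb_dim)  # hidden -> embedding size

    def forward(self, start_emb, h, target_embs):
        # start_emb: (batch, emb_dim), h: (batch, hidden_dim)
        # target_embs: (seq_len, batch, emb_dim) GloVe vectors of the targets
        inp, loss, outputs = start_emb, 0.0, []
        for t in range(target_embs.size(0)):
            h = self.gru(inp, h)
            emb = self.proj(h)                          # predicted embedding
            loss = loss + F.mse_loss(emb, target_embs[t])  # distance to GloVe
            outputs.append(emb)
            inp = emb             # feed the embedding to the next time step
        return torch.stack(outputs), h, loss
```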
- Python 3.5+
- PyTorch 0.3
- TorchText https://github.com/pytorch/text
- Run `python train_vae.py --save {--gpu}`. This will create `vae.bin`. Essentially this is the base VAE as in Bowman, 2015 [2]; a sketch of its objective is given after this list.
- Run `python train_discriminator.py --save {--gpu}`. This will create `ctextgen.bin`. The discriminator uses the architecture of Kim, 2014 [3] (sketched below) and the training procedure of Hu, 2017 [1].
- Run `python test.py --model {vae, ctextgen}.bin {--gpu}` for basic evaluations, e.g. conditional generation and latent interpolation (see the interpolation sketch below).
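For reference, a minimal sketch of the Bowman-style VAE objective with KL-cost annealing. The function name and the annealing weight argument are illustrative, not this repo's exact code:

```python
import torch
import torch.nn.functional as F

def vae_loss(logits, targets, mu, logvar, kl_weight):
    # logits: (batch * seq_len, vocab), targets: (batch * seq_len,)
    recon = F.cross_entropy(logits, targets)
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1 - logvar, dim=1).mean()
    # kl_weight is annealed from 0 to 1 during training (Bowman, 2015 [2])
    return recon + kl_weight * kl
```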
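The Kim, 2014 [3] discriminator runs parallel convolutions over word embeddings followed by max-over-time pooling. A self-contained sketch, using the paper's default filter sizes and counts (not necessarily this repo's values):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KimCNN(nn.Module):
    def __init__(self, emb_dim, n_filters=100, kernel_sizes=(3, 4, 5), n_classes=2):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, n_filters, (k, emb_dim)) for k in kernel_sizes])
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, x):
        # x: (batch, seq_len, emb_dim) word embeddings
        x = x.unsqueeze(1)                                   # add channel dim
        feats = [F.relu(conv(x)).squeeze(3) for conv in self.convs]
        pooled = [F.max_pool1d(f, f.size(2)).squeeze(2) for f in feats]  # max over time
        return self.fc(torch.cat(pooled, dim=1))             # class logits
```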
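Latent interpolation linearly blends the latent codes of two sentences and decodes each intermediate point; `decode` below stands in for whatever sampling routine the model exposes (a hypothetical name):

```python
import torch

def interpolate(decode, z1, z2, c, n_steps=8):
    # Walk a straight line between latent codes z1 and z2 and decode each
    # point under a fixed attribute code c (e.g. sentiment).
    for t in torch.linspace(0, 1, n_steps):
        z = (1 - t) * z1 + t * z2
        print(decode(z, c))
```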
Differences from Hu, 2017 [1]:
- The model is conditioned on sentiment only, i.e. there is no tense conditioning.
- Training uses the SST dataset exclusively, which has only ~2800 sentences after filtering. This might not be enough and could lead to overfitting; in the original work [1], the base VAE is first trained on a larger dataset.
- Most of the hyperparameter values are different.
- [1] Hu, Zhiting, et al. "Toward controlled generation of text." International Conference on Machine Learning. 2017.
- [2] Bowman, Samuel R., et al. "Generating sentences from a continuous space." arXiv preprint arXiv:1511.06349 (2015).
- [3] Kim, Yoon. "Convolutional neural networks for sentence classification." arXiv preprint arXiv:1408.5882 (2014).