Variational LSTM-Autoencoder

This project implements a variational LSTM sequence-to-sequence architecture for a sentence auto-encoding task. It broadly follows the papers "Variational Recurrent Auto-Encoders" and "Generating Sentences from a Continuous Space". Most of the implementation of the variational layer is adapted from "y0ast/VAE-torch".

Description

Following the two papers above, the variational layer is inserted only between the last hidden state of the encoder and the first hidden state of the decoder, and works in four steps (a code sketch follows the list):

  1. Compute the mean and variance of the posterior q(z|x) from the last encoder hidden state, using a 2-layer MLP encoder.

  2. Compute the KL-divergence (KLD) loss between the estimated posterior q(z|x) and the enforced prior p(z).

  3. Draw a latent sample z via the reparameterization trick.

  4. Produce the first hidden state of the decoder from z, using a 2-layer MLP decoder.
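
The sketch below illustrates these four steps in Torch7/nngraph. It is a minimal illustration, not the repository's actual code: the sizes (`rnn_size`, `latent_size`), the variable names, and the use of Tanh activations in the MLPs are assumptions.

```lua
require 'torch'
require 'nn'
require 'nngraph'

local rnn_size, latent_size = 256, 64  -- illustrative sizes, not from the repo

-- Step 1: a 2-layer MLP encoder maps the last encoder hidden state
-- to the mean and log-variance of the posterior q(z|x).
local h_enc  = nn.Identity()()
local hid    = nn.Tanh()(nn.Linear(rnn_size, rnn_size)(h_enc))
local mu     = nn.Linear(rnn_size, latent_size)(hid)
local logvar = nn.Linear(rnn_size, latent_size)(hid)

-- Step 3: reparameterization, z = mu + sigma * eps with eps ~ N(0, I),
-- so gradients can flow through the sampling step.
local eps   = nn.Identity()()
local sigma = nn.Exp()(nn.MulConstant(0.5)(logvar))  -- sigma = exp(0.5 * log sigma^2)
local z     = nn.CAddTable()({mu, nn.CMulTable()({sigma, eps})})

-- Step 4: a 2-layer MLP decoder produces the first decoder hidden state.
local dhid  = nn.Tanh()(nn.Linear(latent_size, rnn_size)(z))
local h_dec = nn.Tanh()(nn.Linear(rnn_size, rnn_size)(dhid))

local vae_layer = nn.gModule({h_enc, eps}, {h_dec, mu, logvar})

-- Step 2: KL divergence between q(z|x) = N(mu, sigma^2) and p(z) = N(0, I):
--   KLD = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
local function kld_loss(mu, logvar)
   local kld = logvar:clone():add(1)
   kld:add(-1, torch.pow(mu, 2))
   kld:add(-1, torch.exp(logvar))
   return -0.5 * torch.sum(kld)
end

-- Example forward pass for a batch of one:
local h   = torch.randn(1, rnn_size)
local e   = torch.randn(1, latent_size)
local out = vae_layer:forward({h, e})   -- {h_dec, mu, logvar}
print(kld_loss(out[2], out[3]))
```

During training the KLD term is added to the reconstruction loss; "Generating Sentences from a Continuous Space" anneals its weight from 0 to 1 so the decoder does not learn to ignore z.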

Dependencies

This code requires Torch7 and the nngraph package.
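
Assuming a standard Torch7 setup, nngraph can usually be installed through LuaRocks:

```
luarocks install nngraph
```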

Usage

  • Training on GPU: th VLSTM-Autoencoder.lua -gpuid 0
  • Sampling on GPU: th sample.lua -gpuid 0 -cv cv/checkpoint -data dataset/test
