yanmingl/NaturalLanguageProcessing


We use sequence models to encode textual input and decode responses, and plan to experiment with various RL agents on the Cornell Movie Dialogues dataset. Starting from a Seq2Seq model as the baseline, we adapted a transformer for faster training and better metric scores. We also expect reinforcement learning to push the chatbot toward more positive emotional responses. Finally, deploying the best model, after tuning, as a website/app would be worth trying.
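
As a rough illustration of the Seq2Seq baseline described above (a sketch, not the repository's actual code), the example below wires up a GRU encoder–decoder in PyTorch and runs one teacher-forced training step. The vocabulary size, hidden size, special-token IDs, and the random fake batch are all assumptions made for the example.

```python
# Minimal Seq2Seq baseline sketch (illustrative only; not this repository's code).
# Assumes dialogue pairs are already tokenized into integer ID tensors.
import torch
import torch.nn as nn

PAD, SOS, EOS = 0, 1, 2          # assumed special-token IDs
VOCAB_SIZE = 8000                # assumed vocabulary size
HIDDEN = 256                     # assumed hidden/embedding size

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN, padding_idx=PAD)
        self.gru = nn.GRU(HIDDEN, HIDDEN, batch_first=True)

    def forward(self, src):                      # src: (batch, src_len)
        _, hidden = self.gru(self.embed(src))    # hidden: (1, batch, HIDDEN)
        return hidden

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN, padding_idx=PAD)
        self.gru = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, tgt, hidden):              # tgt: (batch, tgt_len)
        output, hidden = self.gru(self.embed(tgt), hidden)
        return self.out(output), hidden          # logits: (batch, tgt_len, vocab)

# One teacher-forced training step on a batch of (prompt, response) ID tensors.
encoder, decoder = Encoder(), Decoder()
criterion = nn.CrossEntropyLoss(ignore_index=PAD)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

src = torch.randint(3, VOCAB_SIZE, (4, 10))      # fake batch of prompts
tgt = torch.randint(3, VOCAB_SIZE, (4, 12))      # fake batch of responses

optimizer.zero_grad()
hidden = encoder(src)
logits, _ = decoder(tgt[:, :-1], hidden)         # predict the next token at each step
loss = criterion(logits.reshape(-1, VOCAB_SIZE), tgt[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```

The transformer and RL variants mentioned above would replace the GRU modules and the cross-entropy objective, respectively, but follow the same encode/decode loop.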

Still working on this project...

About

Chatbot implemented with a seq2seq model and a transformer.
