Solving Hangman Game using Neural Network
The solution is inspired by the training approach of BERT: masked language modeling (MLM). MLM is a fill-in-the-blank task in which the model uses the context words surrounding a [MASK] token to predict what the masked word should be. For example:
Input Text: Hangman is a classic [MASK] game
Label: [MASK] = word
In the Hangman game, the task becomes:
Input Text: HANG#AN
Label: # = M
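In code, each masked word can be turned into a (features, label) pair. The snippet below is a minimal sketch of this encoding, assuming a 26-letter alphabet plus mask and padding symbols; the function names, id scheme, and fixed-length padding are illustrative, not the repository's actual preprocessing.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
PAD_ID, MASK_ID = 0, 27  # assumed ids: 0 = padding, 1-26 = a-z, 27 = mask

def encode_word(masked_word, max_len=20):
    """Map a partially revealed word, e.g. 'hang#an', to integer ids."""
    ids = [MASK_ID if ch == "#" else ALPHABET.index(ch) + 1 for ch in masked_word]
    ids += [PAD_ID] * (max_len - len(ids))  # right-pad to a fixed length
    return np.array(ids, dtype=np.int64)

def make_example(word, hidden_positions, max_len=20):
    """Mask the given positions and use a hidden letter as the label."""
    masked = "".join("#" if i in hidden_positions else ch for i, ch in enumerate(word))
    label = ALPHABET.index(word[hidden_positions[0]])  # 0-25, one target letter
    return encode_word(masked, max_len), label

x, y = make_example("hangman", hidden_positions=[4])  # 'hang#an' -> label for 'm'
```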
- python3.6
git clone https://github.com/aarontong95/HangmanAI.git
64 GB of RAM is needed for training. Otherwise, you can lower the value of FRAC in config.py, which is the proportion of the training split that is used.
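For reference, FRAC can be thought of as a simple subsample applied to the training words before examples are generated. The sketch below assumes FRAC is defined in config.py (as the README states) and that the training words are held in a Python list; the helper name and seeding are hypothetical.

```python
import random
from config import FRAC  # e.g. FRAC = 1.0 uses the full training split

def subsample_train_words(words, frac=FRAC, seed=42):
    """Keep only a fraction of the training words to reduce memory use."""
    rng = random.Random(seed)
    k = max(1, int(len(words) * frac))
    return rng.sample(words, k)
```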
python train.py
The success rate is about 44% in out-of-sample testing
python test.py
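The out-of-sample figure can be reproduced in spirit by simulating one full game per held-out word and counting wins. The loop below is a minimal sketch, assuming a guess(model, pattern, guessed) helper like the one sketched after the component list below; it is not the actual interface of game.py or test.py.

```python
def evaluate(model, test_words, max_lives=6):
    """Estimate the win rate by playing one game per held-out word."""
    wins = 0
    for word in test_words:
        lives, guessed, pattern = max_lives, set(), ["#"] * len(word)
        while lives > 0 and "#" in pattern:
            letter = guess(model, "".join(pattern), guessed)  # assumed player API
            guessed.add(letter)
            if letter in word:
                pattern = [c if c in guessed else "#" for c in word]  # reveal hits
            else:
                lives -= 1  # wrong guess costs a life
        wins += "#" not in pattern
    return wins / len(test_words)
```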
- Generate training samples for the model to learn (preprocess.py)
- Bidirectional LSTM as the model (model.py); a minimal model sketch follows this list
- Hangman game environment (game.py)
- Play the Hangman game with the model (palyer.py); a greedy-guess sketch also follows
- More details in Solution.ipynb
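As a rough illustration of the model component, here is a minimal bidirectional LSTM in Keras that maps a masked character sequence to a distribution over the 26 letters. Every detail here (the id scheme, embedding size, LSTM width, loss) is an assumption for the sketch, not the values used in model.py.

```python
from tensorflow.keras import layers, models

def build_model(max_len=20, vocab_size=28, embed_dim=64, lstm_units=128):
    """Assumed id scheme: 0 = padding, 1-26 = a-z, 27 = mask."""
    inp = layers.Input(shape=(max_len,))
    x = layers.Embedding(vocab_size, embed_dim, mask_zero=True)(inp)
    x = layers.Bidirectional(layers.LSTM(lstm_units))(x)  # read the word in both directions
    out = layers.Dense(26, activation="softmax")(x)  # probability of each letter a-z
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model
```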
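The player logic can be as simple as greedy selection: feed the current pattern through the model and guess the most probable letter that has not been tried yet. This sketch reuses encode_word and ALPHABET from the encoding example above and is an illustration of the idea, not the code in palyer.py.

```python
import numpy as np

def guess(model, pattern, guessed, max_len=20):
    """Return the highest-probability letter not guessed so far."""
    probs = model.predict(encode_word(pattern, max_len)[None, :], verbose=0)[0]
    for letter_id in np.argsort(probs)[::-1]:  # letters, most to least likely
        letter = ALPHABET[letter_id]
        if letter not in guessed:
            return letter
```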