This project trains and tests a genetic / heuristic-based Tetris AI. It contains two main scripts:

- `tetris.py` — a trainer that evolves genomes (heuristic weight sets).
- `tetris_play_genome.py` — a tester that plays a single genome (visual or headless).

Genomes are stored as JSON files. Example genomes are provided in the `genomes/` directory.
## Project structure

```
.
├── genomes/               # Example genomes (JSON). Users can contribute via PRs.
├── tetris.py              # Evolutionary trainer
├── tetris_play_genome.py  # Play / test a single genome
└── README.md              # This file
```
## Installation

```bash
git clone https://github.com/entity12208/Tetris_AI.git
cd Tetris_AI
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install pygame
```
> **Note:** If you want to contribute or to train models from scratch, delete the `genomes/` directory; a fresh one will be generated during training.
## Training

```bash
python tetris.py
```

- Adjust training parameters directly in `tetris.py`.
- New genomes will be generated and improved over time.
## Playing a genome

```bash
python tetris_play_genome.py --genome-file genomes/best.json
```
## Genome format

Genomes are JSON objects describing heuristic weights and speed:

```json
{
  "id": 80,
  "genome": {
    "w_height": -0.62,
    "w_lines": 1.02,
    "w_holes": -1.24,
    "w_bump": -0.24,
    "speed": 9
  },
  "meta": {
    "score": 0,
    "games_played": 0,
    "lines_cleared_total": 0,
    "updated": "2025-10-02T18:16:33Z"
  }
}
```
- `genome` contains the actual weights used for evaluation.
- `id` and `meta` are optional metadata produced by the trainer.
- `tetris_play_genome.py` accepts both wrapper-format JSONs and plain genome dicts.
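Handling both formats amounts to unwrapping the `genome` key when it is present. Here is a minimal sketch, using a helper name (`load_genome`) of my own choosing rather than the actual `tetris_play_genome.py` code:

```python
import json

def load_genome(path):
    """Load a genome from either the wrapper format above or a plain dict."""
    with open(path) as f:
        data = json.load(f)
    # Wrapper format keeps the weights under a "genome" key;
    # a plain genome dict is returned as-is.
    return data["genome"] if "genome" in data else data

weights = load_genome("genomes/best.json")
print(weights["w_height"], weights.get("speed"))
```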
## Command-line options

`tetris_play_genome.py` supports multiple options for running genomes:

- `--genome` : inline JSON genome (string)
- `--genome-file` : path to a genome JSON file
- `--nes-weight` : weight applied to the NES scoring bonus (default: 0.02)
- `--headless` : run without rendering (faster, batch mode)
- `--games N` : number of games to run (default: 1)
- `--silent` : suppress per-game prints in headless mode
- `--quiet-all` : suppress all console output
- `--log-file PATH` : save per-game results as CSV
- `--show-window` : show the final game board (after headless runs)
- `--replay` : animate a replay of the last game when using `--show-window`
## Examples

Run with a visual window:

```bash
python tetris_play_genome.py --genome-file genomes/best.json
```

Run 100 headless games, logging results with no console output:

```bash
python tetris_play_genome.py --genome-file genomes/best.json --headless --games 100 --log-file results.csv --quiet-all
```

Replay the last headless game visually:

```bash
python tetris_play_genome.py --genome-file genomes/best.json --headless --games 10 --replay --show-window
```
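Once a batch run has written `results.csv` via `--log-file`, a few lines of Python can summarize it. This is only a sketch: the `score` column name is an assumption, so check the header your version of the script actually writes.

```python
import csv
import statistics

with open("results.csv") as f:
    rows = list(csv.DictReader(f))

print(f"games: {len(rows)}")
scores = [float(r["score"]) for r in rows]  # "score" is a hypothetical column name
print(f"mean: {statistics.mean(scores):.1f}  best: {max(scores):.0f}")
```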
## Scoring & heuristics

- NES-style scoring is used for line clears:
  - 1 line → 40
  - 2 lines → 100
  - 3 lines → 300
  - 4 lines → 1200
- Heuristic features:
  - Aggregate height
  - Lines cleared
  - Holes
  - Bumpiness
- Fitness is a combination of the heuristic evaluation and the NES score (scaled via `--nes-weight`); see the sketch below.
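The sketch below shows how such an evaluation is typically computed. The board representation (a 2-D grid of 0/1 cells, row 0 at the top) and the function names are illustrative assumptions, not the actual `tetris.py` internals:

```python
NES_SCORES = {0: 0, 1: 40, 2: 100, 3: 300, 4: 1200}

def board_features(grid):
    """Return (aggregate_height, holes, bumpiness) for a 0/1 grid."""
    rows, cols = len(grid), len(grid[0])
    heights, holes = [], 0
    for x in range(cols):
        height, seen_block = 0, False
        for y in range(rows):
            if grid[y][x]:
                if not seen_block:
                    height = rows - y          # first filled cell sets the column height
                    seen_block = True
            elif seen_block:
                holes += 1                     # empty cell buried under a filled one
        heights.append(height)
    bumpiness = sum(abs(heights[i] - heights[i + 1]) for i in range(cols - 1))
    return sum(heights), holes, bumpiness

def evaluate(genome, grid, lines_cleared, nes_weight=0.02):
    """Weighted heuristic score plus the NES bonus scaled by --nes-weight."""
    agg_height, holes, bumpiness = board_features(grid)
    heuristic = (genome["w_height"] * agg_height
                 + genome["w_lines"] * lines_cleared
                 + genome["w_holes"] * holes
                 + genome["w_bump"] * bumpiness)
    return heuristic + nes_weight * NES_SCORES.get(lines_cleared, 0)
```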
## Learning progression

The AI improves with every game it plays. At first it performs worse than a beginner, but it rapidly develops strategies and adapts. Over time, its skill level progresses through clear stages:
| Games Played | Intelligence Level |
|---|---|
| 0 – 10 | Level 0 (random, unfocused) |
| 10 – 100 | Level 1 (basic survival skills) |
| 100 – 1,000 | Level 2 (strategic play, consistent clears) |
| 1,000 – 5,000 | Level 3 (optimized stacking, high efficiency) |
| 5,000+ | Level 4 (increased foresight, near-perfect play) |
This progression shows how the AI evolves from random moves to mastery, and eventually surpasses human-level performance with enough training.
## Training tips

- Increase the population size for more diversity.
- Run more games per genome to reduce randomness.
- Tune the mutation rate (higher = more exploration, lower = more stability).
- Use elite retention to keep the top genomes each generation.
- Train in headless mode to speed up iteration.
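The sketch below shows how these knobs typically interact in an evolutionary loop. The parameter values and helper names are hypothetical, not the actual `tetris.py` implementation:

```python
import random

POPULATION_SIZE = 30    # larger population = more diversity
MUTATION_RATE = 0.2     # higher = more exploration, lower = more stability
MUTATION_SIZE = 0.1
ELITE_COUNT = 4         # elite retention: top genomes survive unchanged

WEIGHT_KEYS = ["w_height", "w_lines", "w_holes", "w_bump"]

def random_genome():
    return {k: random.uniform(-1, 1) for k in WEIGHT_KEYS}

def mutate(genome):
    child = dict(genome)
    for k in WEIGHT_KEYS:
        if random.random() < MUTATION_RATE:
            child[k] += random.gauss(0, MUTATION_SIZE)
    return child

def evolve(fitness, generations=100):
    """fitness(genome) should average scores over several games to reduce noise."""
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elites = population[:ELITE_COUNT]
        children = [mutate(random.choice(elites))
                    for _ in range(POPULATION_SIZE - ELITE_COUNT)]
        population = elites + children
    return max(population, key=fitness)
```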
Our AI shows consistent improvement through evolutionary training.
## Contributing

- Example genomes are in the `genomes/` directory. You can contribute your own via pull requests.
- If you encounter bugs, or have feature requests or ideas, open an issue on GitHub.
## License

This project is licensed under the MIT License. See `LICENSE` for details.