
Multiverse: The First AI Multiplayer World Model

🌐 Enigma-AI website - 📚 Technical Blog - 🤗 Model on Huggingface - 🤗 Datasets on Huggingface - 𝕏 Multiverse Tweet

Demo: two human players driving cars in Multiverse.

Installation

git clone https://github.com/EnigmaLabsAI/multiverse
cd multiverse
pip install -r requirements.txt
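
Assuming requirements.txt installs PyTorch (the model scripts are PyTorch-based), the quick sanity check below (not part of the repo) shows which compute backend is available on your machine:

```python
# Quick sanity check (not part of the repo): which PyTorch backend is available?
import torch

if torch.cuda.is_available():
    device = "cuda"          # NVIDIA GPU
elif torch.backends.mps.is_available():
    device = "mps"           # Apple Silicon (see the MPS fallback note below)
else:
    device = "cpu"
print(f"PyTorch {torch.__version__} will use: {device}")
```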

Running the model

python src/play.py --compile

Note: on Apple Silicon you must enable CPU fallback for the MPS backend by running PYTORCH_ENABLE_MPS_FALLBACK=1 python src/play.py
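
Alternatively, the fallback can be enabled from Python, as long as the variable is set before torch is imported:

```python
# Equivalent to prefixing the command with PYTORCH_ENABLE_MPS_FALLBACK=1.
# Set the variable before importing torch so the MPS backend picks it up.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # MPS ops without a native kernel now fall back to the CPU
```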

When you run this command, the controls are displayed. Press Enter to start.

The game will then start:

  • To control the silver car at the top of the screen, use the arrow keys.
  • To control the blue car at the bottom, use the WASD keys.
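
Both players share a single keyboard. Purely as an illustration (the real bindings are defined by the game script, and these names are hypothetical), the split can be pictured as a key-to-(player, action) table:

```python
# Hypothetical illustration of two players sharing one keyboard.
# The actual bindings are defined by the game script, not by this table.
KEY_BINDINGS = {
    # player 1: silver car, top of the screen (arrow keys)
    "Up": (1, "accelerate"), "Down": (1, "brake"),
    "Left": (1, "steer_left"), "Right": (1, "steer_right"),
    # player 2: blue car, bottom of the screen (WASD)
    "w": (2, "accelerate"), "s": (2, "brake"),
    "a": (2, "steer_left"), "d": (2, "steer_right"),
}

def resolve(key):
    """Return (player, action) for a pressed key, or None if the key is unbound."""
    return KEY_BINDINGS.get(key)

print(resolve("Up"), resolve("a"))  # (1, 'accelerate') (2, 'steer_left')
```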



Training

Multiverse comprises two models:

  • Denoiser - a world model that simulates the game
  • Upsampler - a model that takes the frames from the denoiser and increases their resolution
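
Conceptually, the two models form a pipeline: the denoiser rolls out low-resolution frames of the game conditioned on both players' actions, and the upsampler raises each frame to full resolution. The sketch below only illustrates that data flow, using the resolutions quoted in the Datasets section; the classes and names are placeholders, not the repository's actual API.

```python
# Conceptual sketch of the denoiser -> upsampler pipeline.
# Placeholder modules and names; not the repository's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Denoiser(nn.Module):
    """Stand-in world model: predicts the next low-res frame from the previous one and the players' actions."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)

    def forward(self, prev_frame, actions):
        # The real model conditions on a history of frames and both players' actions;
        # here a broadcast action map keeps the sketch self-contained and runnable.
        action_map = actions.mean(dim=1).view(-1, 1, 1, 1).expand_as(prev_frame)
        return self.net(torch.cat([prev_frame, action_map], dim=1))

class Upsampler(nn.Module):
    """Stand-in upsampler: raises a 48x64 frame to the 350x530 full resolution."""
    def forward(self, low_res):
        return F.interpolate(low_res, size=(350, 530), mode="bilinear", align_corners=False)

denoiser, upsampler = Denoiser(), Upsampler()
frame = torch.zeros(1, 3, 48, 64)     # one low-res frame (48x64, as in the dataset)
actions = torch.zeros(1, 4)           # dummy joint action vector for both players
low_res = denoiser(frame, actions)    # stage 1: simulate the next low-res frame
high_res = upsampler(low_res)         # stage 2: increase resolution
print(low_res.shape, high_res.shape)  # torch.Size([1, 3, 48, 64]) torch.Size([1, 3, 350, 530])
```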

Denoiser training

1. Download the dataset

Download the Denoiser's training set from 🤗 Huggingface.

2. Process data for training

Run the command:

python src/process_denoiser_files.py <folder_with_dataset_files_from_step_one> <folder_to_store_processed_data>

3. Edit training configuration

Edit config/env/racing.yaml and set:

  • path_data_low_res to <folder_to_store_processed_data>/low_res
  • path_data_full_res to <folder_to_store_processed_data>/full_res

Edit config/training.yaml to train the denoiser:

train_model: denoiser
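
If you prefer to script these edits rather than make them by hand, a small PyYAML patch like the sketch below works. It assumes the keys sit at the top level of each YAML file (adjust the lookups if they are nested), and the processed-data path is a placeholder:

```python
# Optional sketch: apply the config edits from step 3 with PyYAML.
# Assumes the keys are top-level in each YAML file; adjust if they are nested.
import yaml

def patch_yaml(path, updates):
    with open(path) as f:
        cfg = yaml.safe_load(f) or {}
    cfg.update(updates)
    with open(path, "w") as f:
        yaml.safe_dump(cfg, f, sort_keys=False)

processed = "/path/to/processed_data"  # <folder_to_store_processed_data> from step 2
patch_yaml("config/env/racing.yaml", {
    "path_data_low_res": f"{processed}/low_res",
    "path_data_full_res": f"{processed}/full_res",
})
patch_yaml("config/training.yaml", {"train_model": "denoiser"})
```

The upsampler recipe below follows the same steps, with train_model set to upsampler instead.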

4. Launch training run

You can then launch a training run with python src/main.py.

Upsampler training

1. Download the dataset

Download the Upsampler's training set from 🤗 Huggingface.

2. Process data for training

Run the command:

python src/process_upsampler_files.py <folder_with_dataset_files_from_step_one> <folder_to_store_processed_data>

3. Edit training configuration

Edit config/env/racing.yaml and set:

  • path_data_low_res to <folder_to_store_processed_data>/low_res
  • path_data_full_res to <folder_to_store_processed_data>/full_res

Edit config/training.yaml to train the upsampler:

train_model: upsampler

4. Launch training run

You can then launch a training run with python src/main.py.


Datasets

  1. We've collected over 4 hours of multiplayer (1v1) footage from Gran Turismo 4 at a resolution of 48x64 (per player): 🤗 Huggingface link.

  2. A sparse sampling of full-resolution, cropped frames is available for training the upsampler at a resolution of 350x530: 🤗 Huggingface link.

The datasets contain a variety of situations: acceleration, braking, overtakes, crashes, and expert driving for both players. You can read about the data collection mechanism here.

Note: The full resolution dataset is only for upsampler training and is not fit for world model training.


Outside resources
