
2.MAPPO_SMAC

MAPPO in StarCraft II Environment

This is a concise PyTorch implementation of MAPPO in the StarCraft II environment (SMAC, the StarCraft Multi-Agent Challenge).

How to use my code?

You can directly run 'MAPPO_SMAC_main.py' in your own IDE.

Training environments

You can set 'env_index' in the code to change the StarCraft II map. Here, we train on 3 maps:
env_index=0 corresponds to '3m'
env_index=1 corresponds to '8m'
env_index=2 corresponds to '2s_3z'
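The mapping above can be sketched as follows. This is a hypothetical illustration (the dictionary name `ENV_NAMES` and the helper `get_map_name` are not from the repo); it only shows how an integer 'env_index' could be translated into a SMAC map name before the environment is created.

```python
# Hypothetical sketch of the env_index -> map-name mapping described above.
# The names ENV_NAMES and get_map_name are illustrative, not from the repo.
ENV_NAMES = {0: "3m", 1: "8m", 2: "2s_3z"}

def get_map_name(env_index: int) -> str:
    """Return the SMAC map name for a given env_index."""
    if env_index not in ENV_NAMES:
        raise ValueError(f"Unknown env_index: {env_index}")
    return ENV_NAMES[env_index]

print(get_map_name(0))  # '3m'
```

In the actual code, the chosen map name would then be passed to the SMAC environment constructor.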

Requirements

python==3.7.9
numpy==1.19.4
pytorch==1.12.0
tensorboard==0.6.0
SMAC-StarCraft Multi-Agent Challenge

Training results

(Training results figure)

Reference

[1] Yu C, Velu A, Vinitsky E, et al. The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games. arXiv preprint arXiv:2103.01955, 2021.
[2] Official implementation of MAPPO
[3] EPyMARL