Concise PyTorch implementations of MARL algorithms, including MAPPO, MADDPG, MATD3, QMIX and VDN.
python==3.7.9
numpy==1.19.4
pytorch==1.5.0
tensorboard==0.6.0
gym==0.10.5
Multi-Agent Particle-World Environment (MPE)
SMAC (StarCraft Multi-Agent Challenge)
To facilitate switching between the discrete and continuous action spaces in the MPE environments, we made some small modifications to the MPE source code.
We added a boolean argument named 'discrete' to 'make_env.py'.
We also added the same 'discrete' argument to 'environment.py'.
If you want to use the discrete action space mode, call 'env = make_env(scenario_name, discrete=True)'.
If you want to use the continuous action space mode, call 'env = make_env(scenario_name, discrete=False)'.
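For reference, the way such a 'discrete' flag can be threaded from 'make_env' into the environment constructor can be sketched roughly as follows. This is a minimal self-contained stand-in, not the actual MPE source: `MultiAgentEnv`, `DiscreteSpace`, and `BoxSpace` here are simplified placeholders for the real environment class and the gym `Discrete`/`Box` spaces.

```python
class DiscreteSpace:
    """Stand-in for gym.spaces.Discrete: actions are integers 0..n-1."""
    def __init__(self, n):
        self.n = n

class BoxSpace:
    """Stand-in for gym.spaces.Box: actions are real vectors in [low, high]."""
    def __init__(self, low, high, shape):
        self.low, self.high, self.shape = low, high, shape

class MultiAgentEnv:
    """Simplified placeholder for the MPE environment class."""
    def __init__(self, n_agents, discrete=True):
        # Each agent's action space type is selected by the flag
        # forwarded from make_env().
        if discrete:
            self.action_space = [DiscreteSpace(5) for _ in range(n_agents)]
        else:
            self.action_space = [BoxSpace(-1.0, 1.0, (5,)) for _ in range(n_agents)]

def make_env(scenario_name, discrete=True):
    # The real make_env also loads the scenario; here we only show
    # how the boolean flag is forwarded to the environment.
    return MultiAgentEnv(n_agents=3, discrete=discrete)

env = make_env("simple_spread", discrete=True)
print(type(env.action_space[0]).__name__)   # DiscreteSpace
env = make_env("simple_spread", discrete=False)
print(type(env.action_space[0]).__name__)   # BoxSpace
```

With this wiring, training code only needs to flip one argument to switch action-space modes; everything downstream reads the per-agent 'action_space' list as usual.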