This repository is the implementation of the paper Meta Learning Task Representation in Multi-Agent Reinforcement Learning: from Global Inference to Local Inference.
We propose MG2L, a mutual-information-based Global-to-Local training scheme with a multi-level task encoder. A centralized global representation is learned by maximizing MI with the task, while agents minimize conditional MI reduction to align local representations with global context. MG2L provides a versatile solution for meta-MARL.
The source code of MAMuJoCo and MPE is included in this repository, but you still need to install OpenAI Gym, mujoco-py, RWARE, and MAgent yourself.
conda create -n mg2l python=3.8
conda activate mg2l
pip install gym==0.21.0 mujoco_py==2.1.2.14 omegaconf rware==1.0.3

You can run the experiments with the following command:

python train.py --expt=default --algo=mg2l --env=mujoco-cheetah-dir gpu_id=0

The --env flag accepts any existing config name in the mg2l/config/algo_config/ directory, and any other config option xx (such as gpu_id) can be passed as xx=value.
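For instance, several overrides can be combined in one invocation. This is a hedged sketch: the seed option below is illustrative and may not exist in this repository's configs; only the flags shown in the command above are documented.

```shell
# Run the documented cheetah-dir experiment on GPU 1.
# "seed" is a hypothetical override shown only to illustrate the
# xx=value syntax; substitute any option defined in mg2l/config/.
python train.py --expt=default --algo=mg2l --env=mujoco-cheetah-dir gpu_id=1 seed=42
```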
Our code is built upon MAPPO and MATE. We thank these authors for their nicely open-sourced code and their great contributions to the community.
@article{zhao2025mg2l,
  author={Zhao, Zijie and Fu, Yuqian and Chai, Jiajun and Zhu, Yuanheng and Zhao, Dongbin},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  title={Meta Learning Task Representation in Multiagent Reinforcement Learning: From Global Inference to Local Inference},
  year={2025},
  volume={36},
  number={8},
  pages={14908-14921}
}