Enhancing Cooperative Multi-Agent Reinforcement Learning with State Modelling and Adversarial Exploration (Accepted at ICML 2025)

Andreas Kontogiannis<sup>1,2,*</sup> · Konstantinos Papathanasiou<sup>3,*</sup> · Yi Shen<sup>4</sup> · Giorgos Stamou<sup>1</sup> · Michael M. Zavlanos<sup>4</sup> · George Vouros<sup>5</sup>

<sup>1</sup> National Technical University of Athens · <sup>2</sup> Archimedes AI · <sup>3</sup> ETH Zurich · <sup>4</sup> Duke University · <sup>5</sup> University of Piraeus



Abstract

Learning to cooperate in distributed, partially observable environments with no communication abilities poses significant challenges for multi-agent deep reinforcement learning (MARL). This paper addresses key concerns in this domain, focusing on inferring state representations from individual agent observations and leveraging these representations to enhance agents' exploration and collaborative task execution policies. To this end, we propose a novel state modelling framework for cooperative MARL, where agents infer meaningful belief representations of the non-observable state, with respect to optimizing their own policies, while filtering redundant and less informative joint state information. Building upon this framework, we propose the MARL SMPE algorithm. In SMPE, agents enhance their own policy's discriminative abilities under partial observability, explicitly by incorporating their beliefs into the policy network, and implicitly by adopting an adversarial type of exploration policy that encourages agents to discover novel, high-value states while improving the discriminative abilities of others. Experimentally, we show that SMPE outperforms state-of-the-art MARL algorithms in complex fully cooperative tasks from the MPE, LBF, and RWARE benchmarks.
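
For a concrete picture of the two ideas in the abstract, below is a minimal, illustrative sketch: each agent encodes a belief over the non-observable state from its observation history and conditions its policy on that belief, and an adversarial intrinsic bonus rewards states that teammates' state models explain poorly. This is not the repository's implementation; all module names, shapes, and the exact bonus definition are assumptions for illustration.

```python
# Illustrative sketch only (NOT the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BeliefAgent(nn.Module):
    def __init__(self, obs_dim, state_dim, n_actions, belief_dim=32, hidden=64):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden)           # summarise observation history
        self.mu = nn.Linear(hidden, belief_dim)          # belief mean
        self.log_std = nn.Linear(hidden, belief_dim)     # belief log-std
        self.decoder = nn.Linear(belief_dim, state_dim)  # reconstruct (filtered) state
        self.policy = nn.Linear(obs_dim + belief_dim, n_actions)

    def forward(self, obs, h):
        h = self.rnn(obs, h)
        mu, log_std = self.mu(h), self.log_std(h)
        belief = mu + log_std.exp() * torch.randn_like(mu)   # reparameterised sample
        logits = self.policy(torch.cat([obs, belief], dim=-1))  # belief-conditioned policy
        return logits, belief, h

    def state_error(self, belief, state):
        # How poorly this agent's belief explains the true state (training signal).
        return F.mse_loss(self.decoder(belief), state, reduction="none").mean(-1)

def adversarial_bonus(agents, beliefs, state):
    # Assumed intrinsic reward for agent i: the mean state-modelling error of the
    # OTHER agents, encouraging visits to states that are novel to teammates.
    errors = torch.stack([a.state_error(b, state) for a, b in zip(agents, beliefs)])
    n = len(agents)
    return (errors.sum(0, keepdim=True) - errors) / (n - 1)

# Toy usage with random tensors.
obs_dim, state_dim, n_actions, n_agents, batch = 8, 12, 5, 3, 4
agents = [BeliefAgent(obs_dim, state_dim, n_actions) for _ in range(n_agents)]
hiddens = [torch.zeros(batch, 64) for _ in range(n_agents)]
observations = [torch.randn(batch, obs_dim) for _ in range(n_agents)]
state = torch.randn(batch, state_dim)
outputs = [a(o, h) for a, o, h in zip(agents, observations, hiddens)]
beliefs = [belief for _, belief, _ in outputs]
bonus = adversarial_bonus(agents, beliefs, state)  # shape: [n_agents, batch]
```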

Paper: https://openreview.net/forum?id=TCsdlqzZNL

LBF command line

python3 main.py --config=smpe_lbf --env-config=gymma with env_args.time_limit=50 env_args.key="Foraging-2s-9x9-3p-2f-coop-v2"

MPE command line

python3 main.py --config=smpe_mpe --env-config=gymma with env_args.time_limit=25 env_args.key="mpe:SimpleSpread-v0"

RWARE command line

python3 main.py --config=smpe_lbf --env-config=gymma with env_args.time_limit=500 env_args.key="rware:rware-tiny-4ag-hard-v1"
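
The commands above use the Sacred-style `with key=value` override syntax common to EPyMARL-based codebases, so further configuration values can plausibly be appended the same way. The `seed` override below is an assumption shown for illustration; check the repository's config files for the exact keys:

python3 main.py --config=smpe_mpe --env-config=gymma with env_args.time_limit=25 env_args.key="mpe:SimpleSpread-v0" seed=42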

If you use SMPE in your research, please cite:

@inproceedings{kontogiannis2025enhancing,
  title={Enhancing Cooperative Multi-Agent Reinforcement Learning with State Modelling and Adversarial Exploration},
  author={Andreas Kontogiannis and Konstantinos Papathanasiou and Yi Shen and Giorgos Stamou and Michael M. Zavlanos and George Vouros},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://openreview.net/forum?id=TCsdlqzZNL}
}
