Flocking with Selective Evolutionary Multi-Agent Reinforcement Learning (SEMARL) Algorithm

This is a PyTorch implementation of the Selective Evolutionary Multi-Agent Reinforcement Learning (SEMARL) algorithm. The corresponding papers are "Cooperation and Competition: Flocking with Evolutionary Multi-Agent Reinforcement Learning" (accepted at the 29th ICONIP; conference version) and "SEMARL: Selective Evolutionary Multi-Agent Reinforcement Learning for Improving Cooperative Flocking with Competition" (submitted; journal version).

Algorithms

  • SEMARL (Selective Evolutionary Multi-Agent Reinforcement Learning, proposed; see the sketch after this list)
  • MADDPG (Multi-Agent Deep Deterministic Policy Gradient)
  • COMA (Counterfactual Multi-Agent Policy Gradient)
  • IQL (Independent Q-Learning with deep neural networks)
  • SQDDPG (Shapley Q-value Deep Deterministic Policy Gradient, not implemented)
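
For intuition, a selective evolutionary step can be pictured as periodically ranking agents by episode return and overwriting the weakest ("junior") policies with perturbed copies of the strongest ("senior") ones. The sketch below is a hypothetical illustration under that assumption, written against PyTorch nn.Module policies; it is not the exact update used in the paper, and the function name selective_evolution is invented for illustration.

import copy
import torch

def selective_evolution(agents, returns, n_senior, n_junior, noise_std=0.01):
    # Hypothetical sketch: rank agents by return, then replace each
    # junior (low-return) policy with a noisy copy of a senior policy.
    # This is illustrative only, not the paper's exact mechanism.
    order = sorted(range(len(agents)), key=lambda i: returns[i], reverse=True)
    seniors = order[:n_senior]    # top-ranked agents
    juniors = order[-n_junior:]   # bottom-ranked agents
    for j, s in zip(juniors, seniors):
        agents[j].load_state_dict(copy.deepcopy(agents[s].state_dict()))
        with torch.no_grad():
            for p in agents[j].parameters():
                # Small Gaussian perturbation keeps the copied policies diverse.
                p.add_(noise_std * torch.randn_like(p))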

Requirements

  • python=3.8.5
  • torch>=1.13.1

Alternatively, download the prepared Python environment directly: LG-CS.zip (extraction code: MARL).
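
To confirm the installed environment meets these requirements, a quick check (assuming only that PyTorch is importable):

python -c "import torch; print(torch.__version__)"  # expect >= 1.13.1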

Training Agents with SEMARL

Once the LG-CS Python environment is loaded, use the following command to train 15 agents (5 senior agents and 5 junior agents):

python main.py --n=15 --n-senior=5 --n-junior=5
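
For reference, these flags plausibly map onto an argparse setup like the sketch below (hypothetical; the actual main.py may define additional options such as the environment configuration):

import argparse

# Hypothetical sketch of the flag parsing in main.py; the flag names follow
# the command above, but the defaults and help texts are assumptions.
parser = argparse.ArgumentParser(description="Train SEMARL agents in the flocking environment")
parser.add_argument("--n", type=int, default=15, help="total number of agents")
parser.add_argument("--n-senior", type=int, default=5, help="number of senior agents")
parser.add_argument("--n-junior", type=int, default=5, help="number of junior agents")
args = parser.parse_args()  # e.g., args.n_senior == 5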
