Policy Optimization for Dynamic Multi-Objective Reinforcement Learning

This repository contains the supplementary code for the paper:

Terekhov, M., & Gulcehre, C. In Search for Architectures and Loss Functions in Multi-Objective Reinforcement Learning. In ICML 2024 Workshop: Aligning Reinforcement Learning Experimentalists and Theorists.

Installation

For reproducible experiments, we use Docker containers for dependency management. See the installation readme for details on building the container within or outside the EPFL ecosystem. We also provide an environment.yml file for a conda environment, which can be used without a container but with weaker reproducibility guarantees.
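If you go the conda route, a minimal setup might look like the following (a sketch; the environment name is whatever is declared inside environment.yml and is not reproduced here):

```bash
# Create the environment from the provided file; conda reads the environment name from environment.yml
conda env create -f environment.yml

# Activate it using the name declared in environment.yml
conda activate <name-from-environment.yml>
```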

Overview

The implementation here provides the main algorithm as well as all actor/critic architectures described in the paper. The entry point to run our algorithm is the train_moe.py script. We use Hydra for configuration management. The default configuration can be found here.
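As a rough illustration of how a Hydra-based entry point is typically invoked (the script location and the override keys below are assumptions for illustration, not taken from this repository's config):

```bash
# Run with the default configuration
python train_moe.py

# Hydra allows overriding any config field from the command line, e.g. (hypothetical keys):
python train_moe.py seed=0 training.lr=3e-4
```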

Built With

This repository is based on the template for reproducible code by Skander Moalla. The code is written with the TorchRL library. We used MORL-baselines as a source of state-of-the-art algorithms for multi-objective reinforcement learning.

License

This project is licensed under the MIT License - see the LICENSE file for details.
