PRORL: Proactive Resource Orchestrator for Open RANs Using Deep Reinforcement Learning
This repository contains the implementation of the PRORL framework, which leverages deep reinforcement learning for proactive resource management in Open RAN systems. The code supports multi-run scheduling, training, validation, and evaluation of the reinforcement learning agent.
If you use this code in your work, please cite our paper:
@ARTICLE{StaffolaniPRORL2024,
  author   = {Staffolani, Alessandro and Darvariu, Victor-Alexandru and Foschini, Luca and Girolami, Michele and Bellavista, Paolo and Musolesi, Mirco},
  journal  = {IEEE Transactions on Network and Service Management},
  title    = {PRORL: Proactive Resource Orchestrator for Open RANs Using Deep Reinforcement Learning},
  year     = {2024},
  volume   = {21},
  number   = {4},
  pages    = {3933-3944},
  keywords = {Resource management, Optimization, Cloud computing, Reinforcement learning, Costs, Dynamic scheduling, Copper, O-RAN, reinforcement learning, resource allocation, multi-objective optimization},
  doi      = {10.1109/TNSM.2024.3373606}
}
For questions or further information, please contact:
alessandro.staffolani@unibo.it
- Python: Version 3.11
- Docker: Required to set up Redis and MongoDB instances.
- Redis: Used to monitor run progress. A Docker deployment is provided.
- MongoDB: Used to store run configurations and summary results. A Docker deployment is provided.
- Clone the repository:
  git clone https://github.com/AlessandroStaffolani/prorl-orchestrator.git
  cd prorl-orchestrator
- Install Python dependencies:
  pip install -r requirements.txt
- Set up environment variables: use the .env.sample file as a template and create a .env file filled with your variables.
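The exact variables depend on your deployment; a minimal .env might look like the following. The variable names below are illustrative assumptions, not the authoritative list — use .env.sample as the source of truth.

```shell
# Hypothetical example values -- copy .env.sample and adapt it; the actual
# variable names are defined there, not here.
REDIS_HOST=localhost
REDIS_PORT=6379
MONGO_HOST=localhost
MONGO_PORT=27017
```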
- Start Docker Compose. The provided Docker Compose file sets up both Redis and MongoDB:
  docker compose -f docker/docker-compose.yml up -d
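To confirm both services started correctly, you can check the container status. This is a quick sanity check; the "redis" service name is an assumption based on typical compose files, so adjust it to match docker/docker-compose.yml.

```shell
# List the services defined in the compose file and their current state
docker compose -f docker/docker-compose.yml ps

# Ping Redis inside its container; a healthy instance replies "PONG"
# ("redis" is an assumed service name -- check the compose file)
docker compose -f docker/docker-compose.yml exec redis redis-cli ping
```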
- Prepare the dataset: follow the instructions in the tim-dataset-pipeline repository to set up the necessary dataset.
The repository supports scheduling runs, training, validation, and evaluation. Each of these processes is handled via the prorl.py command-line interface.
This command reads a configuration file and generates a set of runs. It stores the run configurations in MongoDB and enqueues a reference in Redis.
python prorl.py run scheduler multi-runs -mcp <config_file_path> -q <queue_name>
Replace <config_file_path> with the path to your configuration file and <queue_name> with the desired Redis queue name.
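After scheduling, you can sanity-check that the runs were enqueued. The sketch below assumes the queue is stored as a Redis list under the name you passed with -q — an assumption about PRORL's internals, not documented behavior — and uses a hypothetical queue name.

```shell
# Count the entries currently waiting in the scheduling queue
# (hypothetical queue name; adjust host/port to your .env settings)
redis-cli -h localhost -p 6379 llen my-training-queue
```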
After scheduling the runs, execute the training runs. This command processes runs sequentially until the queue is empty. You can also start multiple workers in parallel to speed up execution.
python prorl.py run worker run-worker -p 0 --stop-empty -q <queue_name>
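The note about parallel workers can be sketched as a simple shell loop; the queue name below is a hypothetical placeholder.

```shell
# Start four workers in parallel on the same queue; each one exits when
# the queue is empty thanks to --stop-empty (queue name is a placeholder)
for i in 1 2 3 4; do
  python prorl.py run worker run-worker -p 0 --stop-empty -q my-training-queue &
done
wait  # block until all background workers have finished
```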
Validation runs are used to test the performance of the agent on unseen data during training. Execute the following command to start the validation worker:
python prorl.py run worker val-run-worker -p 0 --stop-empty
To evaluate the performance of the trained agent on an additional set of unseen data, run:
python prorl.py run worker eval-run-worker -p 0 --stop-empty