RL-ADN is a Python library for deep reinforcement learning research on energy storage dispatch in active distribution networks. It packages network data, environment logic, baseline optimization code, and fast Laurent power-flow utilities used in the accompanying research line.
Install runtime dependencies:

```shell
py -3 -m pip install -r requirements.txt
```

Install the development toolchain:

```shell
py -3 -m pip install -r requirements-dev.txt
```

Run the lightweight test suite:

```shell
py -3 -m pytest tests -q -m "not powerflow"
```

Some power-flow validation tests require pandapower. If it is not installed, those tests are skipped automatically.
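The automatic skip can be implemented with a simple import probe. The helper below is a sketch of that pattern, not the repository's actual conftest code; the `powerflow` marker name comes from the `-m "not powerflow"` filter above, and the commented `skipif` line shows how a test module could use the probe:

```python
import importlib.util


def has_pandapower() -> bool:
    """Return True when pandapower is importable in this interpreter."""
    return importlib.util.find_spec("pandapower") is not None


# A power-flow test module could guard itself like this (sketch only):
# pytestmark = pytest.mark.skipif(
#     not has_pandapower(), reason="pandapower not installed"
# )
```

`find_spec` only checks that the package is locatable, so the probe stays cheap and does not actually import pandapower.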
```python
from rl_adn import PowerNetEnv, make_env_config

config = make_env_config()
env = PowerNetEnv(config)
state = env.reset()
```

Run the script-style quickstart:
```shell
py -3 examples/quickstart_env.py
```

Repository layout:

- `rl_adn/`: package source code
- `tests/`: smoke and domain validation tests
- `examples/`: script-first quickstart plus notebooks
- `docs/`: Sphinx documentation sources
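The quickstart only resets the environment; a training run repeatedly steps it through an episode. The sketch below uses a hypothetical `EchoStorageEnv` stand-in with a Gym-style `reset`/`step` interface, since `PowerNetEnv`'s exact step signature is not shown above:

```python
import random


class EchoStorageEnv:
    """Toy stand-in: the state is a single state-of-charge value in [0, 1]."""

    def __init__(self, horizon: int = 24):
        self.horizon = horizon
        self.t = 0
        self.soc = 0.5

    def reset(self) -> float:
        self.t = 0
        self.soc = 0.5
        return self.soc

    def step(self, action: float):
        # Apply the charge/discharge action and clip state of charge to [0, 1].
        self.soc = min(1.0, max(0.0, self.soc + 0.1 * action))
        self.t += 1
        reward = -abs(action)          # placeholder cost term
        done = self.t >= self.horizon
        return self.soc, reward, done, {}


env = EchoStorageEnv()
state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = random.uniform(-1.0, 1.0)   # random policy, for illustration only
    state, reward, done, info = env.step(action)
    total_reward += reward
```

The same loop shape applies when a DRL agent replaces the random policy: the agent maps `state` to `action` and learns from the returned `reward`.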
- Flexible active distribution network environment modeling
- Laurent power flow solver for faster training-time simulation
- DRL algorithms and optimization baselines in the same repository
- Bundled network and time-series datasets for reproducible experiments
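For context on the power-flow feature above, a conventional fixed-point (backward/forward) sweep on a two-bus feeder looks like the following. All numeric values are illustrative, and this is a generic iteration for comparison, not the library's Laurent formulation:

```python
# Two-bus power flow in per-unit via fixed-point iteration, pure Python.
z_line = complex(0.01, 0.05)   # series impedance of the feeder, p.u.
s_load = complex(0.8, 0.3)     # constant-power load at bus 2, p.u.
v_slack = complex(1.0, 0.0)    # slack-bus voltage, p.u.

v = v_slack                    # initial guess for the load-bus voltage
for _ in range(50):
    # Backward step: load current from S = V * conj(I).
    i_load = (s_load / v).conjugate()
    # Forward step: voltage drop across the line impedance.
    v_new = v_slack - z_line * i_load
    if abs(v_new - v) < 1e-10:
        v = v_new
        break
    v = v_new

print(f"|V2| = {abs(v):.4f} p.u.")
```

Solvers like this are rerun at every environment step during training, which is why a faster linearized formulation matters for simulation throughput.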
The library was originally released alongside the RL-ADN research paper on optimal battery dispatch in distribution networks. The `codex/develop` branch is being used to harden the repository into a cleaner, long-lived development branch for future extensions.
- Run `examples/quickstart_env.py` for the minimal package-backed environment flow.
- Open `examples/Customize_env.ipynb` to understand configuration customization.
- Open the DDPG training notebook once the environment baseline is clear.
For tutorial-style material, see the project wiki.