This repository contains the code and data for the paper Deep Reinforcement Learning for Inverse Inorganic Materials Design by Karpovich et al.
- Clone this repository and navigate to it.
- Create the conda environment for the PGN tasks:

    conda env create --name PGN_env --file requirements_PGN.txt

- Create the conda environment for the DING tasks:

    conda env create --name DING_env --file requirements_DING.txt

- For DQN tasks, follow the instructions in the linked `DQN` repo (https://github.com/eltonpan/RL_materials_generation).
- Activate the environment that matches the notebook you are running:

    conda activate <env_name>

- Add the environment to Jupyter:

    python -m ipykernel install --name <env_name>
The full datasets used in the paper are available online. The data must be downloaded to an appropriate data folder and preprocessed before any of the notebooks can be run. The data used in this work comes from the following papers:
- Kononova, O., Huo, H., He, T., Rong, Z., Botari, T., Sun, W., Tshitoyan, V. and Ceder, G. Text-mined dataset of inorganic materials synthesis recipes. Sci Data 6, 203 (2019). (https://doi.org/10.1038/s41597-019-0224-1)
- Github link to dataset: (https://github.com/CederGroupHub/text-mined-synthesis_public)
- Jain, A., Ong, S. P., Hautier, G., Chen, W., Richards, W. D., Dacek, S., ... & Persson, K. A. Commentary: The Materials Project: A materials genome approach to accelerating materials innovation. APL Materials 1, 011002 (2013).
- Link to Materials Project: (https://next-gen.materialsproject.org/)
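The text-mined synthesis dataset above is distributed as a single JSON file of recipe records. A minimal loading sketch is shown below; the file name and `data/` location are placeholders for wherever you place the download, and the top-level `"reactions"` key is an assumption about the file's layout:

```python
import json
from pathlib import Path

def load_recipes(path: Path) -> list:
    """Load the text-mined synthesis dataset (a JSON file of recipe records)."""
    with open(path) as f:
        data = json.load(f)
    # The public dataset nests its recipe list under a "reactions" key;
    # fall back to the top level in case the layout differs.
    if isinstance(data, dict) and "reactions" in data:
        return data["reactions"]
    return data

# Example usage (placeholder path -- point it at your downloaded file):
# recipes = load_recipes(Path("data") / "solid-state_dataset.json")
# print(f"Loaded {len(recipes)} synthesis records")
```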
Each folder pertains to a particular task (synthesis route classification or synthesis condition prediction) and contains the associated Jupyter notebooks and Python code.
- The `PGN` folder contains the necessary code for the Policy Gradient Network (PGN) training and evaluation tasks.
  - For instructions to run the PGN files, see the `PGN` folder README.
- The `DQN` folder contains the necessary code for the Deep Q-Network (DQN) training and evaluation tasks, also available in a separate repo (https://github.com/eltonpan/RL_materials_generation).
  - For instructions to run the DQN models, see the linked repo.
- The `DING` folder contains the necessary code for the Deep Inorganic Material Generator (DING) training and evaluation tasks.
  - For instructions to run the DING models, see the `DING` folder README.
- The `utils` folder contains Python code to facilitate model training and evaluation.
  - This includes methods to check charge neutrality and electronegativity balance of generated inorganic formulas, as well as functions to calculate metrics such as uniqueness, Element Mover's Distance, and validity.
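As an illustration of the first of these checks, a toy charge-neutrality test might look like the following. This is a sketch, not the repository's implementation: the oxidation-state table is a small hand-picked subset, and the parser only handles flat formulas like `BaTiO3` (no parentheses or hydrates).

```python
import re
from itertools import product

# Small illustrative table of common oxidation states (not exhaustive).
OXIDATION_STATES = {
    "Li": [1], "Na": [1], "K": [1],
    "Mg": [2], "Ca": [2], "Ba": [2],
    "Ti": [2, 3, 4], "Fe": [2, 3], "Cu": [1, 2],
    "O": [-2], "S": [-2, 4, 6], "F": [-1], "Cl": [-1],
}

def parse_formula(formula: str) -> dict:
    """Parse a flat formula like 'BaTiO3' into {element: count}."""
    counts = {}
    for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if el:
            counts[el] = counts.get(el, 0) + (int(n) if n else 1)
    return counts

def is_charge_neutral(formula: str) -> bool:
    """True if some assignment of oxidation states sums to zero net charge."""
    counts = parse_formula(formula)
    elements = list(counts)
    if not elements or any(el not in OXIDATION_STATES for el in elements):
        return False  # element missing from this toy table
    # Try every combination of allowed oxidation states.
    for states in product(*(OXIDATION_STATES[el] for el in elements)):
        if sum(s * counts[el] for s, el in zip(states, elements)) == 0:
            return True
    return False
```

For example, `is_charge_neutral("BaTiO3")` succeeds via Ba(+2) + Ti(+4) + 3xO(-2) = 0, while `is_charge_neutral("NaCl2")` fails because no allowed state assignment balances. A production version would use a complete oxidation-state table (e.g. via pymatgen) and a fuller formula parser.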
If you use or adapt this code in your work, please cite it as:
@article{karpovich2024deep,
title={Deep reinforcement learning for inverse inorganic materials design},
author={Karpovich, Christopher and Pan, Elton and Olivetti, Elsa A},
journal={npj Computational Materials},
volume={10},
number={1},
pages={287},
year={2024},
publisher={Nature Publishing Group UK London}
}
This is research code shared without support or guarantee of quality. Please report any issues found by opening an issue in this repository.