Deep Reinforcement Learning for Inverse Inorganic Materials Design

This repository contains the code and data for the paper Deep Reinforcement Learning for Inverse Inorganic Materials Design by Karpovich et al.

Installation Instructions

  • Clone this repository and navigate to it.
  • Create the conda environment for the PGN tasks: `conda env create --name PGN_env --file requirements_PGN.txt`
  • For the DQN tasks, follow the instructions in the repo linked under DQN (https://github.com/eltonpan/RL_materials_generation).
  • Create the conda environment for the DING tasks: `conda env create --name DING_env --file requirements_DING.txt`
  • Activate the environment that matches the notebook you are running: `conda activate <env_name>`
  • Register the environment as a Jupyter kernel: `python -m ipykernel install --name <env_name>`
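For convenience, the PGN and DING setup steps above can be run together as follows (the clone URL is assumed from the repository name; the commands themselves are the ones listed above):

```bash
# Clone the repository and enter it (URL assumed from the repo name)
git clone https://github.com/olivettigroup/deep-rl-inorganic.git
cd deep-rl-inorganic

# Create the PGN and DING environments from their requirements files
conda env create --name PGN_env --file requirements_PGN.txt
conda env create --name DING_env --file requirements_DING.txt

# Activate one environment and expose it to Jupyter as a kernel
conda activate PGN_env
python -m ipykernel install --name PGN_env
```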

Data

The full datasets used in the paper are available online. The data must be downloaded to an appropriate data folder and preprocessed before any of the notebooks can be run. The data used in this work comes from the following papers:

Usage

Each folder pertains to a particular task (synthesis route classification or synthesis condition prediction) and contains the associated Jupyter notebooks and Python code.

  • The PGN folder contains the code for the Policy Gradient Network (PGN) training and evaluation tasks.
    • For instructions on running the PGN files, see the PGN folder README.
  • The DQN folder contains the code for the Deep Q-Network (DQN) training and evaluation tasks, also available in a separate repo (https://github.com/eltonpan/RL_materials_generation).
    • For instructions on running the DQN models, see the linked repo.
  • The DING folder contains the code for the Deep Inorganic Material Generator (DING) training and evaluation tasks.
    • For instructions on running the DING models, see the DING folder README.
  • The utils folder contains Python code to facilitate model training and evaluation.
    • This includes methods to check the charge neutrality and electronegativity balance of generated inorganic formulas, as well as functions to compute metrics such as uniqueness, validity, and Element Mover's Distance (see the sketch below).
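As a rough illustration of two of these checks, the sketch below shows a charge-neutrality validity test and a uniqueness metric. This is not the repository's utils implementation; it assumes pymatgen is installed and uses its `Composition.oxi_state_guesses()`, which enumerates charge-balanced oxidation-state assignments for a formula.

```python
# Minimal sketch (not the repository's utils code) of a charge-neutrality
# check and a uniqueness metric, assuming pymatgen is available.
from pymatgen.core import Composition


def is_charge_neutral(formula: str) -> bool:
    """True if at least one charge-balanced oxidation-state assignment exists."""
    try:
        return len(Composition(formula).oxi_state_guesses()) > 0
    except Exception:
        return False  # unparsable formula or no neutral assignment found


def uniqueness(formulas: list[str]) -> float:
    """Fraction of distinct formulas after reducing each composition,
    so that e.g. Fe4O6 and Fe2O3 count as the same material."""
    reduced = {Composition(f).reduced_formula for f in formulas}
    return len(reduced) / len(formulas) if formulas else 0.0


if __name__ == "__main__":
    print(is_charge_neutral("Fe2O3"))              # True (Fe3+ / O2- balances)
    print(uniqueness(["Fe2O3", "Fe4O6", "NaCl"]))  # 2/3
```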

Cite

If you use or adapt this code in your work, please cite it as:

@article{karpovich2024deep,
  title={Deep reinforcement learning for inverse inorganic materials design},
  author={Karpovich, Christopher and Pan, Elton and Olivetti, Elsa A},
  journal={npj Computational Materials},
  volume={10},
  number={1},
  pages={287},
  year={2024},
  publisher={Nature Publishing Group UK London}
}

Disclaimer

This is research code shared without support or guarantee of quality. Please report any issues found by opening an issue in this repository.
