DistMLIP is an easy-to-use, efficient library for running graph-parallel, multi-GPU simulations using popular machine learning interatomic potentials (MLIPs).
DistMLIP currently supports zero-redundancy multi-GPU inference for MLIPs using graph parallelism. Unlike spatial partitioning (as in LAMMPS), no redundant computation is performed.
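To illustrate the idea (a toy sketch, not DistMLIP's actual partitioning code): in graph parallelism, every edge of the atomistic graph is assigned to exactly one device, so the combined work across devices equals the serial work; spatial decomposition, by contrast, duplicates "halo" atoms on neighboring domains.

```python
# Toy illustration of zero-redundancy graph partitioning (not DistMLIP code).
import random

n_atoms, n_devices = 1_000, 4
# A synthetic atomistic graph: ~8 directed edges (neighbors) per atom.
edges = [(i, random.randrange(n_atoms)) for i in range(n_atoms) for _ in range(8)]

# Assign each edge to the single device that owns its destination atom.
owner = lambda atom: atom * n_devices // n_atoms
partitions = [[] for _ in range(n_devices)]
for src, dst in edges:
    partitions[owner(dst)].append((src, dst))

# Zero redundancy: per-device edge counts sum exactly to the serial total.
assert sum(len(p) for p in partitions) == len(edges)
print([len(p) for p in partitions])
```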
DistMLIP currently supports the following models:
- CHGNet (via MatGL)
- TensorNet (via MatGL)
- MACE
- UMA (via FairChem)
🚧 This project is under active development
If you spot a bug, please raise an issue or notify us; we aim to respond to all messages within 12 hours.
- Install PyTorch: https://pytorch.org/get-started/locally/
- Install DGL (if using the MatGL models): https://www.dgl.ai/pages/start.html
- Install DistMLIP from pip:
TODO
or from source:
```bash
git clone git@github.com:AegisIK/DistMLIP.git
cd DistMLIP

# Only run ONE of the following installation commands:
pip install -e .[matgl]     # If you're using CHGNet or TensorNet
pip install -e .[mace]      # If you're using MACE
pip install -e .[fairchem]  # If you're using UMA

python setup.py build_ext --inplace
```

DistMLIP is a wrapper library designed to inherit from other models in order to provide distributed inference support. As a result, all features of the original package (whether it's MatGL, MACE, or UMA) still work. View one of our example notebooks here to get started.
Although fine-tuning is supported through DistMLIP, we recommend fine-tuning your model with the original model library first, then loading it into DistMLIP via `from_existing` and running distributed inference.
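A minimal sketch of that workflow under the MatGL route (`matgl.load_model` and the pretrained model name come from MatGL; the `DistCHGNet` wrapper class and its `num_devices` argument are hypothetical placeholders, since this README only names `from_existing` — see the example notebooks for the exact entry point):

```python
# Hypothetical sketch: `DistCHGNet` and `num_devices` are placeholder names;
# only the `from_existing` method is documented in this README.
import matgl

# Load (or fine-tune) the model with the original library first.
model = matgl.load_model("CHGNet-MPtrj-2023.12.1-PES")  # name from MatGL's model zoo

# Then hand the finished model to DistMLIP for multi-GPU inference.
dist_model = DistCHGNet.from_existing(model, num_devices=4)
```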
Currently, only single-node inference is supported. Multi-machine inference is future work.
- Distributing CHGNet
- Distributing TensorNet
- Distributing MACE
- Distributing UMA
- Multi-machine inference
- More work coming soon!
If you use DistMLIP in your research, please cite our paper:
```bibtex
@misc{han2025distmlipdistributedinferenceplatform,
  title={DistMLIP: A Distributed Inference Platform for Machine Learning Interatomic Potentials},
  author={Kevin Han and Bowen Deng and Amir Barati Farimani and Gerbrand Ceder},
  year={2025},
  eprint={2506.02023},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2506.02023},
}
```
If you would like to contribute or want us to parallelize your model, please either raise an issue or email kevinhan@cmu.edu.
- If you have any questions, feel free to raise an issue on this repo.
- If you have any feature requests, please raise an issue on this repo.
- For collaborations and partnerships, please email kevinhan@cmu.edu.
- All requests/issues/inquiries will receive a response within 6-12 hours.
