
graphcore-research/graphium-smg

Scaling molecular GNNs to infinity



A deep learning library focused on graph representation learning for real-world chemical tasks.

  • ✅ State-of-the-art GNN architectures.
  • 🐍 Extensible API: build your own GNN model and train it with ease.
  • ⚗️ Rich featurization: powerful and flexible built-in molecular featurization.
  • 🧠 Pretrained models: for fast and easy inference or transfer learning.
  • ⮔ Ready-to-use training loop based on PyTorch Lightning.
  • 🔌 Have a new dataset? Graphium provides a simple plug-and-play interface. Change the path, the names of the columns to predict, and the atomic featurization, and you’re ready to play! (See the sketch after this list.)
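As a hedged sketch of that plug-and-play idea, the overrides below are passed via the graphium-train CLI described later in this README. The key names (my_task, df_path, smiles_col, label_cols) are hypothetical stand-ins for whatever your datamodule config defines; consult the documentation for the actual schema.

# Hypothetical hydra overrides -- adapt the key names to your datamodule config
graphium-train \
  datamodule.args.task_specific_args.my_task.df_path=[path_to_your_data] \
  datamodule.args.task_specific_args.my_task.smiles_col=[smiles_column] \
  datamodule.args.task_specific_args.my_task.label_cols=[columns_to_predict] \
  model=gcn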

Documentation

Visit https://graphium-docs.datamol.io/.

Installation for developers

For CPU and GPU developers

Use mamba:

# Install Graphium's dependencies in a new environment named `graphium`
mamba env create -f env.yml -n graphium

# Install Graphium in dev mode
mamba activate graphium
pip install --no-deps -e .
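As an optional sanity check that the editable install is visible from the new environment:

# Should print the path to the graphium package inside your local checkout
python -c "import graphium; print(graphium.__file__)"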

For IPU developers

# Install Graphcore's SDK and Graphium dependencies in a new environment called `.graphium_ipu`
./install_ipu.sh .graphium_ipu

The above step needs to be done once. After that, enable the SDK and the environment as follows:

source enable_ipu.sh .graphium_ipu

Training a model

To learn how to train a model, see the documentation or the accompanying Jupyter notebooks.

If you are not familiar with PyTorch or PyTorch Lightning, we highly recommend going through their tutorials first.

Running an experiment

We have set up Graphium with hydra for managing config files. To run an experiment, go to the expts/ folder. For example, to benchmark a GCN on the ToyMix dataset, run

graphium-train dataset=toymix model=gcn

To change parameters specific to this experiment, such as switching from fp16 to fp32 precision, you can either override them directly on the CLI via

graphium-train dataset=toymix model=gcn trainer.trainer.precision=32

or change them permanently in the dedicated experiment config under expts/hydra-configs/toymix_gcn.yaml. Integrating hydra also allows you to quickly switch between accelerators. E.g., running

graphium-train dataset=toymix model=gcn accelerator=gpu

automatically selects the correct configs to run the experiment on GPU. Finally, you can also run a fine-tuning loop:

graphium-train +finetuning=admet

To use a config file you built from scratch, you can run

graphium-train --config-path [PATH] --config-name [CONFIG]

Thanks to the modular nature of hydra you can reuse many of our config settings for your own experiments with Graphium.
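Because hydra overrides compose, the options above can also be combined in a single call, e.g. (using only keys shown in this README):

graphium-train dataset=toymix model=gcn accelerator=gpu trainer.trainer.precision=32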

Preparing the data in advance

Data preparation, including featurization (e.g., converting molecules from SMILES to a PyG-compatible format), is embedded in the pipeline and is performed when executing graphium-train [...].

However, when working with larger datasets, it is recommended to perform the data preparation in advance on a machine with sufficient memory (e.g., ~400 GB in the case of LargeMix). Preparing data in advance is also beneficial when running many concurrent jobs with identical molecular featurization, so that resources aren't wasted and processes don't conflict when reading from and writing to the same directory.

The following commands first prepare the data and cache it, then train a model on the cached data.

# First prepare the data and cache it in `path_to_cached_data`
graphium data prepare ++datamodule.args.processed_graph_data_path=[path_to_cached_data]

# Then train the model on the prepared data
graphium-train [...] datamodule.args.processed_graph_data_path=[path_to_cached_data]

Note that datamodule.args.processed_graph_data_path can also be specified in the configs under expts/hydra-configs/.

Note that every time the config under datamodule.args.featurization changes, you will need to run the data preparation again; the new cache is automatically saved in a separate directory named with a hash unique to that config.
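As a sketch, re-running the preparation after such a change looks like the command below, where datamodule.args.featurization.[option]=[value] is a hypothetical placeholder for whichever featurization key you edited:

# The new cache lands in a fresh directory, hashed from the featurization config
graphium data prepare ++datamodule.args.processed_graph_data_path=[path_to_cached_data] datamodule.args.featurization.[option]=[value]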

License

Under the Apache-2.0 license. See LICENSE.

Diagrams

  • Diagram for data processing in Graphium.

Data Processing Chart

  • Diagram for the multi-task network in Graphium.

Full Graph Multi-task Network

About

Graphium fork for the Scaling Molecular GNNs project at Graphcore.
