
Commit

Merge pull request #438 from datamol-io/website
Updating the website
DomInvivo authored Aug 18, 2023
2 parents 697d7f2 + 56b4a43 commit 64c1cde
Showing 10 changed files with 167 additions and 126 deletions.
7 changes: 7 additions & 0 deletions docs/baseline.md
@@ -23,3 +23,10 @@ One can observe that the smaller datasets (`Zinc12k` and `Tox21`) benefit from
| **Tox21** | GCN | 0.202 ± 0.005 | 0.773 ± 0.006 | 0.334 ± 0.03 | **0.176 ± 0.001** | **0.850 ± 0.006** | 0.446 ± 0.01 |
| | GIN | 0.200 ± 0.002 | 0.789 ± 0.009 | 0.350 ± 0.01 | 0.176 ± 0.001 | 0.841 ± 0.005 | 0.454 ± 0.009 |
| | GINE | 0.201 ± 0.007 | 0.783 ± 0.007 | 0.345 ± 0.02 | 0.177 ± 0.0008 | 0.836 ± 0.004 | **0.455 ± 0.008** |

# LargeMix Baseline
Coming soon!

# UltraLarge Baseline
Coming soon!

30 changes: 23 additions & 7 deletions docs/contribute.md
@@ -1,20 +1,36 @@
# Contribute

The below documents the development lifecycle of Graphium.
We are happy to see that you want to contribute 🤗.
Feel free to open an issue or pull request at any time. But first, follow this page to install Graphium in dev mode.

## Setup a dev environment
## Installation for developers

### For CPU and GPU developers

Use [`mamba`](https://github.com/mamba-org/mamba), a preferred alternative to conda, to create your environment:

```bash
mamba env create -n graphium -f env.yml
mamba activate graphium
# Install Graphium's dependencies in a new environment named `graphium`
mamba env create -f env.yml -n graphium

# Install Graphium in dev mode
mamba activate graphium
pip install --no-deps -e .
```

## Run tests
### For IPU developers

Download the SDK and use PyPI to create your environment:

```bash
# Install Graphcore's SDK and Graphium dependencies in a new environment called `.graphium_ipu`
./install_ipu.sh .graphium_ipu
```

The above step needs to be done once. After that, enable the SDK and the environment as follows:

```bash
pytest
source enable_ipu.sh .graphium_ipu
```

## Build the documentation
@@ -23,5 +39,5 @@ You can build and serve the documentation locally with:

```bash
# Build and serve the doc
mike serve
mkdocs serve
```
Binary file added docs/dataset_abstract.png
85 changes: 68 additions & 17 deletions docs/datasets.md
@@ -1,27 +1,78 @@
# Graphium Datasets

Graphium datasets are hosted on Google Cloud Storage at `gs://graphium-public/datasets`. Graphium provides convenient utility functions to list and download those datasets:
Graphium datasets are hosted on Zenodo at [this link](https://zenodo.org/record/8206704).

```python
import graphium
Instead of providing datasets as single entities, our aim is to provide dataset mixes containing a variety of datasets that are meant to be predicted simultaneously using multi-tasking.

dataset_dir = "/my/path"
data_path = graphium.data.utils.download_graphium_dataset("graphium-zinc-micro", output_path=dataset_dir)
print(data_path)
# /my/path/graphium-zinc-micro
```
They are described visually in the image below, with detailed descriptions in the sections that follow.
![Visual description of the ToyMix, LargeMix, UltraLarge datasets](dataset_abstract.png)

## `graphium-zinc-micro`
## ToyMix (QM9 + Tox21 + Zinc12K)

ADD DESCRIPTION.
The ***ToyMix*** dataset combines the ***QM9***, ***Tox21***, and ***Zinc12K*** datasets. These datasets are well known in the literature and are used as toy datasets, i.e., very simple datasets, in various contexts to enable fast iteration on models. By regrouping toy datasets from quantum ML, drug discovery, and GNN expressivity, we hope that performance on ***ToyMix*** will be representative of the performance we can expect on the larger datasets.

- Number of molecules: xxx
- Label columns: xxx
- Split informations.
### Train/Validation/Test Splits
for all the datasets in ***ToyMix*** are split randomly with a ratio of 0.8/0.1/0.1. Random splitting is used since it is the simplest and fits the idea of having a toy dataset well.
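
As an illustration, such a random split can be sketched in a few lines of NumPy (the dataset size and seed below are placeholders, not the values used to build ***ToyMix***):

```python
import numpy as np

n = 12_000                       # placeholder number of molecules
rng = np.random.default_rng(42)  # placeholder seed
perm = rng.permutation(n)

n_train, n_val = int(0.8 * n), int(0.1 * n)
train_idx = perm[:n_train]               # 80% train
val_idx = perm[n_train:n_train + n_val]  # 10% validation
test_idx = perm[n_train + n_val:]        # 10% test
```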

## `graphium-zinc-bench-gnn`
### QM9
is a well-known dataset in the field of 3D GNNs. It consists of 19 graph-level quantum properties associated with an energy-minimized 3D conformation of the molecules [1]. It is considered a simple dataset since all the molecules have at most 9 heavy atoms. We chose QM9 in our ***ToyMix*** since it is very similar to the larger proposed quantum datasets, PCQM4M\_multitask and PM6\_83M, but with smaller molecules.

ADD DESCRIPTION.

- Number of molecules: xxx
- Label columns: xxx
- Split informations.
### Tox21
is a well-known dataset for researchers in machine learning for drug discovery [2]. It consists of a multi-label classification task with 12 labels, with most labels missing and a strong imbalance towards the negative class. We chose ***Tox21*** in our ***ToyMix*** since it is very similar to the larger proposed bioassay dataset, ***PCBA\_1328\_1564k***, both in terms of sparsity and imbalance, and to the ***L1000*** datasets in terms of imbalance.

### ZINC12k
is a well-known dataset for researchers in GNN expressivity [3]. We include it in our ***ToyMix*** since GNN expressivity is very important for performance on large-scale data. Hence, we hope that the performance on this task will correlate well with the performance when scaling.

## LargeMix (PCQM4M + PCBA1328 + L1000)
In this section, we present the ***LargeMix*** dataset, comprising four different datasets with tasks taken from quantum chemistry (***PCQM4M***), bio-assays (***PCBA***), and transcriptomics (***L1000***).

### Train/validation/test/test\_seen splits
For the ***PCQM4M\_G25\_N4***, we create a 0.92/0.04/0.04 split. Then, for all the other datasets in ***LargeMix***, we first create a "test\_seen" split by taking the set of molecules from ***L1000*** and ***PCBA1328*** that are also present in the training set of ***PCQM4M\_G25\_N4***, such that we can evaluate whether having the quantum properties of a molecule helps generalize for biological properties. For the remaining parts, we split randomly with a ratio of 0.92/0.04/0.04.
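
The "test\_seen" construction is essentially a set intersection on molecule identifiers; a minimal sketch, assuming molecules are keyed by canonical SMILES strings, could look like:

```python
# Sketch only: assumes each dataset is represented as an iterable of canonical SMILES strings.
def split_test_seen(dataset_smiles, pcqm4m_train_smiles):
    """Separate molecules already present in the PCQM4M_G25_N4 training set."""
    seen = set(pcqm4m_train_smiles)
    test_seen = [s for s in dataset_smiles if s in seen]
    remaining = [s for s in dataset_smiles if s not in seen]  # then split 0.92/0.04/0.04 at random
    return test_seen, remaining
```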



### L1000 VCAP and MCF7
The ***LINCS L1000*** is a database of high-throughput transcriptomics that screened more than 30,000 perturbations on a set of 978 landmark genes [4] from multiple cell lines. ***VCAP*** and ***MCF7*** are, respectively, prostate cancer and human breast cancer cell lines. In ***L1000***, most of the perturbagens are chemical, meaning that small drug-like molecules are added to the cell lines to observe how the gene expressions change. This makes it possible to generate biological signatures of the molecules, which are known to correlate with drug activity and side effects.


To process the data into our two datasets comprising the ***VCAP*** and ***MCF7*** cell lines, we used their "level 5" data, composed of the cleaned-up data converted to z-scores and filtered to keep only chemical perturbagens. However, we were left with multiple data points per molecule since some variables could change (e.g., incubation time) and generate a new measurement. Given our objective of generating a single signature per molecule, we decided to take the measurement with the strongest global activity, i.e., the one whose variance over the 978 genes is maximal. Then, since these signatures are generally noisy, we binned them into five classes corresponding to z-scores based on the thresholds $\{-4, -2, 2, 4\}$.
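
The selection and binning described above can be sketched as follows (a simplified illustration rather than the exact preprocessing code; the z-scores are assumed to be stored as a `(n_measurements, 978)` array per molecule):

```python
import numpy as np

def signature_to_classes(zscores: np.ndarray) -> np.ndarray:
    """Keep the most active measurement of a molecule and bin it into 5 classes.

    `zscores` is assumed to have shape (n_measurements, 978), one row per replicate.
    """
    # Measurement with the strongest global activity = maximal variance over the 978 genes.
    strongest = zscores[np.argmax(zscores.var(axis=1))]
    # Bin into 5 classes using the thresholds {-4, -2, 2, 4}; labels range from 0 to 4.
    return np.digitize(strongest, bins=[-4, -2, 2, 4])
```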

The cell lines ***VCAP*** and ***MCF7*** were selected since they have a higher number of unique molecule perturbagens than other cell lines. They also have a relatively lower data imbalance, with ~92% falling in the "neutral class" when the z-score was between -2 and 2.

### PCBA1328
This dataset is very similar to the ***OGBG-PCBA*** dataset [5], but instead of being limited to 128 assays and 437k molecules, it comprises 1,328 assays and 1.56M molecules. This dataset is very interesting for pre-training molecular models since it contains information about a molecule's behavior in various settings relevant to biochemists, with evidence that it improves binding predictions. Analogous to the gene expression, we obtain a bio-assay-expression of each molecule.

To gather the data, we looped over the PubChem index of bioassays [6] and collected every dataset containing more than 6,000 molecules annotated with either "Active" or "Inactive" and at least 10 of each. Then, we converted all the molecular IDs to canonical SMILES and used them to merge all of the bioassays into a single dataset.
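
The selection criteria translate into a simple filter; the sketch below (hypothetical field names, using RDKit for SMILES canonicalization) illustrates the idea:

```python
from typing import List, Optional

from rdkit import Chem

def keep_bioassay(outcomes: List[str]) -> bool:
    """Keep assays with more than 6,000 annotated molecules and at least 10 of each class."""
    n_active = sum(o == "Active" for o in outcomes)
    n_inactive = sum(o == "Inactive" for o in outcomes)
    return (n_active + n_inactive) > 6_000 and n_active >= 10 and n_inactive >= 10

def to_canonical_smiles(smiles: str) -> Optional[str]:
    """Canonicalize a SMILES string so assays can be merged on a per-molecule basis."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None
```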

### PCQM4M\_G25\_N4
This dataset comes from the same data source as the ***OGBG-PCQM4M*** dataset, famously known for being part of the OGB large-scale challenge [7] and being one of the only graph datasets where pure Transformers have proven successful. The data source is the PubChemQC project [8] that computed DFT properties on the energy-minimized conformation of 3.8M small molecules from PubChem.

Contrary to the OGB challenge, we aim to provide enough data for pre-training GNNs, so we do not limit ourselves to the HOMO-LUMO gap prediction [7]. Instead, we gather properties directly given by the DFT (e.g., energies) and compute other 3D descriptors from the conformation (e.g., inertia, the plane of best fit). We also gather node-level properties, namely the Mulliken and Lowdin charges at each atom. Furthermore, about half of the molecules have time-dependent DFT calculations to help inform about the molecule's excited state. Looking forward, we plan on adding edge-level tasks to enable the prediction of bond properties, such as their lengths and the gradient of the charges.


## UltraLarge Dataset
### PM6\_83M
This dataset is similar to ***PCQM4M*** and comes from the same PubChemQC project. However, it uses the PM6 semi-empirical computation of the quantum properties, which is orders of magnitude faster than DFT computation at the expense of accuracy [8, 9].

This dataset covers 83M unique molecules, 62 graph-level tasks, and 7 node-level tasks. To our knowledge, this is the largest dataset available for training 2D-GNNs regarding the number of unique molecules. The various tasks come from four different molecular states, namely "S0" for the ground state, "T0" for the lowest energy triplet excited state, "cation" for the positively charged state, and "anion" for the negatively charged state. In total, there are 221M PM6 computations.

## References
[1] https://www.nature.com/articles/sdata201422/

[2] https://europepmc.org/article/MED/23603828

[3] https://arxiv.org/abs/2003.00982v3

[4] https://pubmed.ncbi.nlm.nih.gov/29195078/

[5] https://arxiv.org/abs/2005.00687

[6] https://pubmed.ncbi.nlm.nih.gov/26400175/

[7] https://arxiv.org/abs/2103.09430

[8] https://pubs.acs.org/doi/10.1021/acs.jcim.7b00083

[9] https://arxiv.org/abs/1904.06046

104 changes: 33 additions & 71 deletions docs/design.md
@@ -2,102 +2,64 @@

---

### Diagram for data processing in molGPS.

<img src="images/datamodule.png" alt= "Data Processing Chart" width="100%" height="100%">



### Diagram for Muti-task network in molGPS

<img src="images/full_graph_network.png" alt= "Full Graph Multi-task Network" width="100%" height="100%">

The library is designed with 3 things in mind:

- High modularity and configurability with *YAML* files
- Contain state-of-the-art GNNs, including positional encodings and graph Transformers
- Massively multitasking across diverse and sparse datasets

The current page walks you through the different aspects of the design that make this possible.

### Diagram for data processing in Graphium.

First, when working with molecules, there are many options regarding atomic and bond featurisation that can be extracted from the periodic table, from empirical results, or from simulated 3D structures.

**Section from the previous README:**
Second, when working with graph Transformers, there are plenty of options regarding the positional and structural encodings (PSE) that are fundamental in driving the accuracy and the generalization of the models.

### Data setup
With this in mind, we propose a very versatile chemical and PSE encoding, alongside an encoder manager, that can be fully configured from the yaml files. The idea is to assign matching *input keys* to both the features and the encoders, then pool the outputs according to the *output keys*. It is better summarized in the image below.

Then, you need to download the data needed to run the code. Right now, we have 2 sets of data folders, present in the link [here](https://drive.google.com/drive/folders/1RrbNZkEE2rf41_iroa1LbIyegW00h3Ql?usp=sharing).
<img src="images/datamodule.png" alt= "Data Processing Chart" width="100%" height="100%">
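
To make the key-matching idea concrete, here is a toy, framework-level sketch (illustrative only; the actual Graphium encoder manager and YAML schema differ):

```python
# Toy illustration of key-based routing and pooling; not the actual Graphium API.
import torch

def encode_and_pool(features: dict, encoders: dict, output_keys: dict) -> dict:
    """Route each feature to the encoder sharing its input key, then sum-pool per output key."""
    encoded = {key: encoders[key](feat) for key, feat in features.items() if key in encoders}
    return {
        out_key: torch.stack([encoded[k] for k in in_keys]).sum(dim=0)
        for out_key, in_keys in output_keys.items()
    }
```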

- **micro_ZINC** (Synthetic dataset)
- A small subset (1000 mols) of the ZINC dataset
- The score is the subtraction of the computed LogP and the synthetic accessibility score SA
- The data must be downloaded to the folder `./graphium/data/micro_ZINC/`

- **ZINC_bench_gnn** (Synthetic dataset)
- A subset (12000 mols) of the ZINC dataset
- The score is the subtraction of the computed LogP and the synthetic accessibility score SA
- These are the same 12k molecules provided by the [Benchmarking-gnn](https://github.com/graphdeeplearning/benchmarking-gnns) repository.
- We provide the pre-processed graphs in `ZINC_bench_gnn/data_from_benchmark`
- We provide the SMILES in `ZINC_bench_gnn/smiles_score.csv`, with the train-val-test indexes in the file `indexes_train_val_test.csv`.
- The first 10k elements are the training set
- The next 1k the valid set
- The last 1k the test set.
- The data must be downloaded to the folder `./graphium/data/ZINC_bench_gnn/`

Then, you can run the main file to make sure that all the dependancies are correctly installed and that the code works as expected.
### Diagram for Multi-task network in Graphium

```bash
python expts/main_micro_zinc.py
```
As mentioned, we want to be able to perform massive multi-tasking to work across a huge diversity of datasets. The idea is to use a combination of multiple task heads, where a different MLP is applied to each task's predictions. It is also designed such that each task can have as many labels as desired, thus making it possible to group labels according to whether they should share weights/losses.
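
A minimal sketch of this pattern in PyTorch (hypothetical layer sizes and task names; not the actual Graphium task-head implementation):

```python
import torch
from torch import nn

class MultiTaskHeads(nn.Module):
    """Apply a separate MLP head per task on top of a shared graph embedding."""

    def __init__(self, hidden_dim: int, tasks: dict):
        super().__init__()
        # `tasks` maps a task name to its number of labels, e.g. {"tox21": 12, "qm9": 19}.
        self.heads = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, n_labels),
            )
            for name, n_labels in tasks.items()
        })

    def forward(self, graph_embedding: torch.Tensor) -> dict:
        # One prediction tensor per task, all computed from the same shared embedding.
        return {name: head(graph_embedding) for name, head in self.heads.items()}
```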

---
The design is better explained in the image below. Notice how the *keys* output by GraphDict are used differently across the different GNN layers.

**TODO: explain the internal design of Graphium so people can contribute to it more easily.**
<img src="images/full_graph_network.png" alt= "Full Graph Multi-task Network" width="100%" height="100%">

## Structure of the code

The code is built to rapidly iterate on different architectures of neural networks (NN) and graph neural networks (GNN) with Pytorch. The main focus of this work is molecular tasks, and we use the package `rdkit` to transform molecular SMILES into graphs.

### data_parser

This folder contains tools that allow reading different kinds of molecular data files, such as `.csv` or `.xlsx` with SMILES data, or `.sdf` files with 3D data.


### features

Different utilities for molecules, such as Smiles to adjacency graph transformer, molecular property extraction, atomic properties, bond properties, ...

**_The MolecularTransformer and AdjGraphTransformer come from ivbase, but I don't like them. I think we should replace them with something simpler and give more flexibility for combining one-hot embedding with physical properties embedding._**.

### trainer

The trainer contains the interface to the `pytorch-lightning` library, with `PredictorModule` being the main class used for any NN model, either for regression or classification. It also contains some modifications to the logger from `pytorch-lightning` to enable more flexibility.

### utils

Any kind of utilities that can be used anywhere, including argument checkers and configuration loader

### visualization

Plot visualization tools

## Modifying the code

### Adding a new GNN layer

Any new GNN layer must inherit from the class `graphium.nn.base_graph_layer.BaseGraphLayer` and be implemented in the folder `graphium/nn/pyg_layers`, imported in the file `graphium/nn/architectures.py`, and in the same file, added to the function `FeedForwardGraph._parse_gnn_layer`.

To be used in the configuration file as a `graphium.model.layer_name`, it must also be implemented with some variable parameters in the file `expts/config_gnns.yaml`.
Below is a list of directories and their respective documentation:

### Adding a new NN architecture
- cli
- [config](https://github.com/datamol-io/graphium/blob/main/graphium/config/README.md)
- [data](https://github.com/datamol-io/graphium/blob/main/graphium/data/README.md)
- [features](https://github.com/datamol-io/graphium/tree/main/graphium/features/README.md)
- finetuning
- [ipu](https://github.com/datamol-io/graphium/tree/main/graphium/ipu/README.md)
- [nn](https://github.com/datamol-io/graphium/tree/main/graphium/nn/README.md)
- [trainer](https://github.com/datamol-io/graphium/tree/main/graphium/trainer/README.md)
- [utils](https://github.com/datamol-io/graphium/tree/main/graphium/features/README.md)
- [visualization](https://github.com/datamol-io/graphium/tree/main/graphium/visualization/README.md)

All NN and GNN architectures compatible with the `pyg` library are provided in the file `graphium/nn/global_architectures.py`. When implementing a new architecture, it is highly recommended to inherit from `graphium.nn.architectures.FeedForwardNN` for regular neural networks, from `graphium.nn.global_architectures.FeedForwardGraph` for pyg neural network, or from any of their sub-classes.

### Changing the PredictorModule and loss function
## Structure of the configs

The `PredictorModule` is a general pytorch-lightning module that should work with any kind of `pytorch.nn.Module` or `pl.LightningModule`. The class defines a structure of including models, loss functions, batch sizes, collate functions, metrics...
Making the library very modular requires configuration files of more than 200 lines, which becomes intractable, especially when we only want minor changes between configurations.

Some loss functions are already implemented in the PredictorModule, including `mse, bce, mae, cosine`, but some tasks will require more complex loss functions. One can add any new function in `graphium.trainer.predictor.PredictorModule._parse_loss_fun`.
Hence, we use [hydra](https://hydra.cc/docs/intro/) to split the configuration into smaller, composable configuration files.

### Changing the metrics used
Examples of possibilities include:

**_!WARNING! The metrics implementation was done for pytorch-lightning v0.8. There has been major changes to how the metrics are used and defined, so the whole implementation must change._**
- Switching between accelerators (CPU, GPU and IPU)
- Benchmarking different models on the same dataset
- Fine-tuning a pre-trained model on a new dataset
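
For instance, an existing configuration can be composed and overridden programmatically with Hydra's compose API (the config path, config name, and override below are illustrative, not the exact ones shipped with Graphium):

```python
# Illustrative use of Hydra's compose API; config path, name, and override are assumptions.
from hydra import compose, initialize

with initialize(config_path="expts/hydra-configs"):
    cfg = compose(config_name="main", overrides=["accelerator=gpu"])

print(cfg)
```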

Our current code is compatible with the metrics defined by _pytorch-lightning_, which include a great set of metrics. We also added the PearsonR and SpearmanR as they are important correlation metrics. You can define any new metric in the file `graphium/trainer/metrics.py`. The metric must inherit from `TensorMetric` and must be added to the dictionary `graphium.trainer.metrics.METRICS_DICT`.
[In this document](https://github.com/datamol-io/graphium/tree/main/expts/hydra-configs#readme), we describe in detail how each of the above functionalities is achieved and how users can benefit from this design to get the most out of Graphium with as little configuration as possible.

To use the metric, you can easily add its name from `METRICS_DICT` in the yaml configuration file, at the address `metrics.metrics_dict`. Each metric has an underlying dictionary with a mandatory `threshold` key containing information on how to threshold the prediction/target before computing the metric. Any `kwargs` arguments of the metric must also be added.
