From 88c009c3b5228f7aed0767166a5793a870d2f440 Mon Sep 17 00:00:00 2001
From: kerstink-GC
Date: Tue, 12 Mar 2024 18:03:04 +0000
Subject: [PATCH] deleted Paperspace specific notebook

---
 .../running-multitask-ipu.ipynb | 578 ------------------
 1 file changed, 578 deletions(-)
 delete mode 100644 docs/tutorials/model_training/running-multitask-ipu.ipynb

diff --git a/docs/tutorials/model_training/running-multitask-ipu.ipynb b/docs/tutorials/model_training/running-multitask-ipu.ipynb
deleted file mode 100644
index 05a972e9b..000000000
--- a/docs/tutorials/model_training/running-multitask-ipu.ipynb
+++ /dev/null
@@ -1,578 +0,0 @@
-{
- "cells": [
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Copyright (c) 2023 Graphcore Ltd. All rights reserved."
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Multitask Molecular Modelling with Graphium on the IPU\n",
- "\n",
- "\n",
- "[![Run on Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/sdGggS)\n",
- " [![Join our Slack Community](https://img.shields.io/badge/Slack-Join%20Graphcore's%20Community-blue?style=flat-square&logo=slack)](https://www.graphcore.ai/join-community)\n",
- "\n"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "\n",
- "Graphium is a high-performance deep learning library specializing in graph representation learning for diverse chemical tasks. Graphium integrates state-of-the-art Graph Neural Network (GNN) architectures and a user-friendly API, enabling the easy construction and training of custom GNN models.\n",
- "\n",
- "Graphium's distinctive feature is its robust featurization capabilities, enabling the extraction of comprehensive features from molecular structures for a range of applications, including property prediction, virtual screening, and drug discovery.\n",
- "\n",
- "Applicable to a wide array of chemical tasks like property prediction, toxicity evaluation, molecular generation, and graph regression, Graphium provides efficient tools and models, regardless of whether you're working with simple molecules or intricate chemical graphs.\n",
- "\n",
- "In this notebook, we'll illustrate Graphium's capabilities using the ToyMix dataset, an amalgamation of the QM9, Tox21, and ZINC12k datasets. We'll delve into the multitask scenario and demonstrate how Graphium trains models to concurrently predict multiple properties with chemical awareness.\n",
- "\n",
- "\n",
- "\n",
- "### Summary table\n",
- "| Domain | Tasks | Model | Datasets | Workflow | Number of IPUs | Execution time |\n",
- "|---------|-------|-------|----------|----------|--------------|--------------|\n",
- "| Molecules | Multitask | GCN/GIN/GINE | QM9, Zinc, Tox21 | Training, Validation, Inference | recommended: 4x (min: 1x, max: 16x) | 20 mins |\n",
- "\n",
- "### Learning outcomes\n",
- "In this notebook you will learn how to:\n",
- "- Run multitask training for a variety of GNN models using the Graphium library on IPUs.\n",
- "- Compare the results to the single-task performance.\n",
- "- Learn how to modify the Graphium config files.\n",
- "\n",
- "\n",
- "### Links to other resources\n",
- "For this notebook, it is best if you are already familiar with GNNs and PyTorch Geometric. If you need a quick introduction, we suggest you refer to the [Graphcore tutorials for using PyTorch Geometric on the IPU](https://github.com/graphcore/Gradient-Pytorch-Geometric/tree/main/learning-pytorch-geometric-on-ipus), which cover the basics of GNNs and PyTorch Geometric while introducing you to running models on the IPU.\n",
- "\n"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Dependencies\n",
- "\n",
- "We install the following dependencies for this notebook:\n",
- "\n",
- "(This may take 5-10 minutes the first time.)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "!pip install git+https://github.com/datamol-io/graphium.git\n",
- "!pip install -r requirements.txt\n",
- "from examples_utils import notebook_logging\n",
- "\n",
- "%load_ext gc_logger"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Running on Paperspace\n",
- "\n",
- "The Paperspace environment lets you run this notebook with no setup. If you encounter a problem or want to give us feedback on the content of this notebook, please reach out through our community of developers using [Slack](https://www.graphcore.ai/join-community) or raise a [GitHub issue](https://github.com/graphcore/examples).\n",
- "\n",
- "Requirements:\n",
- "\n",
- "* Python packages installed with `pip install -r ./requirements.txt`"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# General imports\n",
- "import argparse\n",
- "import os\n",
- "from os.path import dirname, abspath\n",
- "from loguru import logger\n",
- "from datetime import datetime\n",
- "from lightning.pytorch.utilities.model_summary import ModelSummary\n",
- "\n",
- "# Current project imports\n",
- "import graphium\n",
- "from graphium.config._loader import (\n",
- "    load_datamodule,\n",
- "    load_metrics,\n",
- "    load_architecture,\n",
- "    load_predictor,\n",
- "    load_trainer,\n",
- "    load_accelerator,\n",
- "    load_yaml_config,\n",
- ")\n",
- "from graphium.utils.safe_run import SafeRun\n",
- "from graphium.utils.command_line_utils import update_config, get_anchors_and_aliases\n",
- "\n",
- "import poptorch\n",
- "from tqdm.auto import tqdm\n",
- "\n",
- "# WandB\n",
- "import wandb\n",
- "\n",
- "# Set up the working directory\n",
- "MAIN_DIR = dirname(dirname(abspath(graphium.__file__)))\n",
- "\n",
- "poptorch.setLogLevel(\"ERR\")"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Dataset: ToyMix Benchmark\n",
- "\n",
- "The ToyMix dataset is an amalgamation of three widely recognized datasets in the field of machine learning for chemistry: \n",
- "- QM9\n",
- "- Tox21\n",
- "- ZINC12k\n",
- "\n",
- "\n",
- "Each of these datasets contributes different aspects, ensuring a comprehensive benchmarking tool.\n",
- "\n",
- "QM9, known in the realm of 3D GNNs, provides 19 graph-level quantum properties tied to the energy-minimized 3D conformation of molecules. Because these molecules have at most nine heavy atoms, the dataset is small and simple, enabling fast model iteration in the same spirit as the larger proposed quantum datasets, but at a smaller scale.\n",
- "\n",
- "The Tox21 dataset is notable among researchers in machine learning for drug discovery. It is a multi-label classification task with 12 labels, exhibiting a high degree of missing labels and an imbalance towards the negative class. This mirrors the characteristics of larger datasets, specifically in terms of sparsity and imbalance, making it invaluable for learning to handle real-world, skewed datasets.\n",
- "\n",
- "Finally, the ZINC12k dataset is recognized in the study of GNN expressivity, an essential feature for large-scale data performance. It's included in ToyMix with the expectation that successful performance on this task will correlate well with success on larger datasets.\n",
- "\n",
- "## Multitask learning\n",
- "\n",
- "In multi-task learning, related tasks share information with each other during training, leading to a shared representation. This can often improve the model's performance on individual tasks, as information learned from one task can assist with others.\n",
- "Here, the ToyMix dataset provides a sandbox in which to develop these models for molecular tasks.\n"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Download the dataset and splits\n",
- "\n",
- "The ToyMix dataset requires downloading the three constituent datasets. Here we will create the expected data directory and download both the raw dataset CSV files and the split files that specify the training, validation and test splits. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# For compatibility with the Paperspace environment variables we will do the following.\n",
- "# Note: IPython expands {dataset_directory} inside the ! shell commands, so the download\n",
- "# destination always matches the directory checked below, even if DATASETS_DIR is unset.\n",
- "dataset_directory = os.getenv(\"DATASETS_DIR\", \".\")\n",
- "\n",
- "if not os.path.exists(dataset_directory + \"/data/neurips2023/small-dataset\"):\n",
- "    !wget -P {dataset_directory}/data/neurips2023/small-dataset/ https://storage.googleapis.com/graphium-public/datasets/neurips_2023/Small-dataset/qm9.csv.gz\n",
- "    !wget -P {dataset_directory}/data/neurips2023/small-dataset/ https://storage.googleapis.com/graphium-public/datasets/neurips_2023/Small-dataset/Tox21-7k-12-labels.csv.gz\n",
- "    !wget -P {dataset_directory}/data/neurips2023/small-dataset/ https://storage.googleapis.com/graphium-public/datasets/neurips_2023/Small-dataset/ZINC12k.csv.gz\n",
- "\n",
- "    !wget -P {dataset_directory}/data/neurips2023/small-dataset/ https://storage.googleapis.com/graphium-public/datasets/neurips_2023/Small-dataset/qm9_random_splits.pt\n",
- "    !wget -P {dataset_directory}/data/neurips2023/small-dataset/ https://storage.googleapis.com/graphium-public/datasets/neurips_2023/Small-dataset/Tox21_random_splits.pt\n",
- "    !wget -P {dataset_directory}/data/neurips2023/small-dataset/ https://storage.googleapis.com/graphium-public/datasets/neurips_2023/Small-dataset/ZINC12k_random_splits.pt\n",
- "    print(\"Datasets have been successfully downloaded.\")\n",
- "else:\n",
- "    print(\"Datasets are already downloaded.\")"
- ]
- },
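- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "As an optional sanity check, we can take a quick look at the downloaded CSV files with pandas. This is a minimal sketch: it assumes the download cell above succeeded, and since the exact column names depend on the dataset files, we only count rows and missing entries."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import pandas as pd\n",
- "\n",
- "# Optional peek at the raw ToyMix CSVs (paths assume the download cell above succeeded).\n",
- "data_dir = dataset_directory + \"/data/neurips2023/small-dataset/\"\n",
- "qm9_df = pd.read_csv(data_dir + \"qm9.csv.gz\")\n",
- "tox21_df = pd.read_csv(data_dir + \"Tox21-7k-12-labels.csv.gz\")\n",
- "zinc_df = pd.read_csv(data_dir + \"ZINC12k.csv.gz\")\n",
- "\n",
- "# QM9 is roughly 10x larger than the other two datasets.\n",
- "print(f\"QM9: {len(qm9_df)} rows, Tox21: {len(tox21_df)} rows, ZINC12k: {len(zinc_df)} rows\")\n",
- "\n",
- "# Tox21 is sparsely labelled: a large fraction of its entries are missing.\n",
- "print(f\"Fraction of missing values in Tox21: {tox21_df.isna().mean().mean():.2f}\")"
- ]
- },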
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from IPython.display import Image, display\n",
- "\n",
- "display(Image(filename=\"ToyMix.png\", width=600))"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The representation of the datasets above shows that QM9 and ZINC are densely populated datasets with regression tasks, where every label is provided for every molecule. We also see that the Tox21 dataset is sparsely populated with binary labels.\n",
- "The relative sizes of these datasets are important. QM9 is approximately 10x larger than the other datasets, and so we would expect it to dominate the training. \n",
- "\n",
- "It is also interesting to note the differences between the datasets: the molecules come from fairly different domains in terms of their size and composition.\n",
- "Additionally, the quantum properties of molecules calculated using DFT are a very different type of property compared to results obtained from a wet lab, particularly in the context of complex systems. \n",
- "\n",
- "\n",
- "\n",
- "To get a sense of the dataset overlap, a Uniform Manifold Approximation and Projection (UMAP) plot is shown below. \n",
- "Here we can see the 2D projection of the molecule similarity, with the purple QM9 results being largely separate from the Tox21 and ZINC results.\n",
- "\n",
- "The question now is: how much can the large dataset of quantum properties be used to improve the prediction power of the smaller datasets?\n",
- "\n",
- "This approach of using a larger dataset to improve predictions for smaller datasets has yielded impressive results in the world of large language models, but it is unknown whether the same can be done with molecules. \n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "display(Image(filename=\"UMAP.png\", width=600))"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Graphium model configuration\n",
- "\n",
- "The Graphium model is made up of the following components:\n",
- "\n",
- "* Processing input features of the molecules\n",
- "* A set of encoders used for positional and structural features\n",
- "* Featurization to produce a PyG Graph dataset\n",
- "* A stack of GNN layers - the backbone of the model - made up of MPNN and transformer layers\n",
- "* The output node, graph and edge embeddings, which are processed with MLPs to yield the multitask predictions\n",
- "\n",
- "This is shown in the flowchart below. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "display(Image(filename=\"flowchart.png\", width=900))"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The available configs are shown in the cell below; you can swap between them easily by changing which key is used in the final line. \n",
- "The first set shows the full multitask training on the ToyMix dataset for the three GNN architectures.\n",
- "\n",
- "The final two configs show the same GCN model trained on only the QM9 or ZINC12k dataset, respectively.\n"
- ]
- },
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "## Multi-Task Training - ToyMix Dataset\n", - "CONFIG_FILE = {\n", - " \"small_gcn\": \"config_small_gcn.yaml\",\n", - " \"small_gcn_qm9\": \"config_small_gcn_qm9.yaml\",\n", - " \"small_gcn_zinc\": \"config_small_gcn_zinc.yaml\",\n", - " \"small_gin\": \"config_small_gin.yaml\",\n", - " \"small_gine\": \"config_small_gine.yaml\",\n", - "}\n", - "\n", - "cfg = load_yaml_config(CONFIG_FILE[\"small_gcn\"], None)" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Updating the config\n", - "\n", - "The Graphium library is designed to be used with a config file. \n", - "This tutorial contains three examples (GCN, GIN, and GINE models) for training on the multitask ToyMix dataset. \n", - "\n", - "The configs are split into groups, with the key options in `accelerator`, `datamodule`, `archtecture`, `predictor`, `metrics` and `trainer`. It is recommended to have a look at the files to see how the model is structured. \n", - "\n", - "For some parameters, it makes more sense to set them in the notebook or via the command line arguments. \n", - "You can also set these values by hand in the notebook, as shown below. \n", - "\n", - "(`--predictor.torch_scheduler_kwargs.max_num_epochs=200` will set the number of epochs). \n", - "\n", - "In the cell below, we can set the number of replicas to speed up training. The default value is 4, but if you are using a POD 16, we can increase the replica count for a significantly faster training. \n", - "\n", - "We can also set the number of epochs and learning rate. For the number of epochs, the `yaml` config file has been loaded into a `dict`, so we can't rely on the anchor and reference system in the YAML config file to keep parameters in sync, so this is done manually here. 
\n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "cfg[\"accelerator\"][\"ipu_config\"] = [\n", - " \"deviceIterations(5)\",\n", - " \"replicationFactor(4)\", # <------ 4x IPUs for a Pod-4, increase to 16 if using a Pod-16\n", - " \"TensorLocations.numIOTiles(128)\",\n", - " '_Popart.set(\"defaultBufferingDepth\", 128)',\n", - " \"Precision.enableStochasticRounding(True)\",\n", - "]\n", - "\n", - "# For Paperspace we need to update the paths to the downloaded data / splits files\n", - "cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"qm9\"][\"df_path\"] = (\n", - " dataset_directory + \"/\"\n", - " + cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"qm9\"][\"df_path\"]\n", - ")\n", - "cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"qm9\"][\"splits_path\"] = (\n", - " dataset_directory + \"/\"\n", - " + cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"qm9\"][\"splits_path\"]\n", - ")\n", - "cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"tox21\"][\"df_path\"] = (\n", - " dataset_directory + \"/\"\n", - " + cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"tox21\"][\"df_path\"]\n", - ")\n", - "cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"tox21\"][\"df_path\"] = (\n", - " dataset_directory + \"/\"\n", - " + cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"tox21\"][\"df_path\"]\n", - ")\n", - "cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"zinc\"][\"splits_path\"] = (\n", - " dataset_directory + \"/\"\n", - " + cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"zinc\"][\"splits_path\"]\n", - ")\n", - "cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"zinc\"][\"splits_path\"] = (\n", - " dataset_directory + \"/\"\n", - " + cfg[\"datamodule\"][\"args\"][\"task_specific_args\"][\"zinc\"][\"splits_path\"]\n", - ")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Key Hyper-Parameters:\n", - "Shown here for simplicity - for testing you can change the values here / add other parameters\n", - "When building a full training script it is recommended to use either the config file or the command line to set values. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "cfg[\"trainer\"][\"trainer\"][\"max_epochs\"] = 5\n", - "cfg[\"predictor\"][\"torch_scheduler_kwargs\"][\"max_num_epochs\"] = cfg[\"trainer\"][\n", - " \"trainer\"\n", - "][\"max_epochs\"]\n", - "cfg[\"predictor\"][\"optim_kwargs\"][\"lr\"] = 4e-5" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Initialize the accelerator\n", - "cfg, accelerator_type = load_accelerator(cfg)\n", - "wandb.init(mode=\"disabled\")\n", - "\n", - "# Load and initialize the dataset\n", - "datamodule = load_datamodule(cfg, accelerator_type)\n", - "\n", - "# Initialize the network\n", - "model_class, model_kwargs = load_architecture(\n", - " cfg,\n", - " in_dims=datamodule.in_dims,\n", - ")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "First build the predictor - this is the model, look at the printed summary to see the layer types and number of parameters. 
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "datamodule.prepare_data()\n", - "\n", - "metrics = load_metrics(cfg)\n", - "logger.info(metrics)\n", - "\n", - "predictor = load_predictor(\n", - " cfg,\n", - " model_class,\n", - " model_kwargs,\n", - " metrics,\n", - " datamodule.get_task_levels(),\n", - " accelerator_type,\n", - " datamodule.featurization,\n", - " datamodule.task_norms\n", - ")\n", - "logger.info(predictor.model)\n", - "logger.info(ModelSummary(predictor, max_depth=4))" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Then build the trainer - which controls the actual training loop. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "date_time_suffix = datetime.now().strftime(\"%d.%m.%Y_%H.%M.%S\")\n", - "run_name = \"main\"\n", - "\n", - "trainer = load_trainer(cfg, run_name, accelerator_type, date_time_suffix)" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Train the model\n", - "\n", - "Now we can start training the model. \n", - "The training is handled with PyTorch Lightning making it simple to set up the training loop. \n", - "\n", - "This will set up the training loop after calculating the maximum number of nodes and edges in the batch. We use fixed size batches on the IPU so we need to add some padding to account for variable molecule sizes before we compile the model. \n", - "\n", - "The training is displayed in a progress bar. \n", - "\n", - "**Exercise for the reader:** \n", - "We currently only run training for 10 epochs to illustrate training the model and so you can see the validation and test output format. Try increasingthe number of epochs for better results. Using around 30 epochs should give results ~95% of the performance of the model trained for a full 300 epochs." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Determine the max num nodes and edges in training and validation\n", - "logger.info(\"About to set the max nodes etc.\")\n", - "predictor.set_max_nodes_edges_per_graph(datamodule, stages=[\"train\", \"val\"])\n", - "\n", - "# Run the model training\n", - "with SafeRun(\n", - " name=\"TRAINING\", raise_error=cfg[\"constants\"][\"raise_train_error\"], verbose=True\n", - "):\n", - " trainer.fit(model=predictor, datamodule=datamodule)" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Validation and test dataset splits\n", - "\n", - "Finally we can look at the validation and test dataset splits, where the trained model is loaded from the checkpoint and evaluated against the hold out sets. 
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Determine the max num nodes and edges in training and validation\n", - "predictor.set_max_nodes_edges_per_graph(datamodule, stages=[\"val\"])\n", - "\n", - "# Run the model validation\n", - "with SafeRun(\n", - " name=\"VALIDATING\", raise_error=cfg[\"constants\"][\"raise_train_error\"], verbose=True\n", - "):\n", - " trainer.validate(model=predictor, datamodule=datamodule)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Run the model testing\n", - "with SafeRun(\n", - " name=\"TESTING\", raise_error=cfg[\"constants\"][\"raise_train_error\"], verbose=True\n", - "):\n", - " trainer.test(\n", - " model=predictor,\n", - " datamodule=datamodule,\n", - " )" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Conclusions and next steps\n", - "We have demonstrated how you can use Graphium for multitask molecular modelling on the IPU. We used the ToyMix dataset, which combines the QM9, Tox21, and ZINC12k datasets to show how Graphium trains models to concurrently predict multiple properties with chemical awareness.\n", - "\n", - "Looking at the validation and test results we can see the impact of the larger QM9 dataset in improving the single task performance of the smaller datasets. The accuracy and MAE scores are much better than we would expect just from training on the small datasets, confirming the training benefit from a multitask setting even on a very short training run. \n", - "To further verify this we can look at the same model trained for the full number of epochs on the multitask dataset and the individual datasets and compare the final validation results. \n", - "\n", - "You can try out the following:\n", - "- Increase the number of epochs for training to around 30. This should yield reasonably good results. If you train for 300 epochs, your results should match the benchmarking results. \n", - "- Change the config file to use a different model. GIN will achieve higher accuracy than GCN, and GINE will have a higher accuracy than GIN. \n", - "- Adapt the multitask config to only use a single dataset. You can use this to compare the results of the multitask performance with a single task performance. Rremember to remove the relevant sections from the dataset, the losses and the metrics. Example YAML config files showing how to do this for GCN with the QM9 (`config_small_gcn_qm9.yaml`) and ZINC12K (`config_small_gcn_zinc.yaml`) datasets are given. \n", - "- Explore more molecular featurisers in the `molfeat` notebook [`pytorch_geometric_molfeat.ipynb`](https://github.com/graphcore/Gradient-Pytorch-Geometric/blob/main/molfeat/pytorch_geometric_molfeat.ipynb)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.8.10" - }, - "vscode": { - "interpreter": { - "hash": "7271e66cf1cf0fbfa3565d6b47386ec8943d89b503a32224c0195197293fc6bd" - } - } - }, - "nbformat": 4, - "nbformat_minor": 4 -}