Fix typo and missing description of content in folder #2004

Merged 1 commit on Jun 24, 2025.
**3d_regression/README.md** (14 additions, 3 deletions)

````diff
@@ -1,7 +1,18 @@
 3D Regression
 =============
 
-How to run the 3D regression tutorial.
---------------------------------------
+This directory contains a tutorial demonstrating how to use MONAI for 3D regression tasks, specifically brain age prediction using the IXI dataset and a DenseNet3D architecture.
 
-Running this notebook is straightforward. It works well in Colab.
+## Tutorial Overview
+
+The `densenet_training_array.ipynb` notebook provides an end-to-end example of:
+- Loading and preprocessing 3D brain MRI data
+- Setting up data transforms for regression tasks
+- Training a DenseNet3D model for age prediction
+- Evaluating model performance on test data
+
+## How to Run
+
+This notebook can be run locally with Jupyter or in Google Colab. The notebook includes all necessary setup instructions and dependency installations.
+
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/3d_regression/densenet_training_array.ipynb)
````
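The last bullet of the new overview ("Evaluating model performance on test data") reduces to a regression metric over predicted versus true ages. As an illustrative aside — plain Python, with made-up values, not code taken from the notebook — mean absolute error can be sketched as:

```python
# Sketch: mean absolute error for brain-age regression, the kind of
# evaluation the tutorial's final step performs (sample values are invented).
def mean_absolute_error(preds, targets):
    """Average |prediction - target| over a batch of ages."""
    assert len(preds) == len(targets) and preds
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

predicted_ages = [34.2, 61.8, 47.5]
true_ages = [30.0, 65.0, 50.0]
# (4.2 + 3.2 + 2.5) / 3, i.e. roughly 3.3 years of error
print(mean_absolute_error(predicted_ages, true_ages))
```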
**README.md** (1 addition, 1 deletion)

````diff
@@ -83,7 +83,7 @@ Training and evaluation examples of 3D regression based on DenseNet3D and [IXI d
 #### <ins>**3D segmentation**</ins>
 ##### [ignite examples](./3d_segmentation/ignite)
 Training and evaluation examples of 3D segmentation based on UNet3D and synthetic dataset.
-The examples are PyTorch Ignite programs and have both dictionary-base and array-based transformations.
+The examples are PyTorch Ignite programs and have both dictionary-based and array-based transformations.
 ##### [torch examples](./3d_segmentation/torch)
 Training, evaluation and inference examples of 3D segmentation based on UNet3D and synthetic dataset.
 The examples are standard PyTorch programs and have both dictionary-based and array-based versions.
````
**acceleration/distributed_training/distributed_training.md** (1 addition, 1 deletion)

````diff
@@ -21,7 +21,7 @@ torchrun --nproc_per_node=NUM_GPUS_PER_NODE --nnodes=NUM_NODES brats_training_dd
 
 ## Multi-Node Training
 
-Let's take two-node (16 GPUs in total) model training as an example. In the primary node (node rank 0), we run the following command.
+Let's take a two-node (16 GPUs in total) model training example. In the primary node (node rank 0), we run the following command.
 
 ```
 torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=PRIMARY_NODE_IP --master_port=1234 brats_training_ddp.py
````
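The rank bookkeeping behind the two-node command above can be sketched in plain Python. The formula `global_rank = node_rank * nproc_per_node + local_rank` is how torchrun-style launchers derive each worker's global rank; this sketch is illustrative and is not part of the tutorial file being changed:

```python
# Sketch: global rank layout for the two-node, 8-GPUs-per-node example
# (16 workers total) discussed in the tutorial.
def global_ranks(node_rank: int, nproc_per_node: int) -> list:
    """Global ranks owned by one node: node_rank * nproc_per_node + local_rank."""
    return [node_rank * nproc_per_node + local for local in range(nproc_per_node)]

world_size = 2 * 8  # nnodes * nproc_per_node
primary = global_ranks(node_rank=0, nproc_per_node=8)    # node rank 0 owns 0..7
secondary = global_ranks(node_rank=1, nproc_per_node=8)  # node rank 1 owns 8..15
assert sorted(primary + secondary) == list(range(world_size))
```

This is also why the secondary node's launch differs only in its `--node_rank` value: every other flag describes the same shared job.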
**nnunet/README.md** (3 additions, 3 deletions)

````diff
@@ -1,6 +1,6 @@
 # MONAI and nnU-Net Integration
 
-[nnU-Net](https://github.com/MIC-DKFZ/nnUNet) is an open-source deep learning framework that has been specifically designed for medical image segmentation. And nnU-Net is a state-of-the-art deep learning framework that is tailored for medical image segmentation. It builds upon the popular U-Net architecture and incorporates various advanced features and improvements, such as cascaded networks, novel loss functions, and pre-processing steps. nnU-Net also provides an easy-to-use interface that allows users to train and evaluate their segmentation models quickly. nnU-Net has been widely used in various medical imaging applications, including brain segmentation, liver segmentation, and prostate segmentation, among others. The framework has consistently achieved state-of-the-art performance in various benchmark datasets and challenges, demonstrating its effectiveness and potential for advancing medical image analysis.
+[nnU-Net](https://github.com/MIC-DKFZ/nnUNet) is an open-source deep learning framework that has been specifically designed for medical image segmentation. nnU-Net is a state-of-the-art deep learning framework that is tailored for medical image segmentation. It builds upon the popular U-Net architecture and incorporates various advanced features and improvements, such as cascaded networks, novel loss functions, and pre-processing steps. nnU-Net also provides an easy-to-use interface that allows users to train and evaluate their segmentation models quickly. nnU-Net has been widely used in various medical imaging applications, including brain segmentation, liver segmentation, and prostate segmentation, among others. The framework has consistently achieved state-of-the-art performance in various benchmark datasets and challenges, demonstrating its effectiveness and potential for advancing medical image analysis.
 
 nnU-Net and MONAI are two powerful open-source frameworks that offer advanced tools and algorithms for medical image analysis. Both frameworks have gained significant popularity in the research community, and many researchers have been using these frameworks to develop new and innovative medical imaging applications.
 
@@ -73,7 +73,7 @@ Users can also set values of directory variables as options in "input.yaml" if a
 dataset_name_or_id: 1 # task-specific integer index (optional)
 nnunet_preprocessed: "./work_dir/nnUNet_preprocessed" # directory for storing pre-processed data (optional)
 nnunet_raw: "./work_dir/nnUNet_raw_data_base" # directory for storing formated raw data (optional)
-nnunet_results: "./work_dir/nnUNet_trained_models" # diretory for storing trained model checkpoints (optional)
+nnunet_results: "./work_dir/nnUNet_trained_models" # directory for storing trained model checkpoints (optional)
 ```
 
 Once the minimum input information is provided, the user can use the following commands to start the process of the entire nnU-Net pipeline automatically (from model training to model ensemble).
@@ -143,7 +143,7 @@ python -m monai.apps.nnunet nnUNetV2Runner predict_ensemble_postprocessing --inp
     --run_predict false --run_ensemble false
 ```
 
-For utilizing PyTorch DDP in multi-GPU training, the subsequent command is offered to facilitate the training of a singlular model on a specific fold:
+For utilizing PyTorch DDP in multi-GPU training, the subsequent command is offered to facilitate the training of a singular model on a specific fold:
 
 ```bash
 ## [component] multi-gpu training for a single model
````
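The nnU-Net README's "input.yaml" discussion distinguishes a minimum set of inputs from optional directory overrides like the `nnunet_*` entries shown in the diff. A hypothetical helper (the required key names are an assumption, not taken from this PR) can illustrate that split:

```python
# Hypothetical config check, not part of nnUNetV2Runner: the required key
# names below are assumed for illustration; the optional ones appear in the
# diff hunk above as directory overrides.
REQUIRED = {"modality", "datalist", "dataroot"}
OPTIONAL = {"dataset_name_or_id", "nnunet_preprocessed",
            "nnunet_raw", "nnunet_results"}

def missing_required(cfg: dict) -> list:
    """Return required keys absent from a parsed input.yaml dict."""
    return sorted(REQUIRED - cfg.keys())

cfg = {
    "modality": "CT",
    "datalist": "./msd_task09_spleen_folds.json",  # example path, invented
    "dataroot": "./Task09_Spleen",                 # example path, invented
    "nnunet_results": "./work_dir/nnUNet_trained_models",
}
assert missing_required(cfg) == []   # minimum information present
assert missing_required({}) == ["datalist", "dataroot", "modality"]
```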
**pathology/tumor_detection/README.MD** (1 addition, 1 deletion)

````diff
@@ -2,7 +2,7 @@
 
 ## Description
 
-Here we use a classification model to classify small batches extracted from very large whole-slide histopathology images. Since the patches are very small compare to the whole image, we can then use this model for the detection of tumors in a different area of a whole-slide pathology image.
+Here we use a classification model to classify small batches extracted from very large whole-slide histopathology images. Since the patches are very small compared to the whole image, we can then use this model for the detection of tumors in a different area of a whole-slide pathology image.
 
 ## Model Overview
````
**vista_2d/README.md** (1 addition, 1 deletion)

````diff
@@ -4,7 +4,7 @@ The tutorial demonstrates how to train a cell segmentation model using the [MONA
 
 ![image](../figures/vista_2d_overview.png)
 
-In Summary the tutorial covers the following:
+In summary, the tutorial covers the following:
 - Initialization of the CellSamWrapper model with pre-trained SAM weights
 - Creation of data lists for training, validation, and testing
 - Definition of data transforms for training and validation
````
**vista_3d/README.md** (2 additions, 2 deletions)

````diff
@@ -6,9 +6,9 @@ The **VISTA3D** is a foundation model trained systematically on 11,454 volumes e
 
 The tutorial demonstrates how to finetune the VISTA3D model on user data, where we use the MSD Task09 Spleen as the example.
 
-In Summary the tutorial covers the following:
+In summary, the tutorial covers the following:
 - Creation of datasets and data transforms for training and validation
-- Create and VISTA3D model and load the pretrained checkpoint
+- Create a VISTA3D model and load the pretrained checkpoint
 - Implementation of the finetuning loop
 - Mixed precision training with GradScaler
 - Visualization of training loss and validation accuracy
````