8110 update highlights page for v1.4 #8111

Merged · 21 commits · Oct 14, 2024
1 change: 1 addition & 0 deletions docs/source/whatsnew.rst
@@ -6,6 +6,7 @@ What's New
.. toctree::
:maxdepth: 1

whatsnew_1_4.md
whatsnew_1_3.md
whatsnew_1_2.md
whatsnew_1_1.md
2 changes: 1 addition & 1 deletion docs/source/whatsnew_1_3.md
@@ -1,4 +1,4 @@
# What's new in 1.3 🎉🎉
# What's new in 1.3

- Bundle usability enhancements
- Integrating MONAI Generative into MONAI core
39 changes: 39 additions & 0 deletions docs/source/whatsnew_1_4.md
@@ -0,0 +1,39 @@
# What's new in 1.4 🎉🎉

- MAISI: state-of-the-art 3D Latent Diffusion Model
- VISTA3D: interactive foundation model for segmenting and annotating human anatomies
- Integrating MONAI Generative into MONAI core
- Geometric Data Support


## MAISI: state-of-the-art 3D Latent Diffusion Model

NVIDIA MAISI (Medical AI for Synthetic Imaging) is a state-of-the-art three-dimensional (3D) latent diffusion model for generating high-quality synthetic CT images, with or without anatomical annotations. It excels at data augmentation, creating realistic medical imaging data to supplement datasets that are limited by privacy concerns or rare conditions, and it can significantly enhance the performance of other medical imaging AI models by generating diverse and realistic training data.

A tutorial for generating large CT images with corresponding segmentation masks using MAISI is provided in
[`project-monai/tutorials`](https://github.com/Project-MONAI/tutorials/blob/main/generation/maisi).
MAISI provides the following components (a sketch of how they compose follows the list below):
- A foundation variational autoencoder (VAE) for latent feature compression that works for both CT and MRI, with flexible volume and voxel sizes
- A foundation diffusion model that can generate large CT volumes up to 512 × 512 × 768 voxels, with flexible volume and voxel sizes
- A ControlNet to generate image/mask pairs that can improve downstream tasks, with controllable organ/tumor size
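
As a rough illustration of how these pieces compose, the sketch below wires a VAE, a diffusion UNet, and a noise scheduler into a latent diffusion pipeline using the generic classes that now ship in MONAI core. It is not MAISI itself: the MAISI-specific networks, configurations, and pretrained weights come from the tutorial and bundle linked above, and the class arguments and tensor shapes here are small illustrative choices.

```python
# Illustrative latent-diffusion skeleton built from generic MONAI core classes,
# not the MAISI networks or weights themselves.
import torch
from monai.networks.nets import AutoencoderKL, DiffusionModelUNet
from monai.networks.schedulers import DDPMScheduler
from monai.inferers import LatentDiffusionInferer

# VAE that compresses images into a small latent volume.
autoencoder = AutoencoderKL(
    spatial_dims=3,
    in_channels=1,
    out_channels=1,
    channels=(32, 64, 64),
    latent_channels=4,
    num_res_blocks=1,
    attention_levels=(False, False, False),
    norm_num_groups=16,
)

# Diffusion UNet operating in the VAE latent space.
unet = DiffusionModelUNet(
    spatial_dims=3,
    in_channels=4,
    out_channels=4,
    channels=(32, 64, 64),
    attention_levels=(False, False, True),
    num_res_blocks=1,
    num_head_channels=32,
    norm_num_groups=16,
)

scheduler = DDPMScheduler(num_train_timesteps=1000)
inferer = LatentDiffusionInferer(scheduler, scale_factor=1.0)

# Sample a tiny synthetic volume from latent-space noise, then decode it
# back to image space with the VAE (slow on CPU; illustrative only).
noise = torch.randn(1, 4, 16, 16, 16)
with torch.no_grad():
    synthetic = inferer.sample(
        input_noise=noise,
        autoencoder_model=autoencoder,
        diffusion_model=unet,
        scheduler=scheduler,
    )
print(synthetic.shape)  # the 16³ latent decodes to a 64³ image volume here
```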

## VISTA-3D: interactive foundation model for segmenting and annotating human anatomies

VISTA-3D is a specialized interactive foundation model for 3D medical imaging. It excels in providing accurate and adaptable segmentation analysis across anatomies and modalities. Utilizing a multi-head architecture, VISTA-3D adapts to varying conditions and anatomical areas, helping guide users' annotation workflow.

A tutorial showing how to fine-tune VISTA3D on the spleen dataset is provided in
[`project-monai/tutorials`](https://github.com/Project-MONAI/tutorials/blob/main/vista_3d).
The model supports three core workflows (a brief construction sketch follows the list below):
- Segment everything: Enables whole body exploration, crucial for understanding complex diseases affecting multiple organs and for holistic treatment planning.
- Segment using class: Provides detailed sectional views based on specific classes, essential for targeted disease analysis or organ mapping, such as tumor identification in critical organs.
- Segment using point prompts: Enhances segmentation precision through user-directed, click-based selection. This interactive approach accelerates the creation of accurate ground-truth data, essential in medical imaging analysis.
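
For orientation only, the snippet below constructs the VISTA-3D network from MONAI core. The `vista3d132` factory name and its arguments are our reading of the 1.4 release and may differ in your installed version; the pretrained weights and the three workflows above are exercised through the VISTA3D bundle and tutorial rather than this bare network.

```python
# Minimal construction sketch; assumes the vista3d132 factory exported by
# MONAI core 1.4. Pretrained weights and the segment-everything / by-class /
# point-prompt workflows are provided via the VISTA3D bundle and tutorial.
from monai.networks.nets import vista3d132

model = vista3d132(in_channels=1)
print(sum(p.numel() for p in model.parameters()))  # rough size check
```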

## Integrating MONAI Generative into MONAI Core

Key modules originally developed in the [MONAI GenerativeModels](https://github.com/Project-MONAI/GenerativeModels) have been integrated into the core MONAI codebase. This integration ensures consistent maintenance and streamlined release of essential components for generative AI. In this version, all utilities, networks, diffusion schedulers, inferers, and engines have been migrated to the Core.

Additionally, several tutorials have been ported and are available in [`project-monai/tutorials`](https://github.com/Project-MONAI/tutorials/blob/main/generation). A short sketch of the migrated APIs follows.
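
As a hedged sketch of the namespace change (not code from this PR or the tutorials), the snippet below imports the ported components directly from `monai` and runs one illustrative noise-prediction training step. The old `generative.*` paths shown in the comments and the exact argument names reflect our understanding of the two packages and may differ slightly between versions.

```python
# Components that previously lived in the standalone GenerativeModels package
# (e.g. generative.networks.nets, generative.networks.schedulers, generative.inferers)
# are now imported directly from MONAI core.
import torch
from monai.networks.nets import DiffusionModelUNet
from monai.networks.schedulers import DDPMScheduler
from monai.inferers import DiffusionInferer

model = DiffusionModelUNet(
    spatial_dims=2,                 # 2D example for brevity; 3D is also supported
    in_channels=1,
    out_channels=1,
    channels=(64, 128, 256),
    attention_levels=(False, False, True),
    num_res_blocks=1,
    num_head_channels=64,           # must divide the attended channel count (256 here)
)
scheduler = DDPMScheduler(num_train_timesteps=1000)
inferer = DiffusionInferer(scheduler)

# One illustrative training step: predict the noise added at random timesteps.
images = torch.randn(2, 1, 64, 64)  # stand-in batch
noise = torch.randn_like(images)
timesteps = torch.randint(0, 1000, (2,)).long()
noise_pred = inferer(inputs=images, diffusion_model=model, noise=noise, timesteps=timesteps)
loss = torch.nn.functional.mse_loss(noise_pred, noise)
```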

## Geometric Data Support

MONAI now supports geometric data transformations as a key feature. As a starting point, the `ApplyTransformToPoints` transform has been added to facilitate matrix operations on point data, enabling flexible and efficient handling of geometric transformations. Alongside this, the framework now supports conversions between boxes and points, providing seamless interoperability within detection pipelines. These updates have been integrated into existing pipelines, such as the [detection tutorial](https://github.com/Project-MONAI/tutorials/blob/main/detection) and the [3D registration workflow](https://github.com/Project-MONAI/tutorials/blob/main/3d_registration/learn2reg_nlst_paired_lung_ct.ipynb), leveraging the latest APIs for improved functionality. A small usage sketch follows.
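
A minimal usage sketch, assuming the `ApplyTransformToPoints` array transform and a `(C, N, 3)` point layout as released in 1.4; the points and affine below are invented for illustration.

```python
# Apply a 4x4 affine to a set of 3D points with the new transform. The point
# layout and constructor arguments reflect our understanding of the 1.4 API
# and may need adjusting for your installed version.
import torch
from monai.transforms import ApplyTransformToPoints

points = torch.tensor([[[10.0, 20.0, 30.0],
                        [ 5.0,  5.0,  5.0]]])     # shape (1, 2, 3): one channel, two points

affine = torch.eye(4)
affine[:3, 3] = torch.tensor([1.0, 2.0, 3.0])     # pure translation for illustration

transform = ApplyTransformToPoints(affine=affine, invert_affine=False)
moved = transform(points)
print(moved[0, 0])  # approximately tensor([11., 22., 33.])
```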