MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders
This repository contains the official PyTorch implementation for MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders (MIDL 2025; Best Oral Paper Award).
MedVAE is a family of six large-scale, generalizable 2D and 3D variational autoencoders (VAEs) designed for medical imaging, trained on over one million medical images across multiple anatomical regions and modalities. MedVAE autoencoders encode medical images into downsized latent representations and decode those latents back into high-resolution images. Across diverse tasks drawn from 20 medical image datasets, we demonstrate that training downstream models on MedVAE latent representations instead of high-resolution images yields substantial efficiency gains (up to a 70x improvement in throughput) while preserving clinically relevant features.
To install MedVAE, simply run:

```bash
pip install medvae
```

For an editable installation, use the following commands to clone and install this repository:
```bash
git clone https://github.com/StanfordMIMI/MedVAE.git
cd MedVAE
pip install -e .[dev]
pre-commit install
pre-commit
```

To encode a medical image into a downsized latent representation:

```python
import torch
from medvae import MVAE

# Path to a sample 2D mammogram included in the repository.
fpath = "documentation/data/mmg_data/isJV8hQ2hhJsvEP5rdQNiy.png"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a 2D MedVAE model for X-ray-based images (model names encode the
# downsizing factor and latent channel count).
model = MVAE(model_name="medvae_4_3_2d", modality="xray").to(device)

# Apply the model's built-in preprocessing to the image file.
img = model.apply_transform(fpath).to(device)

# Inference only: freeze the weights and switch to eval mode.
model.requires_grad_(False)
model.eval()

with torch.no_grad():
    latent = model(img)
```
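In a typical downstream workflow, you would persist the latents to disk and train on them in place of the full-resolution images. A minimal sketch (the output filename is illustrative; `torch.save` is standard PyTorch, not a MedVAE-specific API):

```python
# Illustrative only: inspect the latent and persist it for downstream training.
print(latent.shape)  # downsized relative to the input image
torch.save(latent.cpu(), "latent.pt")  # hypothetical output path
```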
medvae_inference -i INPUT_FOLDER -o OUTPUT_FOLDER -model_name MED_VAE_MODEL -modality MODALITYFor more information, please check our inference documentation and demo.
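For example, to compress the sample mammograms shipped with the repository (the output folder name is a placeholder; the model and modality mirror the Python example above):

```bash
medvae_inference -i documentation/data/mmg_data -o mmg_latents -model_name medvae_4_3_2d -modality xray
```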
Easily finetune MedVAE on your own dataset! Follow the instructions below (requires Python 3.9 and cloning the repository).
Run the following commands depending on your finetuning scenario:
Stage 1 (2D) finetuning:

```bash
medvae_finetune experiment=medvae_4x_1c_2d_finetuning
```

Stage 2 (2D) finetuning:

```bash
medvae_finetune_s2 experiment=medvae_4x_1c_2d_s2_finetuning
```

Stage 2 (3D) finetuning:

```bash
medvae_finetune experiment=medvae_4x_1c_3d_finetuning
```

This setup supports multi-GPU training and includes integration with Weights & Biases for experiment tracking.
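If you use the Weights & Biases integration, authenticate once before launching a run (standard W&B setup, not specific to MedVAE):

```bash
wandb login
```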
For detailed finetuning guidelines, see the Finetuning Documentation.
To create classification models using downsized latent representations, refer to the Classification Documentation.
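As a rough illustration of what a latent-space classifier can look like (the architecture, latent shape, and channel count below are assumptions for the sketch, not the pipeline from the Classification Documentation):

```python
import torch
import torch.nn as nn

# Assumed latent shape for this sketch: 3 channels at 128x128 (a 4x-downsized
# 512x512 image); adjust to the latents your MedVAE model actually produces.
class LatentClassifier(nn.Module):
    def __init__(self, in_channels: int = 3, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling over the latent grid
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(z).flatten(1)
        return self.head(feats)

# Smoke test on a random "latent" with the assumed shape.
logits = LatentClassifier()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # (1, 2)
```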
If you find this repository useful for your work, please cite the following paper:
```bibtex
@misc{varma2025medvaeefficientautomatedinterpretation,
      title={MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders},
      author={Maya Varma and Ashwin Kumar and Rogier van der Sluijs and Sophie Ostmeier and Louis Blankemeier and Pierre Chambon and Christian Bluethgen and Jip Prince and Curtis Langlotz and Akshay Chaudhari},
      year={2025},
      eprint={2502.14753},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2502.14753},
}
```

This repository is powered by Hydra and HuggingFace Accelerate. Our implementation of MedVAE is inspired by prior work on diffusion models from CompVis and Stability AI.
