
AtlasPatch v1.0.0 Release Notes

We are excited to share the first public release of AtlasPatch!

What is AtlasPatch?

AtlasPatch is a whole-slide image (WSI) preprocessing tool built for computational pathology workflows. Existing tools tend to be either too slow (patch-level AI detectors) or too inaccurate (thresholding heuristics), so we built something that is both fast and accurate.

The core idea: run tissue segmentation on low-resolution thumbnails using a fine-tuned SAM2 model, then scale the resulting coordinates back up to full resolution. This gives you the accuracy of deep learning without the computational overhead of processing every patch.
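To make the coordinate-scaling step concrete, here is a minimal sketch of the idea (not AtlasPatch's actual implementation; the function and variable names are illustrative). Patch origins found on the thumbnail are multiplied by the per-axis ratio between the level-0 and thumbnail dimensions:

import numpy as np
import openslide

def thumbnail_coords_to_level0(slide_path, thumb_coords, thumb_size=(1024, 1024)):
    # thumb_coords: (N, 2) array of (x, y) patch origins detected on the thumbnail
    slide = openslide.OpenSlide(slide_path)
    full_w, full_h = slide.dimensions          # level-0 size in pixels
    thumb = slide.get_thumbnail(thumb_size)    # PIL image, aspect ratio preserved
    scale_x = full_w / thumb.width             # thumbnail pixel -> level-0 pixel
    scale_y = full_h / thumb.height
    coords = np.asarray(thumb_coords, dtype=np.float64)
    coords[:, 0] *= scale_x
    coords[:, 1] *= scale_y
    return coords.round().astype(np.int64)

Full-resolution patches are then read only at those coordinates, so the expensive decoding work is limited to tissue regions.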

Highlights

Fast tissue segmentation that actually works

We fine-tuned SAM2 on ~35,000 WSI thumbnails spanning different organs, scanners, and institutions. The result is a tissue detector that handles edge cases well (fragmented tissue, pen marks, bubbles, blurry regions) without the usual heuristic failures.

Processing 100 slides takes ~19 seconds on our benchmarks. CLAM takes ~42s, Trident-Hest takes ~328s for the same set.

66 feature extractors out of the box

Pretty much every pathology encoder you'd want is already integrated:

  • UNI v1/v2, Phikon v1/v2, Virchow v1/v2
  • GigaPath, CONCH, MUSK, Midnight, PathOrchestra
  • DINOv2, DINOv3 (including the 7B models)
  • All the CLIP variants (PLIP, Quilt, BiomedCLIP, etc.)
  • Standard backbones (ResNet, ViT, ConvNeXt)

If your encoder isn't in the list, the plugin system lets you add it without touching our code.
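To give a feel for what a custom extractor involves conceptually, here is a hedged sketch that wraps an arbitrary timm backbone as a patches-to-embeddings callable. The registration hook itself is defined by the plugin documentation in the repo; the class and method names below (MyEncoder, embed) are hypothetical, not AtlasPatch's API:

import timm
import torch

class MyEncoder:
    # Illustrative wrapper: anything that maps a batch of patch images to a
    # (B, D) embedding tensor can back a custom feature-extractor plugin.
    def __init__(self, model_name="vit_base_patch16_224", device="cuda"):
        self.model = timm.create_model(model_name, pretrained=True, num_classes=0)
        self.model.eval().to(device)
        self.device = device
        cfg = timm.data.resolve_data_config({}, model=self.model)
        self.transform = timm.data.create_transform(**cfg)

    @torch.no_grad()
    def embed(self, patches):
        # patches: list of PIL patch images cropped from the slide
        batch = torch.stack([self.transform(p) for p in patches]).to(self.device)
        return self.model(batch).cpu()   # (B, D) feature tensor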

Designed for scale

  • Per-slide lock files so you can run 50 SLURM jobs at once without conflicts (see the sketch after this list)
  • Configurable batch sizes and worker counts
  • Fast mode skips per-patch filtering for maximum throughput
  • HDF5 output keeps coordinates and features compact
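The per-slide locking mentioned above can be illustrated with a generic pattern (a sketch of the general technique, not AtlasPatch's exact implementation): each worker atomically creates a <slide>.lock file before processing and skips slides another job has already claimed.

import os
from pathlib import Path

def try_claim(slide_path, lock_dir):
    # O_CREAT | O_EXCL makes file creation atomic: it fails if the lock already
    # exists, so concurrent SLURM jobs never process the same slide twice.
    lock = Path(lock_dir) / (Path(slide_path).stem + ".lock")
    try:
        fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True   # this job owns the slide
    except FileExistsError:
        return False  # another job claimed it first

Each array task walks the slide list, claims what it can, and moves on; adding more jobs simply shortens the wall-clock time.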

Installation

# Install AtlasPatch
pip install atlas-patch

# Install SAM2 (required for tissue segmentation)
pip install git+https://github.com/facebookresearch/sam2.git

You'll also need OpenSlide installed on your system. If you're using conda:

conda install -c conda-forge openslide

Optional encoders (install only if needed):

pip install git+https://github.com/Mahmoodlab/CONCH.git  # for CONCH
pip install git+https://github.com/lilab-stanford/MUSK.git  # for MUSK

Quick start

Process a directory of slides with UNI embeddings:

atlaspatch process /path/to/slides \
  --output ./output \
  --patch-size 256 \
  --target-mag 20 \
  --feature-extractors uni_v2 \
  --device cuda

Just want coordinates without embeddings? Use segment-and-get-coords instead of process.
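Once a run finishes, the HDF5 outputs can be inspected with h5py. The snippet below is a hedged sketch: the file name and dataset keys ("coords", "features") are assumptions, so check the README for the exact schema AtlasPatch writes.

import h5py

with h5py.File("output/slide_001.h5", "r") as f:  # hypothetical file name
    coords = f["coords"][:]     # (N, 2) patch origins at level 0 (assumed key)
    feats = f["features"][:]    # (N, D) encoder embeddings (assumed key)
    print(coords.shape, feats.shape)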

What's next

We are planning to add slide-level encoders (TITAN, PRISM, GigaPath slide encoder, Madeleine) in a future release.

Issues?

If something breaks or you have questions, open an issue on GitHub. We've added an FAQ section to the README covering common problems (OOM errors, CUDA memory, gated model access, etc.).


Thanks for trying AtlasPatch. We hope it saves you some time!