Implementation of the paper (under review):
CAMP: Classify Anything Model in Pathology
Anh Tien Nguyen, Keunho Byeon, Kyungeun Kim, Boram Song, Seoung Wan Chae, and Jin Tae Kwak
There exist numerous diagnostic tasks in pathology. Conventional computational pathology formulates and tackles them as independent and individual image classification problems, thereby resulting in computational inefficiency and high costs. To address these challenges, we propose a generic, unified, and universal framework, called a continuous and adaptive learning model in pathology (CAMP), for pathology image classification. CAMP is a generative, efficient, and adaptive classification model that can continuously adapt to any classification task by leveraging pathology-specific prior knowledge and learning task-specific knowledge with minimal computational cost and without forgetting the knowledge from the existing tasks. We evaluated CAMP on 22 datasets, including 1,171,526 patches and 11,811 pathology slides, across 17 classification tasks. CAMP achieves state-of-the-art classification performance on a wide range of datasets and tasks at both the patch and slide levels, and reduces computation time by up to 94% and storage memory by up to 85% compared to conventional classification models. Our results demonstrate that CAMP can offer a fundamental transformation in pathology image classification, paving the way for fully digitized and computerized pathology practice.
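CAMP adapts a single pretrained backbone to each new task through low-rank adapters (LoRA), controlled by the `lora_r` and `lora_alpha` training arguments described below. The following is a minimal illustrative sketch of this mechanism using the Hugging Face `peft` library, not the repository's actual code; the GIT-B checkpoint name and the `target_modules` list are assumptions:

```python
# Illustrative LoRA-adaptation sketch (not the authors' implementation).
# Assumes the Hugging Face `transformers` and `peft` libraries and the
# public GIT-B checkpoint "microsoft/git-base"; the target module names
# ("query", "value") are an assumption about GIT's attention layers.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

backbone = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

lora_config = LoraConfig(
    r=6,                                # LoRA rank (cf. --lora_r below)
    lora_alpha=12,                      # LoRA scaling (cf. --lora_alpha below)
    target_modules=["query", "value"],  # assumed attention projections
    lora_dropout=0.1,
)
model = get_peft_model(backbone, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trained
```

Because only the small adapter matrices are trained and stored per task while the backbone stays frozen, adding a new task costs a fraction of the compute and storage of training a full model, which is the source of the savings reported above.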
Installation:

git clone https://github.com/QuIIL/CAMP
cd CAMP
conda create --name CAMP python
conda activate CAMP
pip install -r requirements.txt
Datasets:
- Colon-1 and Colon-2: link
- UHU: link
- UBC: link
- AGGC: link
- Gastric: link
- K19 and K16: link
- PANDA: link
- WSSS4LUAD: link
- Kidney: link
- Liver: link
- Bladder: link
- BACH: link
- PCam: link
- HunCRC_P: link
- HunCRC_W: link
- BRACS: link
- DHMC: link
- UniToPatho: link
- CAMELYON16: link
Pretrained models:
- ConvNeXt-B: link
- RegNet: link
- ResNeXt50: link
- SwinV2-B: link
- ViT-B: link
- PLIP: link
- CTransPath: link
- UNI: link
- Phikon: link
- GPC: link
- GIT-B: link
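For reference, a backbone such as ConvNeXt-B can be loaded as a pretrained patch encoder via `timm`. This is an illustrative sketch only; the `timm` model identifier is an assumption, and the repository may load the linked checkpoints differently:

```python
# Illustrative only: loading a pretrained backbone as a patch feature
# extractor via `timm`. The model name is timm's public identifier and
# an assumption; the repository may load its checkpoints differently.
import timm
import torch

encoder = timm.create_model("convnext_base", pretrained=True, num_classes=0)
encoder.eval()

with torch.no_grad():
    patch = torch.randn(1, 3, 224, 224)  # one dummy RGB pathology patch
    features = encoder(patch)            # (1, feature_dim) pooled embedding
print(features.shape)
```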
Training is mainly based on the file `train.py`. The important arguments for the training setup are `dataset` (the dataset to train on), `lora_r` (the rank of LoRA), `lora_alpha` (the alpha of LoRA), and `out_dir` (the directory in which to save the training results). Please refer to `train.py` for the default values of the remaining arguments.
Sample command for training on colon-1:

python train.py \
    --dataset colon-1 \
    --device 0 \
    --lora_r 6 \
    --lora_alpha 12 \
    --out_dir <train_result_saving_dir>
Testing is mainly based on the file `test.py`. The important arguments for the testing setup are `dataset` (the dataset to test on), `model_pth` (the path to a trained model checkpoint), and `out_dir` (the directory in which to save the testing results). Please refer to `test.py` for the default values of the remaining arguments.
Sample command for testing on colon-1:

python test.py \
    --dataset colon-1 \
    --device 0 \
    --model_pth <ckpt_path> \
    --out_dir <test_result_saving_dir>
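Because CAMP is a generative classification model, inference amounts to generating a short text label for each image. The sketch below shows what this looks like with a plain GIT-B captioning model via the documented `transformers` API; it is illustrative only and does not reproduce `test.py`'s actual logic (in CAMP, the backbone would additionally carry the task-specific LoRA adapters trained above):

```python
# Illustrative generative-classification sketch (not the repository's test.py).
# Uses the public GIT-B checkpoint; the dummy image stands in for a real
# pathology patch, and the decoded text stands in for a predicted class label.
import numpy as np
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

# Dummy 224x224 RGB patch; replace with a real pathology patch.
patch = Image.fromarray(np.uint8(np.random.rand(224, 224, 3) * 255))

pixel_values = processor(images=patch, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=20)
# The decoded text is the generated label for the input image.
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```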