This repository contains the official implementation of M3-Net, a multi-scale nuclei segmentation model that uses contextual patches and attention mechanisms for breast cancer histopathology images.
The code corresponds to the method described in:
*M3-Net: A Multi-Scale Nuclei Segmentation Model for Breast Cancer Histopathology Using Contextual Patches and Attention Mechanism*, presented at IEEE ISBI 2025.
M3-Net is designed for robust nuclei segmentation by:
- Extracting multi-scale contextual patches (large, medium, small) from each image.
- Using a shared VGG16-based encoder for all scales.
- Fusing features across scales with channel attention.
- Decoding features with a U-Net–style decoder to produce segmentation masks.
- Optionally applying watershed-based instance separation on the predicted masks.
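To make the fusion step concrete, the sketch below shows one common way to implement channel attention over concatenated multi-scale features in Keras, in a squeeze-and-excitation style. This is an illustrative assumption, not the exact block from the paper; `channel_attention` and `fuse_scales` are hypothetical names, and the reduction ratio is a placeholder.

```python
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, reduction=8):
    """Squeeze-and-excitation-style channel attention (illustrative sketch;
    the paper's exact attention block may differ)."""
    channels = x.shape[-1]
    w = layers.GlobalAveragePooling2D()(x)                # squeeze: (B, C)
    w = layers.Dense(channels // reduction, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)   # excite: per-channel weights
    w = layers.Reshape((1, 1, channels))(w)
    return layers.Multiply()([x, w])                      # reweight feature channels

def fuse_scales(feats_large, feats_medium, feats_small):
    """Concatenate same-resolution encoder features from the three patch
    scales, then reweight the combined channels with attention."""
    fused = layers.Concatenate()([feats_large, feats_medium, feats_small])
    return channel_attention(fused)
```

The decoder would then upsample the fused, attention-weighted features as in a standard U-Net.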
The provided implementation is in a single Jupyter notebook and is configured to run easily on Kaggle (multi-GPU via tf.distribute.MirroredStrategy) or any local machine with GPU support.
Typical structure:
```
├── m3-net-isbi2025.ipynb   # Main notebook: data loading, model, training, evaluation
└── README.md               # This file
```
All helper functions (patch extraction, resizing, model definition, post-processing, etc.) are defined inside the notebook.
By default, the notebook is written to work with the MoNuSAC nuclei segmentation dataset on Kaggle.
Expected structure:
```
MoNuSac/
├── images/         # H&E image tiles
└── binary_masks/   # Corresponding binary nuclei masks (same names as images)
```
In the notebook you will see:
```python
# Load and preprocess training data
train_folder = "/kaggle/input/monusac-public/MoNuSac"
train_images, train_masks = load_and_preprocess_data(
    train_folder,
    "images",
    "binary_masks"
)
```

To use your own dataset:
- Organize images and masks into two folders (e.g., `images`, `masks`).
- Make sure image–mask filenames correspond.
- Change `train_folder` and the two subfolder names (`"images"`, `"binary_masks"`) to match your structure.
- Ensure masks are binary (nuclei vs. background) or adapt the preprocessing for multiclass masks.
Tested with:

- Python 3.8+
- TensorFlow 2.x (with Keras)
- CUDA-enabled GPU (optional but recommended)

Libraries imported in the notebook: `numpy`, `opencv-python` (`cv2`), `matplotlib`, `scikit-learn`, `scikit-image`, `seaborn`.
Example installation (conda):
```
conda create -n m3net python=3.9
conda activate m3net
pip install tensorflow
pip install numpy opencv-python matplotlib scikit-learn scikit-image seaborn
```

On Kaggle, most of these packages are already available.
1. Create a new Kaggle Notebook.
2. Upload `m3-net-isbi2025.ipynb` or copy its contents.
3. Add the MoNuSAC dataset (or your own) as a dataset input.
4. Make sure the dataset path in the cell
   `train_folder = "/kaggle/input/monusac-public/MoNuSac"`
   matches the mounted dataset path.
5. Enable GPU (or multi-GPU if available).
6. Run all cells in order:
   - Data loading and patch generation
   - Multi-scale patch extraction
   - Model definition (VGG16 backbone + attention + decoder)
   - Training (`model.fit(...)`)
   - Evaluation, qualitative results, and instance post-processing
During training, the best model is saved using `ModelCheckpoint` to:

`filepath = "/kaggle/working/Binary_model_ER_IHC.keras"`

You can change this path if needed.
1. Install dependencies and clone/download this repository.
2. Place your dataset in a local folder and update `train_folder` accordingly.
3. Start Jupyter: `jupyter notebook`
4. Open `m3-net-isbi2025.ipynb` and run the cells sequentially.

If you do not have multiple GPUs, `tf.distribute.MirroredStrategy` will still work with a single GPU or fall back to CPU.
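The distribution setup follows the standard TensorFlow pattern: create the strategy, then build and compile the model inside its scope. The model below is a placeholder, not M3-Net itself; only the scoping pattern is the point.

```python
import tensorflow as tf

# MirroredStrategy uses all visible GPUs; with a single GPU (or none) it
# still runs, using whatever device is available.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Build and compile inside the scope so variables are mirrored
    # across replicas. Placeholder model, not the M3-Net architecture.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="binary_crossentropy")
```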
The notebook will:
- Train M3-Net and report training/validation:
  - Loss
  - Dice, IoU, or related segmentation metrics
- Save the best model as a `.keras` file.
- Plot examples of:
  - Input histopathology patches (multi-scale)
  - Ground-truth masks
  - Predicted masks
- Optionally perform watershed-based instance segmentation to separate touching nuclei and visualize contours.
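For reference, the sketch below shows a typical distance-transform watershed for splitting a binary nuclei mask into instances, using scikit-image. It is an illustrative example of the general technique; the notebook's exact marker extraction and parameters (e.g., `min_distance`) may differ.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_instances(binary_mask, min_distance=5):
    """Split a binary nuclei mask into labeled instances.

    Illustrative post-processing sketch: local maxima of the distance
    transform serve as watershed markers, one per nucleus.
    """
    distance = ndimage.distance_transform_edt(binary_mask)
    coords = peak_local_max(distance, min_distance=min_distance,
                            labels=binary_mask.astype(int))
    markers = np.zeros_like(binary_mask, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Flood from the markers over the inverted distance map, restricted
    # to the foreground, so touching nuclei get distinct labels.
    return watershed(-distance, markers, mask=binary_mask)
```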
If you use this code, model, or any part of the workflow in your research, please cite:
```bibtex
@inproceedings{sufyan2025m3,
  title={M3-Net: A Multi-Scale Nuclei Segmentation Model for Breast Cancer Histopathology Using Contextual Patches and Attention Mechanism},
  author={Sufyan, Arbab and Fauzi, Mohammad Faizal Ahmad and Kuan, Wong Lai},
  booktitle={2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI)},
  pages={1--4},
  year={2025},
  organization={IEEE}
}
```