SlicerRoboTMS is an open-source 3D Slicer extension that provides a unified interaction infrastructure for Robot-assisted Transcranial Magnetic Stimulation (Robo-TMS) research. By leveraging 3D Slicer's medical imaging and visualisation capabilities, the extension supports MRI-based neuronavigation and interfaces with robotic systems through standardised communication protocols and configurable system descriptions.
The extension is organised as three independent functional modules — Calibration, Registration, and Navigation — which together support a complete Robo-TMS workflow. SlicerRoboTMS is designed to support diverse hardware configurations and rapid prototyping, lowering the barrier to entry and facilitating reproducible and extensible research in Robo-TMS.
This extension is described in the following paper, accepted at EMBC 2026. An arXiv preprint is available for open access ahead of publication in IEEE Xplore. If you use SlicerRoboTMS in your research, please cite:
```bibtex
@inproceedings{bai2026slicerrobotms,
  title = {{SlicerRoboTMS}: An Open-Source {3D} {Slicer} Extension for Robot-Assisted Transcranial Magnetic Stimulation},
  author = {Bai, Wenzhi and Guo, Yituo and Basu, Bhaskar and Weightman, Andrew and Li, Zhenhong},
  booktitle = {2026 48th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
  year = {2026},
  pages = {1-6},
  publisher = {IEEE},
  eprint = {2604.25661},
  archivePrefix = {arXiv},
  doi = {10.48550/arXiv.2604.25661},
  url = {https://github.com/OpenRoboTMS/SlicerRoboTMS}
}
```

- Openness and Extensibility: Fully open-source with example implementations of three functional modules — hand-eye calibration, anatomical landmark-based MRI-to-patient registration, and real-time MRI-guided neuronavigation — supporting rapid prototyping, reproducible research, and community-driven development.
- Modularity and Compatibility: Follows the modular design principles of 3D Slicer, leveraging its established data structures and communication protocols for seamless integration with existing tools and broader research workflows.
- Hardware Agnosticism: Standardised communication interfaces and configuration-based hardware descriptions decouple image-guided workflows from hardware-specific implementations, enabling interfacing with a wide range of external devices.
- Real-time 3D Visualisation: Event-driven data exchange and configurable update rates (default 30 Hz) support real-time visualisation of robotic poses, target locations, and anatomical context, with robot geometry initialised from URDF.
SlicerRoboTMS operates within a three-layer architecture:
Application layer (SlicerRoboTMS): handles MRI-based neuronavigation, system monitoring, and user interaction within 3D Slicer.
Integration layer (ROS): encapsulates system-specific algorithms — tracking, calibration, registration, and robotic control — and communicates with the application layer via OpenIGTLink through the ROS-IGTL-Bridge.
Physical layer: tracking devices and robotic manipulators, accessed indirectly through the integration layer.
All communication between layers uses OpenIGTLink over TCP/IP (default port 18944). Three message types are used:
| Type | Description |
|---|---|
| TRANSFORM | Real-time spatial data: robot joint poses, tracking transforms, camera calibration |
| POLYDATA | Geometrical objects: registration fiducials, landmark point clouds |
| STATUS | System states and execution commands: readiness flags, calibration/registration completion |
Scene updates are driven by a Qt timer at 30 Hz, matching the refresh frequency of optical tracking systems.
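In practice the extension exchanges these messages through the SlicerOpenIGTLink connector, but the underlying wire format is simple enough to illustrate directly. The sketch below packs an OpenIGTLink v1 message header in pure Python; the device name `FR3_EndEffector` is a hypothetical example, not a name used by the extension.

```python
import struct

# OpenIGTLink v1 header layout (58 bytes, big-endian):
# version (uint16) | type name (12 bytes) | device name (20 bytes)
# | timestamp (uint64) | body size (uint64) | CRC-64 (uint64)
IGTL_HEADER_FORMAT = ">H12s20sQQQ"

def build_igtl_header(msg_type, device_name, body_size, timestamp=0, crc=0):
    """Pack an OpenIGTLink message header for a body of the given size."""
    return struct.pack(
        IGTL_HEADER_FORMAT,
        1,                                            # protocol version
        msg_type.encode("ascii").ljust(12, b"\x00"),  # e.g. TRANSFORM
        device_name.encode("ascii").ljust(20, b"\x00"),
        timestamp,
        body_size,
        crc,
    )

# A TRANSFORM body carries a 4x3 matrix of 32-bit floats (48 bytes).
header = build_igtl_header("TRANSFORM", "FR3_EndEffector", body_size=48)
print(len(header))  # 58
```

The fixed 58-byte header is what lets the bridge route each message by type and device name before parsing the body.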
Performs hand-eye calibration between the Intel RealSense D455 tracking camera and the Franka Research 3 robot base. The calibration computes the rigid transformation that aligns the camera coordinate frame with the robot base frame, using AprilTag fiducial markers attached to the robot end-effector.
Workflow:
- Prerequisites verified: FR3 ready, VINS ready, end-effector marker detected
- User clicks Start Calibration — command sent to ROS via a `start_calibration` STATUS message
- ROS calibrator node computes the hand-eye transform from multiple viewpoints
- On completion (`calib_complete = true`), the camera model becomes visible in the 3D scene
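A hand-eye result can be sanity-checked by closing the transform chain: the marker pose seen through the camera should coincide with the marker pose reached through the robot's kinematics. The sketch below is a minimal illustration with NumPy; the transform names and values are hypothetical, not taken from the extension.

```python
import numpy as np

def handeye_residual(base_T_ee, ee_T_marker, base_T_cam, cam_T_marker):
    """Translational residual of the hand-eye closure:
    base_T_cam @ cam_T_marker should equal base_T_ee @ ee_T_marker."""
    lhs = base_T_cam @ cam_T_marker
    rhs = base_T_ee @ ee_T_marker
    return np.linalg.norm(lhs[:3, 3] - rhs[:3, 3])

def translate(x, y, z):
    """4x4 homogeneous translation (illustrative helper)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

base_T_ee = translate(0.5, 0.0, 0.3)    # from robot forward kinematics
ee_T_marker = translate(0.0, 0.0, 0.1)  # known marker mount offset
base_T_cam = translate(1.0, 0.2, 0.8)   # candidate hand-eye result
# Simulate a consistent AprilTag observation for this camera placement:
cam_T_marker = np.linalg.inv(base_T_cam) @ base_T_ee @ ee_T_marker
residual = handeye_residual(base_T_ee, ee_T_marker, base_T_cam, cam_T_marker)
print(residual)  # ~0.0 for a consistent calibration
```

Averaging this residual over many end-effector poses gives a rough accuracy figure for the calibration.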
Aligns the MRI image coordinate space with the physical patient/phantom space using five anatomical landmarks. The predefined landmark positions in the head model coordinate frame are sent to ROS as a POLYDATA point cloud, and the physical landmark positions are sampled interactively using a registration stylus.
Workflow:
- Prerequisites verified: head marker ready, registration stylus ready
- User samples each of the five landmarks in turn
- User clicks Start Registration — landmark reference point cloud sent via OpenIGTLink
- ROS register node computes MRI-to-physical transform
- On completion (`regist_complete = true`), the head marker and registration stylus models become visible
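The core computation in landmark-based registration is a least-squares rigid fit between the five MRI landmarks and their physically sampled counterparts. The standard approach is the Kabsch/SVD method, sketched below; the landmark coordinates are made-up values for illustration, and the actual ROS register node may use a different implementation.

```python
import numpy as np

def rigid_fit(mri_pts, phys_pts):
    """Least-squares rigid transform (Kabsch) mapping MRI landmarks
    onto physically sampled landmarks. Both arrays are N x 3."""
    mri_c, phys_c = mri_pts.mean(axis=0), phys_pts.mean(axis=0)
    H = (mri_pts - mri_c).T @ (phys_pts - phys_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = phys_c - R @ mri_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Five hypothetical landmark positions in the MRI frame (mm):
mri = np.array([[0, 0, 0], [80, 0, 0], [-80, 0, 0],
                [0, 90, 30], [0, -70, 20]], dtype=float)
# Simulate physical samples: rotate 10 deg about z, then translate.
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -3.0, 12.0])
phys = mri @ R_true.T + t_true

T = rigid_fit(mri, phys)
fre = np.linalg.norm(mri @ T[:3, :3].T + T[:3, 3] - phys, axis=1).max()
print(fre)  # ~0 for noise-free landmarks
```

With real stylus samples the fiducial registration error (FRE) is nonzero and is a useful first check on registration quality.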
Provides real-time MRI-guided navigation for TMS procedures. The user selects a target plane on the MRI volume in 3D Slicer; the plane origin and normal are used to construct a target pose matrix that is sent to the robot. The module also maintains real-time MRI slice views that track head movement.
Workflow:
- Prerequisites verified: all prior steps complete (FR3, VINS, marker, calibration, registration)
- User creates a markup plane on the MRI volume to define the coil target
- Send Target — target pose transform sent to ROS tracker node
- Start Navigation — robot begins moving; MRI slices track head movement in real time
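Constructing a target pose from the plane origin and normal amounts to building a 4x4 matrix whose z-axis is the normal, completed to a right-handed orthonormal frame. The sketch below shows one common construction; how the extension actually orients the coil's in-plane (x/y) axes, e.g. the handle direction, is application-specific, so treat this as an assumption.

```python
import numpy as np

def pose_from_plane(origin, normal):
    """Build a 4x4 target pose whose z-axis is the plane normal.
    The x/y axes are chosen arbitrarily but form a right-handed frame."""
    z = np.asarray(normal, dtype=float)
    z /= np.linalg.norm(z)
    ref = np.array([1.0, 0.0, 0.0])
    if abs(z @ ref) > 0.9:            # reference nearly parallel to normal
        ref = np.array([0.0, 1.0, 0.0])
    x = np.cross(ref, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                # completes the right-handed triad
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, origin
    return T

# Hypothetical plane picked on the scalp in the MRI frame:
T = pose_from_plane([10.0, 20.0, 30.0], [0.0, 0.0, 1.0])
print(T[:3, 2])  # the approach axis equals the plane normal
```

This pose, expressed as a TRANSFORM message, is what the robot can servo to while the registration transform keeps it anchored to the moving head.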
```shell
git clone https://github.com/OpenRoboTMS/SlicerRoboTMS.git
cd SlicerRoboTMS
git submodule update --init --recursive
```

See INSTALLATION.md for complete step-by-step instructions.
Quick summary:
- Install Ubuntu 20.04 with real-time kernel (for robot control)
- Install 3D Slicer 5.8.1 or later with the SlicerOpenIGTLink extension
- Set up ROS Noetic and build the ROS-IGTL-Bridge
- Clone SlicerRoboTMS and register it with 3D Slicer's module paths
- Launch the ROS-IGTL-Bridge on the integration machine: `roslaunch ros_igtl_bridge bridge.launch`
- Navigate to the SlicerRoboTMS category in the module selector
- Work through Calibration → Registration → Navigation in sequence
| Dependency | Version | Notes |
|---|---|---|
| Ubuntu | 20.04 LTS | Real-time kernel patches recommended for robot control |
| 3D Slicer | 5.8.1+ | With SlicerOpenIGTLink extension |
| ROS | Noetic | Required for integration layer |
| Python | 3.8+ | Bundled with Slicer |
| Component | Model used in paper |
|---|---|
| Tracking camera | Intel RealSense D455 (1280×800 px, 30 Hz) |
| Fiducial markers | AprilTags (customisable) |
| Robot manipulator | Franka Research 3 |
| Host computer | Intel Core i9, 64 GB RAM |
The extension is designed for hardware agnosticism — alternative cameras, robots, and tracking systems can be integrated by adapting the ROS integration layer without modifying the SlicerRoboTMS extension itself.
All configurable parameters are in SlicerRoboTMS/Config.py:
```python
IGTL_CONFIG = {
    "host": "127.0.0.1",  # IP of the ROS-IGTL-Bridge machine
    "port": 18944,        # OpenIGTLink port
    "timeout": 5000,      # Connection timeout (ms)
}

UPDATE_RATES = {
    "update_from_ros": 30,  # State polling frequency (Hz)
}
```

Model files, MRI data paths, and visualisation colours are also defined there.
```
SlicerRoboTMS/
├── LICENSE
├── README.md
├── INSTALLATION.md
├── CONTRIBUTING.md
│
├── SlicerRoboTMS/               # Main extension modules
│   ├── Calibration/             # Hand-eye calibration
│   │   ├── Calibration.py       # Module + CalibrationTest
│   │   ├── Resources/           # UI file and icon
│   │   └── Testing/Python/      # CMake test registration
│   ├── Navigation/              # MRI-guided navigation
│   │   ├── Navigation.py        # Module + NavigationTest
│   │   ├── Resources/
│   │   └── Testing/Python/
│   ├── Registration/            # Landmark-based registration
│   │   ├── Registration.py      # Module + RegistrationTest
│   │   ├── Resources/
│   │   └── Testing/Python/
│   ├── utils/                   # Shared utilities
│   │   ├── BaseLogic.py         # Shared IGTL/scene logic base class
│   │   ├── BaseWidget.py        # Shared UI/timer base class
│   │   ├── LoggingUtils.py
│   │   ├── PathSetup.py
│   │   └── URDFParser.py        # URDF → Slicer scene loader
│   ├── Config.py                # Central configuration
│   └── CMakeLists.txt
│
├── models/                      # STL mesh files (robot links, coil, head, etc.)
├── urdfs/                       # Robot URDF description
├── mri_data/                    # MRI volume (NIfTI format)
├── logs/                        # Runtime log files (gitignored)
│
├── .github/workflows/           # CI: linting and tests
├── .pre-commit-config.yaml
├── .flake8
├── .pylintrc
├── pyproject.toml
│
└── third_party/                 # External dependencies (submodules)
    ├── OpenIGTLink/
    ├── ROS-IGTL-Bridge/
    ├── Slicer/
    └── franka_description/
```
- Ensure the bridge and Slicer have opposite roles (server vs. client)
- Check that port 18944 is not blocked by a firewall
- Confirm the bridge is running: `roslaunch ros_igtl_bridge bridge.launch`
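A quick way to check the second point from any machine is a plain TCP probe of the OpenIGTLink port; this is a generic sketch, not a utility shipped with the extension.

```python
import socket

def bridge_reachable(host="127.0.0.1", port=18944, timeout=2.0):
    """Return True if a TCP connection to the OpenIGTLink port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(bridge_reachable())  # True only if the bridge is listening on 18944
```

If this returns False while the bridge is running, check the firewall and the `host` value configured in `Config.py`.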
- Add the extension path in Edit → Application Settings → Modules
- Check Slicer's Python console for import errors: View → Python Console
- Verify `urdfs/fr3.urdf` and all `models/fr3/` STL files are present
- Confirm ROS is publishing joint transforms at the expected node names
See INSTALLATION.md for a full troubleshooting guide.
We welcome contributions. Please see CONTRIBUTING.md for the code style guide (3D Slicer Python conventions), testing requirements, and pull request workflow.
This project is licensed under the MIT License — see LICENSE for details.
- Found a bug? Report an issue
- Wenzhi Bai — wenzhi.bai@manchester.ac.uk (Department of Electrical and Electronic Engineering, University of Manchester)
This project builds upon:
- 3D Slicer — open-source medical imaging platform
- OpenIGTLink — image-guided therapy communication protocol
- ROS-IGTL-Bridge — bidirectional ROS–OpenIGTLink bridge
- Franka Description — Franka Research 3 robot description files