[Project Page] [Paper]
- Ubuntu (tested with Ubuntu 22.04 LTS)
- Miniconda (tested with 23.5 but all versions should work)
- Python 3.9
If Miniconda is not installed, run the following for a quick installation. Note: the script assumes you use bash.
# installing miniconda
mkdir -p ~/.miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-py39_23.5.2-0-Linux-x86_64.sh -O ~/.miniconda3/miniconda.sh
bash ~/.miniconda3/miniconda.sh -b -u -p ~/.miniconda3
rm -rf ~/.miniconda3/miniconda.sh
~/.miniconda3/bin/conda init bash
source ~/.bashrc

- Create a conda environment:
conda create -n cathsim python=3.9
conda activate cathsim

- Clone the repository and install CathSim:
git clone git@github.com:robotvision-ai/cathsim
cd cathsim
pip install -e .

A quick way to run the environment with Gym is to create it with the make_dm_env function and then wrap the result in DMEnvToGymWrapper, which yields a gym.Env:
from cathsim.utils import make_dm_env
from cathsim.wrappers import DMEnvToGymWrapper
env = make_dm_env(
    dense_reward=True,
    success_reward=10.0,
    delta=0.004,
    use_pixels=False,
    use_segment=False,
    image_size=64,
    phantom="phantom3",
    target="bca",
)
env = DMEnvToGymWrapper(env)

obs = env.reset()
for _ in range(1):
    action = env.action_space.sample()  # sample a random action
    obs, reward, done, info = env.step(action)
    for obs_key in obs:
        print(obs_key, obs[obs_key].shape)
    print(reward)
    print(done)
    for info_key in info:
        print(info_key, info[info_key])

For the full list of environment dependencies at the time of writing, see the accompanying environment.yml.
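If you prefer to recreate the pinned dependencies directly, a conda environment can be built from that file. A minimal sketch, assuming environment.yml sits at the repository root and defines the environment name (adjust the activate command if it differs):

# build an environment from the pinned dependency list (assumes environment.yml is at the repo root)
conda env create -f environment.yml
# activate it; the name is taken from environment.yml and may differ from "cathsim"
conda activate cathsim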
To train the available models, run:

bash ./scripts/train.sh

The script creates a results directory in the current working directory and saves data in the <trial-name>/<phantom>/<target>/<model> format. Each model has three subfolders: eval, models and logs. The eval folder contains the trajectory data resulting from evaluating the policy, models contains the PyTorch models, and logs contains the TensorBoard logs. A sketch of the resulting layout is shown below.
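The trial, phantom, target and model names depend on how train.sh is configured; the layout below uses hypothetical names (trial-1, sac) purely to illustrate the structure described above, and the TensorBoard command assumes TensorBoard is installed in the environment:

results/
└── trial-1/                 # <trial-name> (hypothetical)
    └── phantom3/            # <phantom>
        └── bca/             # <target>
            └── sac/         # <model> (hypothetical)
                ├── eval/    # trajectory data from evaluating the policy
                ├── models/  # saved PyTorch models
                └── logs/    # TensorBoard logs

# monitor training progress (assumes TensorBoard is installed)
tensorboard --logdir results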
For a quick visualisation of the environment, run:

run_env

You will see the guidewire and the aorta, along with the two sites that represent the targets. You can interact with the environment using the keyboard arrow keys.
You can use a custom aorta by applying V-HACD convex decomposition with stl2mjcf, available here. You can quickly install the tool with:

pip install git+git@github.com:tudorjnu/stl2mjcf.git

After installation, run stl2mjcf --help to see the available commands. The resulting files can then be added to cathsim/assets: the XML file goes in that folder, and the generated meshes folder goes in cathsim/assets/meshes/ (see the sketch after the note below).
Note: You will probably have to change the parameters of V-HACD for the best results.
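A minimal sketch of the whole workflow; the stl2mjcf invocation and output file names here are hypothetical (check stl2mjcf --help for the actual interface), while the destination paths follow the instructions above:

# hypothetical invocation and output names; consult `stl2mjcf --help` for the real arguments
stl2mjcf my_aorta.stl

# copy the generated XML and meshes into CathSim's asset tree
cp my_aorta.xml cathsim/assets/
cp -r meshes/* cathsim/assets/meshes/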
- Code refactoring
- Add fluid simulation
- Add VR/AR interface through Unity
- Implement multiple aortic models
- Add guidewire representation
- Tudor Jianu
- Baoru Huang
- Jingxuan Kang
- Tuan Van Vo
- Mohamed E. M. K. Abdelaziz
- Minh Nhat Vu
- Sebastiano Fichera
- Chun-Yi Lee
- Olatunji Mumini Omisore
- Pierre Berthet-Rayne
- Ferdinando Rodriguez y Baena
- Anh Nguyen
Please review our Terms of Use before using this project.
Please feel free to copy, distribute, display, perform, or remix our work, but for non-commercial purposes only.
If you find our paper useful in your research, please consider citing:
@article{jianu2022cathsim,
title={CathSim: An Open-source Simulator for Endovascular Intervention},
author={Jianu, Tudor and Huang, Baoru and Abdelaziz, Mohamed EMK and Vu, Minh Nhat and Fichera, Sebastiano and Lee, Chun-Yi and Berthet-Rayne, Pierre and Nguyen, Anh and others},
journal={arXiv preprint arXiv:2208.01455},
year={2022}
}