All we need are the gripper pose, an RGB image, a depth image, and the camera extrinsic and intrinsic matrices.
For fixed views, we first draw a circle at the EE position and then add a shooting line that extends outward from the EE according to its orientation.
For wrist views, we project the EE position into the wrist image and draw different reticles to indicate the target focus.
Rendering takes less than 1 ms per step.
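For intuition, here is a minimal sketch of the two geometric steps involved: projecting the 3D EE position into the image with a pinhole model, and casting the shooting ray along the gripper's orientation. The world-to-camera extrinsic convention and the +z pointing axis are illustrative assumptions, not necessarily the repo's conventions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def project_point(p_world, extrinsic_w2c, intrinsic):
    """Pinhole projection of a 3D world point to pixel coordinates (u, v)."""
    p_cam = (extrinsic_w2c @ np.append(p_world, 1.0))[:3]  # world -> camera frame
    uvw = intrinsic @ p_cam                                 # perspective projection
    return uvw[:2] / uvw[2]

def shooting_ray_end(gripper_pos, gripper_quat_xyzw, length=0.2):
    """End point of a ray cast from the EE along its pointing axis (assumed +z)."""
    direction = R.from_quat(gripper_quat_xyzw).apply([0.0, 0.0, 1.0])  # (x, y, z, w)
    return gripper_pos + length * direction
```

Projecting both the EE position and the ray end point gives the 2D segment along which the shooting line is drawn.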
After initializing a virtual environment, install the package:

```bash
pip install -e src
```

Download the sample data:

```bash
git lfs install
git clone git@hf.co:datasets/Yinpei/reticle_sample_data data
```

Unzip all data:

```bash
python script/unzip_all.py
```

Draw reticles for LIBERO or RLBench:

```bash
export PYTHONPATH=$PYTHONPATH:<path_to_aimbot>/src
python script/draw_crosshair_libero.py --task-suite-name <libero_10/libero_goal/libero_object/libero_spatial> --config <config_str>  # for LIBERO
python script/draw_crosshair_rlbench.py --task all --config <config_str>  # for RLBench
```
For fixed views, use ReticleBuilder.render_on_fix_camera(); for wrist views, use ReticleBuilder.render_on_wst_camera(). Both require the following arguments (see the usage sketch after this list):
- rgb_image: np.ndarray, shape (H, W, 3) dtype uint8
- depth_image: np.ndarray, shape (H, W), dtype float, holding real depth values in meters
- camera_extrinsic_matrix: np.ndarray, shape (4, 4)
- camera_intrinsic_matrix: np.ndarray, shape (3, 3)
- gripper_pos: np.ndarray, shape (3,), current EE position
- gripper_quat: np.ndarray, shape (4,), in (x,y,z,w) format, current EE orientation
- gripper_open: bool
- image_height: int
- image_width: int
- tolerance: int, the number of steps the projection may be occluded by objects while extending outward from the EE center along the orientation
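Below is a minimal usage sketch for the fixed-view call. It assumes the keyword arguments match the names above, that ReticleBuilder can be constructed without arguments, and that the method returns the annotated RGB image; the import path is hypothetical, so adjust it to the repo layout.

```python
import numpy as np
from reticle_builder import ReticleBuilder  # hypothetical import path

H, W = 256, 256
rgb = np.zeros((H, W, 3), dtype=np.uint8)        # placeholder RGB frame
depth = np.full((H, W), 1.0, dtype=np.float64)   # depth in meters
extrinsic = np.eye(4)                            # camera extrinsic, shape (4, 4)
intrinsic = np.array([[200.0,   0.0, W / 2],
                      [  0.0, 200.0, H / 2],
                      [  0.0,   0.0,   1.0]])    # pinhole intrinsics, shape (3, 3)

builder = ReticleBuilder()
annotated = builder.render_on_fix_camera(
    rgb_image=rgb,
    depth_image=depth,
    camera_extrinsic_matrix=extrinsic,
    camera_intrinsic_matrix=intrinsic,
    gripper_pos=np.array([0.1, 0.0, 0.3]),
    gripper_quat=np.array([0.0, 0.0, 0.0, 1.0]),  # (x, y, z, w)
    gripper_open=True,
    image_height=H,
    image_width=W,
    tolerance=5,
)
# render_on_wst_camera() takes the same arguments for the wrist view.
```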
Note: GIF colors may shift during display.
Example reticle styles:
- ShootingLine & CrossHair (default)
- ShootingLine & CrossHair (plain color)
- ShootingLine & CrossHair (new color)
- ShootingLine & CrossHair (dashed shooting line)
- ShootingLine & CrossHair (no dynamic adjustment)
- ShootingLine & CrossHair (smaller size)
- ShootingLine & Dot reticle
- ShootingLine & Bullseye reticle
- ShootingLine & Star reticle
We provide an example code repo for training/evaluating with AimBot at openpi0-aimbot. Check it out!
This work was supported in part by NSF SES-2128623, NSF CAREER #2337870, and NSF NRI #2220876. We would like to thank Dr. Xinyi Wang for insightful discussions. We would also like to thank Lambda Labs for providing GH200 computing resources.
```bibtex
@inproceedings{aimbot,
  title={AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies},
  author={Dai, Yinpei and Lee, Jayjun and others},
  booktitle={Conference on Robot Learning (CoRL)},
  year={2025},
}
```