
A TensorRT and C++ based deployment of FoundationPose that makes integration lightweight and efficient. Supports Jetson Orin. Adapted from nvidia_isaac_pose_estimation.


Foundationpose-CPP

About this project

This project is adapted from nvidia-isaac-pose-estimation, with simplified dependencies. It runs inference with ONNX models exported from the Python implementation of FoundationPose, making deployment and integration straightforward.

Note: This repository contains only the FoundationPose component. The complete 6D pose estimation pipeline also relies on object masks, which can be generated by algorithms such as SAM. For reference implementations and optimized inference of MobileSAM and NanoSAM, see EasyDeploy.

Update LOG

[2025.04] Decoupled the Register and Track processes; output poses are now expressed in mesh coordinates, and a mesh_loader interface is provided for external extension. Related PR.

[2025.03] Aligned the rendering process with the original Python implementation, adding support for rendering meshes without texture input. Related PR.

[2025.03] Added support for Jetson Orin platform with one-click Docker environment setup. See link.

Features

  1. Removed complex environment setup and dependency issues from the original project, enabling easy integration with other projects.

  2. Implemented an encapsulation of the FoundationPose algorithm, supporting dynamic-sized image input for flexible usage. Tutorial scripts are provided for generating 3D object models with BundleSDF.

  3. 🔥 Supports Jetson Orin development boards (Orin-NX-16GB).

Demo

Test results on the public mustard dataset:

[Image] foundationpose(fp16) Register test result
[Image] foundationpose(fp16) Track test result

Performance on nvidia-4060-8G and i5-12600kf:

nvidia-4060-8G                  FPS   CPU    GPU memory
foundationpose(fp16)-Register   2.8   100%   6.5 GB
foundationpose(fp16)-Track      220   100%   5.8 GB

Performance on jetson-orin-nx-16GB:

jetson-orin-nx-16GB             FPS   CPU    Total memory
foundationpose(fp16)-Register   0.6   15%    5.6 GB (5.5 GB on GPU)
foundationpose(fp16)-Track      100   60%    5.1 GB (5.0 GB on GPU)

Usage

Environment Setup

  1. Clone the repository:

    git clone git@github.com:zz990099/foundationpose_cpp.git
    cd foundationpose_cpp
    git submodule init
    git submodule update
  2. Build using Docker:

    cd ${foundationpose_cpp}
    bash easy_deploy_tool/docker/easy_deploy_startup.sh
    # Select `jetson` -> `trt10_u2204`/`trt8_u2204` (`trt8_u2004` not supported)
    bash easy_deploy_tool/docker/into_docker.sh

Model Conversion

  1. Download the ONNX models from Google Drive and place them in /workspace/models/.

  2. Convert models:

    cd /workspace
    bash tools/cvt_onnx2trt.bash
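The cvt_onnx2trt.bash script converts the ONNX models into TensorRT engines. If you need to convert a single model by hand (for example, to experiment with precision), the equivalent trtexec invocation looks roughly like this — the file names below are placeholders, and the script's actual flags may differ:

```shell
# Build a TensorRT engine from an ONNX model in FP16.
# Paths and model names are illustrative; check tools/cvt_onnx2trt.bash for the real ones.
trtexec \
  --onnx=/workspace/models/model.onnx \
  --saveEngine=/workspace/models/model.engine \
  --fp16
```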

Build Project

  1. Compile the project:

    cd /workspace
    mkdir build && cd build
    cmake -DENABLE_TENSORRT=ON ..
    make -j

Run Demo

Public Dataset Demo (mustard)

  1. Download and extract the dataset to /workspace/test_data/ from here.

  2. Run tests:

    cd /workspace/build
    ./bin/simple_tests --gtest_filter=foundationpose_test.test

Custom 3D Model Generation

  1. Refer to Generating 3D Models with BundleSDF.

  2. Modify paths in /workspace/simple_tests/src/test_foundationpose.cpp for your data and rebuild.

  3. Run tests:

    cd /workspace/build
    ./bin/simple_tests --gtest_filter=foundationpose_test.test
  4. Results for Register and Track processes will be saved in /workspace/test_data/.

References

For any questions, feel free to open an issue or contact 771647586@qq.com.
