# Kinova Robotic Arm - Embodied AI Experiments

User interface and simplified control flow

- Technical Report: [coming soon]
- Demonstration: [coming soon]
- Detailed Control Flow: [coming soon]

## Requirements

- A decent graphics card, or an Apple M-series device
- Minimum 8 GB RAM
- 10+ GB free storage
- Conda | docs
- Python 3.10.14
- Phi-3-mini-4k-instruct-q4 language model | 🤗 Page
- clipseg-rd64-refined (auto-downloaded) | 🤗 Page

## Environment Setup

1. Create the conda environment:

        conda create -n kinovaAi python=3.10.14

2. Download the language model and move it to `llm_models`.

3. Install the required packages:

        pip install -r requirements.txt
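The setup above pins Python 3.10.14, so a quick interpreter check before installing packages can save a broken environment. A minimal sketch; the helper name `version_ok` is my own, not part of the repo:

```python
import sys

def version_ok(actual, required):
    """Return True if `actual` is on the same major.minor line as
    `required` and at least as new (e.g. 3.10.15 satisfies 3.10.14)."""
    return actual[:2] == required[:2] and actual[:3] >= required[:3]

# Warn if the active interpreter is not on the pinned 3.10.x line.
if not version_ok(sys.version_info[:3], (3, 10, 14)):
    print("warning: expected Python 3.10.14, got", sys.version.split()[0])
```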

## Embodied AI Inference

1. Activate the conda environment:

        conda activate kinovaAi

2. Set the IP address of the host server (i.e. the machine running ROS and the Kortex API, connected to the arm) in `inference2_clipseg.py`.

   Example: `HOST_IP = '192.168.1.100'`

3. Start the server first by following the Remote AI Inference instructions in rishiktiwari/rishik_ros_kortex.

4. Run the following command to connect to the server and begin the inference script:

        python3 embodied_ai/inference2_clipseg.py

   Note: `inference1_gdino` is incomplete and not recommended.

   Only actions that translate to `pick` or `pick_place` are supported. English prompts work best!
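Since only prompts that translate to `pick` or `pick_place` are supported, the inference script needs to reject anything else the language model proposes. A minimal sketch of such a guard; the helper name `filter_action` is my own illustration, not the repo's actual API:

```python
# The two actions the inference pipeline can execute, per the note above.
SUPPORTED_ACTIONS = {"pick", "pick_place"}

def filter_action(action: str):
    """Normalise an action string proposed by the language model and
    return it if supported, otherwise None (hypothetical helper)."""
    action = action.strip().lower()
    return action if action in SUPPORTED_ACTIONS else None
```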

## Device connection overview
