
phosphobot

phosphobot is a community-driven platform that enables you to train and use VLA (vision language action models) to control real robots.


Overview

  • 🕹️ Control your robot with the keyboard, a leader arm, a Meta Quest headset or via API
  • 📹 Teleoperate robots to record datasets in LeRobot dataset format
  • 🤖 Train action models like ACT, gr00t n1 or Pi0
  • 🔥 Use action models to control robots
  • 💻 Runs on macOS, Linux and Windows
  • 🦾 Compatible with the SO-100, SO-101, WX-250 and AgileX Piper
  • 🔧 Extend it with your own robots and cameras

Getting started

1. Get a SO-100 robot

Purchase your Phospho starter pack at robots.phospho.ai or build your own robot following the instructions in the SO-100 repo.

2. Install the phosphobot server

# Install phosphobot
curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | bash
# Start the server
phosphobot run
# Upgrade with apt or brew
# sudo apt update && sudo apt install phosphobot
# brew update && brew upgrade phosphobot

3. Make your robot move for the first time!

Go to the webapp at YOUR_SERVER_ADDRESS:YOUR_SERVER_PORT (default: localhost:80) and click Control.

You will be able to control your robot with:

  • the keyboard
  • a leader arm
  • a Meta Quest if you have the phospho teleop app
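Besides the webapp, you can drive the robot from code through the server's HTTP API. The sketch below sends a small end-effector displacement; the `/move/relative` endpoint and its payload fields are assumptions on our part — check the interactive docs at `/docs` on your server for the real schema.

```python
import json
import urllib.request


def relative_move_payload(x=0.0, y=0.0, z=0.0, rx=0.0, ry=0.0, rz=0.0):
    """Build a pose-delta payload. Field names are illustrative,
    not the confirmed server schema."""
    return {"x": x, "y": y, "z": z, "rx": rx, "ry": ry, "rz": rz}


def move_relative(delta, base_url="http://localhost:80"):
    """POST a displacement to the (assumed) /move/relative endpoint."""
    req = urllib.request.Request(
        f"{base_url}/move/relative",
        data=json.dumps(delta).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())


# Example: nudge the end effector up by 1 unit
# (units depend on the server's convention)
delta = relative_move_payload(z=1.0)
```

Calling `move_relative(delta)` requires a running phosphobot server; without one, the request will fail with a connection error.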

4. Record a dataset

Record a dataset of about 40 episodes demonstrating the task you want the robot to learn.

Check out the docs for more details.
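Conceptually, each recorded episode is a sequence of (observation, action) frames. The sketch below mimics that structure in plain Python to make the idea concrete; the field names are illustrative and not the exact LeRobot dataset schema.

```python
def make_episode(steps):
    """Collect (observation, action) pairs for one demonstration.
    Field names here are illustrative, not the real LeRobot schema."""
    frames = []
    for t, (obs, action) in enumerate(steps):
        frames.append({"frame_index": t, "observation": obs, "action": action})
    return frames


# A toy dataset: 40 episodes of 5 frames each, with 6-DOF joint vectors
episodes = [
    make_episode([([0.0] * 6, [0.1] * 6) for _ in range(5)])
    for _ in range(40)
]
```

In practice phosphobot records and serializes this for you in the LeRobot dataset format; you never build frames by hand.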

5. Train an action model

To train an action model on the dataset you recorded, you can:

  • train a model directly from the phosphobot webapp (see this tutorial)
  • use your own machine (see this tutorial to finetune gr00t n1)

In both cases, you will end up with a trained model exported to Hugging Face.

To learn more about training action models for robotics, check out the docs.

6. Use the model to control your robot

Now that you have a trained model hosted on Hugging Face, you can use it to control your robot either:

  • directly from the webapp
  • from your own code using the phosphobot python package (see this script for an example)

Learn more in the docs.
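Whatever client you use, inference-time control boils down to an observe-predict-act loop. The sketch below shows that loop with stand-in names; `DummyModel`, `read_observation`, and `send_action` are hypothetical placeholders, not the phosphobot package API — see the linked example script for the real calls.

```python
import time


class DummyModel:
    """Stand-in for a trained action model (hypothetical interface)."""

    def predict(self, observation):
        # A real model maps camera frames + joint state to an action;
        # here we just return a zero delta for 6 joints.
        return [0.0] * 6


def control_loop(model, read_observation, send_action, steps=3, hz=30):
    """Generic observe-predict-act loop. All names are illustrative."""
    history = []
    for _ in range(steps):
        obs = read_observation()          # e.g. camera frames + joint angles
        action = model.predict(obs)       # model inference
        send_action(action)               # e.g. POST to the phosphobot server
        history.append(action)
        time.sleep(1.0 / hz)              # hold a fixed control rate
    return history


history = control_loop(
    DummyModel(),
    read_observation=lambda: [0.0] * 6,
    send_action=lambda a: None,
    steps=3,
    hz=100,
)
```

The control rate (`hz`) matters in practice: action models are typically trained at a fixed frame rate, and running inference at a very different rate degrades behavior.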

Congrats! You just trained and used your first action model on a real robot.

Examples

The examples/ directory is the quickest way to see the toolkit in action. Check it out! Proud of what you build? Share it with the community by opening a PR to add it to the examples/ directory.

Advanced Usage

You can directly call the phosphobot server from your own code, using the HTTP API and websocket API.

The interactive API docs let you explore and try every endpoint from the browser. They are available at YOUR_SERVER_ADDRESS:YOUR_SERVER_PORT/docs (by default, localhost:80/docs).
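For example, the `/status` endpoint shown later in this README can be queried from Python with only the standard library. This is a minimal sketch; the shape of the JSON it returns is whatever your server version reports.

```python
import json
import urllib.request


def status_url(host="localhost", port=80):
    """Build the /status endpoint URL (default address from the docs)."""
    return f"http://{host}:{port}/status"


def fetch_status(host="localhost", port=80):
    """GET /status and decode the JSON body. Requires a running server."""
    with urllib.request.urlopen(status_url(host, port), timeout=5) as resp:
        return json.loads(resp.read())
```

With the server running, `fetch_status()` is equivalent to the `curl` command shown in the install-from-source section.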

We release new versions very often, so make sure to check the API docs for the latest features and changes.

Supported Robots

We currently support the SO-100, SO-101, WX-250, and AgileX Piper.

See this README for more details on how to add support for a new robot, or open an issue.

Join the Community

Connect with other developers and share your experience in our Discord community.

Install from source

  1. Download and install uv and npm. Best compatibility is with Python >= 3.10 and Node >= 20.

  2. Clone the repository:

git clone https://github.com/phospho-app/phosphobot.git

  3. On macOS and Linux, build the frontend and start the backend with:

make

On Windows, the Makefile doesn't work; run the commands directly:

cd ./dashboard && (npm i && npm run build && mkdir -p ../phosphobot/resources/dist/ && cp -r ./dist/* ../phosphobot/resources/dist/)
cd phosphobot && uv run --python 3.10 phosphobot run --simulation=headless

  4. Go to localhost:80 in your browser to see the dashboard, or get the server info with:
curl -X 'GET' 'http://localhost/status' -H 'accept: application/json'

Note: some features, such as connection to the phospho cloud, AI training, and AI control, are not available when installing from source.

Contributing

We welcome contributions! Some of the ways you can contribute:

  • Add support for new AI models
  • Add support for new teleoperation controllers
  • Add support for new robots and sensors
  • Add something you built to the examples
  • Improve the dataset collection and manipulation
  • Improve the documentation and tutorials
  • Improve code quality and refactor
  • Improve the performance of the app
  • Fix issues you faced

Support

Need help? Ask in our Discord community or open an issue on GitHub.

License

MIT License


Made with 💚 by the Phospho community