The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems
OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model
arXiv 2025: MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation
ProRobo3D Benchmark to be released...
Website for the paper: VLA Model-Expert Collaboration for Bi-directional Manipulation Learning