🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025.
🛠️ Build and explore Compiladores 1 materials for Software Engineering, including tasks, discussions, and course resources for the 2025 semester.