[IROS 2025 Award Finalist] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems
VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
A comprehensive list of papers about Robot Manipulation, including papers, codes, and related websites.
InternRobotics' open platform for building generalized navigation foundation models.
[AAAI 2026] OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model
Official code of Motus: A Unified Latent Action World Model
The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model"
OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation
InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy
NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks
LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥]
🔥 The first open-source diffusion vision-language-action model.
Open & Reproducible Research for Tracking VLAs
A collection of vision-language-action model post-training methods.
🔥 A curated list of research for "A Survey on Efficient Vision-Language-Action Models". We will continue to maintain and update the repository; follow us to keep up with the latest developments!
A comprehensive list of papers about dual-system VLA models, including papers, codes, and related websites.
NORA-1.5: A Vision-Language-Action Model Trained using World Model- and Action-based Preference Rewards
Official implementation of ReconVLA: Reconstructive Vision-Language-Action Model as Effective Robot Perceiver.
WAM-Flow: Parallel Coarse-to-Fine Motion Planning via Discrete Flow Matching for Autonomous Driving
WAM-Diff: A Masked Diffusion VLA Framework with MoE and Online Reinforcement Learning for Autonomous Driving