[IROS 2025 Best Paper Award Finalist & IEEE TRO 2026] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems
VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
A curated list of awesome LLM/VLM/VLA/World Model resources for Autonomous Driving (LLM4AD), continually updated
StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing
Build your own embodied-intelligence robot from scratch with only basic Python; construct VLA/OpenVLA/SmolVLA/Pi0 step by step and gain a deep understanding of embodied intelligence
Official code of Motus: A Unified Latent Action World Model
A comprehensive collection of resources on Robot Manipulation, including papers, code, and related websites.
InternRobotics' open platform for building generalized navigation foundation models.
[AAAI 2026] OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model
[ICLR 2026] The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model"
InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy
InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation
OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation
Code for kai0, including training, inference, and data collection.
[CVPR 2026] WAM-Flow: Parallel Coarse-to-Fine Motion Planning via Discrete Flow Matching for Autonomous Driving
WAM-Diff: A Masked Diffusion VLA Framework with MoE and Online Reinforcement Learning for Autonomous Driving
Official implementation of ReconVLA: Reconstructive Vision-Language-Action Model as Effective Robot Perceiver.
NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks
The official implementation of "DynamicVLA: A Vision-Language-Action Model for Dynamic Object Manipulation". (arXiv 2601.22153)
[CVPR 2026] Drive-π0 and DriveMoE on End-to-end Autonomous Driving