RobustVLA: On Robustness of Vision-Language-Action Model against Multi-Modal Perturbations

This repository contains the code and demonstration videos for RobustVLA, a framework that enhances the robustness of Vision-Language-Action (VLA) models against multi-modal perturbations.

Abstract

In Vision-Language-Action (VLA) models, robustness to real-world perturbations is critical for deployment. Existing methods target simple visual disturbances, overlooking the broader multi-modal perturbations that arise in actions, instructions, environments, and observations. Here, we first evaluate the robustness of mainstream VLAs under 17 perturbations across four modalities. We find that (1) actions are the most fragile modality, (2) existing visual-robust VLAs do not gain robustness in other modalities, and (3) $\pi_0$ demonstrates superior robustness. To build multi-modal robust VLAs, we propose RobustVLA against perturbations in VLA inputs and outputs. For output robustness, we perform offline robust optimization against the worst-case action noise that maximizes the mismatch in the flow-matching objective. This can be interpreted as a combination of adversarial training, label smoothing, and outlier penalization. For input robustness, we enforce consistent actions across input variations that preserve task semantics. To account for multiple perturbations, we formulate robustness as a multi-armed bandit problem and apply an upper confidence bound (UCB) algorithm to automatically identify the most harmful noise. Experiments on LIBERO demonstrate that RobustVLA delivers absolute gains over baselines of 12.6% on the $\pi_0$ backbone and 10.4% on the OpenVLA backbone across all 17 perturbations, achieves 50.6× faster inference than the existing visual-robust BYOVLA, which requires external LLMs, and yields a 10.4% gain under mixed perturbations. On the real-world FR5 robot, under four types of multi-modal perturbations, RobustVLA shows strong low-data performance, outperforming $\pi_0$ by 65.6% in success rate with 25 demonstrations. Even with abundant demonstrations, our method still outperforms $\pi_0$ by 30% in success rate.
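The abstract frames perturbation selection as a multi-armed bandit solved with an upper confidence bound rule: each perturbation type is an arm, the observed training loss under that perturbation is its reward signal, and UCB favors arms that look most harmful while still exploring rarely tried ones. The sketch below is a minimal, self-contained illustration of that idea (standard UCB1 with a tunable exploration weight), not the repository's actual implementation; the function names, the arm statistics, and the `loss_fn` interface are all hypothetical.

```python
import math

def ucb_select(counts, mean_losses, t, c=0.5):
    """Pick the perturbation arm with the highest upper confidence bound.

    counts[k]      -- number of times arm k has been applied so far
    mean_losses[k] -- running mean of the training loss under arm k
                      (a higher loss marks a more harmful perturbation)
    t              -- current step (1-indexed), c -- exploration weight
    """
    # Try every arm once before trusting its statistics.
    for k, n in enumerate(counts):
        if n == 0:
            return k
    # UCB1 rule: exploit high-loss arms, keep exploring under-sampled ones.
    return max(
        range(len(counts)),
        key=lambda k: mean_losses[k] + c * math.sqrt(math.log(t) / counts[k]),
    )

def train_with_bandit(perturbations, steps, loss_fn):
    """Toy loop: at each step, train under the currently most harmful perturbation.

    loss_fn(p) stands in for one training step under perturbation p and
    returns the observed loss.
    """
    K = len(perturbations)
    counts, means = [0] * K, [0.0] * K
    for t in range(1, steps + 1):
        k = ucb_select(counts, means, t)
        loss = loss_fn(perturbations[k])
        counts[k] += 1
        means[k] += (loss - means[k]) / counts[k]  # incremental mean update
    return counts
```

In this toy setting the arm whose perturbation produces the highest average loss accumulates most of the pulls, which is the behavior the paper relies on to focus training on the most damaging noise.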

🎥 Demonstration Videos

| Perturbation | RobustVLA | $\pi_0$ |
| --- | --- | --- |
| Action | Robust Action Demo | Pi0 Action Demo |
| Environment | Robust Environment Demo | Pi0 Environment Demo |
| Instruction | Robust Instruction Demo | Pi0 Instruction Demo |
| Observation | Robust Observation Demo | Pi0 Observation Demo |
