RealVLG-R1: A Large-Scale Real-World Visual-Language Grounding Benchmark for Robotic Perception and Manipulation

1Tongji University   
*Corresponding author   
CVPR 2026

We propose RealVLG, a unified framework that integrates the RealVLG-11B dataset and RealVLG-R1 model to enable multi-granularity, zero-shot robotic visual-language grounding and grasping in real-world scenarios.

Abstract

Visual-language grounding (VLG) aims to establish semantic correspondences between natural language and visual entities, enabling models to accurately identify and localize target objects from textual instructions. Existing VLG approaches focus on coarse-grained, object-level localization, while traditional robotic grasping methods rely predominantly on geometric cues and lack language guidance, limiting their applicability in language-driven manipulation. To address these limitations, we propose the RealVLG framework, which integrates the RealVLG-11B dataset and the RealVLG-R1 model to unify real-world visual-language grounding and grasping. The RealVLG-11B dataset provides multi-granularity annotations, including bounding boxes, segmentation masks, grasp poses, contact points, and human-verified fine-grained language descriptions, covering approximately 165,000 images, over 800 object instances, 1.3 million segmentation, detection, and language annotations, and roughly 11 billion grasping examples. Building on this dataset, RealVLG-R1 applies reinforcement fine-tuning to pretrained large-scale vision-language models to predict bounding boxes, segmentation masks, grasp poses, and contact points in a unified manner given natural language instructions. Experimental results demonstrate that RealVLG supports zero-shot perception and manipulation in unseen real-world environments, establishing a unified semantic-visual multimodal benchmark that provides a comprehensive data and evaluation platform for language-driven robotic perception and grasping policy learning.

Overview


  • RealVLG-11B Dataset: The largest real-world grounding and grasping dataset with multi-granularity annotations from semantic localization to grasp-level understanding.
  • RealVLG-R1 Model: A unified model trained via Reinforcement Learning Fine-tuning for zero-shot language-driven grounding and grasping.
  • RealVLG Benchmark: A unified visual-language grounding and grasping benchmark for robotic perception and grasping policy learning.

RealVLG-11B Dataset

Unified Data Annotation


The pipeline integrates automatic language generation, model-based verification, and manual review to generate high-quality multi-granularity visual and language annotations.

Unified Vision-Language Grounding and Grasping Dataset


Comparison of RealVLG-11B with existing grasping datasets. “–” indicates unknown values.

Qualitative Comparison of Data Quality


Unlike the diffusion-generated, low-resolution images and weakly aligned textual and grasp annotations in Grasp-Anything datasets, RealVLG-11B provides high-resolution real-world imagery, instance-level language grounding, and standardized, physically executable grasp labels, enabling more accurate and robust visual–language grasping.
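As an illustration of what a standardized, physically executable grasp label looks like, the sketch below assumes a Cornell-style 4-DoF parameterization (center x, center y, rotation θ, gripper opening width) and derives the two jaw contact points it implies; the exact label format in RealVLG-11B may differ.

```python
import math

def grasp_contact_points(cx: float, cy: float, theta: float, width: float):
    """Return the two jaw contact points implied by a 4-DoF grasp pose.

    The jaws close along the axis rotated by `theta` from the image x-axis,
    separated by the gripper opening `width` around the center (cx, cy).
    """
    dx = 0.5 * width * math.cos(theta)
    dy = 0.5 * width * math.sin(theta)
    return (cx - dx, cy - dy), (cx + dx, cy + dy)
```

For example, a horizontal grasp (θ = 0) of width 2 centered at the origin yields contact points one unit to either side of the center along the x-axis.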

Framework of RealVLG-R1


RealVLG-R1 fine-tunes pretrained LVLMs via reward-driven reinforcement learning with task-specific verifiable rewards, enabling adaptive learning and improved generalization across bounding boxes, segmentation masks, grasp rectangles, and contact points. It comprises two components:

  • Policy Optimization with Verifiable Rewards: GRPO and GSPO.
  • Task-Specific Pipelines and Verifiable Rewards: Bbox, Segmentation, 4-DoF Grasp Pose, Grasp Contact Points.
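The combination above can be sketched in a few lines: a verifiable IoU reward for the bounding-box task, and the group-relative advantage normalization at the core of GRPO. Function names, the box format, and the ε constant are illustrative assumptions, not the paper's exact formulation.

```python
from typing import Sequence

def iou_reward(pred: Sequence[float], gt: Sequence[float]) -> float:
    """Verifiable reward: IoU between predicted and ground-truth
    boxes, both given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_p + area_g - inter
    return inter / union if union > 0 else 0.0

def group_relative_advantages(rewards: Sequence[float],
                              eps: float = 1e-6) -> list:
    """GRPO-style advantage: normalize each sampled completion's
    reward by the mean and std of its sampling group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    return [(r - mean) / (var ** 0.5 + eps) for r in rewards]
```

Because the reward is computed directly against ground-truth annotations rather than a learned reward model, it is verifiable: high reward can only come from geometrically correct predictions, which is what makes RLVR-style fine-tuning effective for grounding tasks.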

RealVLG Benchmark

Comprehensive Results


All metrics are reported as percentages.

Training Performance


Training reward/accuracy curves for GRPO, GSPO, and SFT on contact-point tasks. Overall, GRPO and GSPO improve significantly over SFT through RLVR. GRPO achieves slightly higher accuracy at the 3B scale, while GSPO performs better at 7B and exhibits more stable outputs across training steps.

Real-World Visual-Language Grasping Experiments

Real-world Experimental Setup


(a) The 7-DoF Franka Research 3 robot equipped with an eye-in-hand Intel RealSense D435i camera, used for real-world evaluation of RealVLG-R1. (b) The set of 10 test objects used to assess the model’s generalization and manipulation performance.

Single Object Grasping


GraspNet often fails or predicts misaligned grasp poses due to noisy or incomplete point-cloud data (e.g., the Cup), reflective surfaces, and small or thin objects such as the Marker, Screwdriver, and Razor. In contrast, RealVLG-R1 leverages RGB vision and language instructions to accurately localize the target and generate executable grasp contact points, demonstrating robust and reliable grasping behavior across diverse objects.

Clutter Object Grasping


LGD struggles to perform language-conditioned grasps in cluttered environments due to limited perceptual resolution, suboptimal language integration, and reliance on unconditional grasp pose predictions.


RealVLG-R1 demonstrates accurate language-conditioned grasping, robust zero-shot performance in cluttered environments, and interpretable predictions of grasp poses.

BibTeX

@inproceedings{li2026realvlg,
  title     = {RealVLG-R1: A Large-Scale Real-World Visual-Language Grounding Benchmark for Robotic Perception and Manipulation},
  author    = {Li, Linfei and Zhang, Lin and Shen, Ying},
  booktitle = {CVPR},
  year      = {2026},
}