Visual-language grounding (VLG) aims to establish semantic correspondences between natural language and visual entities, enabling models to accurately identify and localize target objects from textual instructions. Existing VLG approaches focus on coarse-grained, object-level localization, while traditional robotic grasping methods rely predominantly on geometric cues and lack language guidance, which limits their applicability in language-driven manipulation scenarios. To address these limitations, we propose the RealVLG framework, which integrates the RealVLG-11B dataset and the RealVLG-R1 model to unify real-world visual-language grounding and grasping. The RealVLG-11B dataset provides multi-granularity annotations, including bounding boxes, segmentation masks, grasp poses, contact points, and human-verified fine-grained language descriptions, covering approximately 165,000 images, more than 800 object instances, 1.3 million segmentation, detection, and language annotations, and roughly 11 billion grasp examples. Building on this dataset, RealVLG-R1 applies reinforcement fine-tuning to pretrained large-scale vision-language models so that, given a natural language instruction, it predicts bounding boxes, segmentation masks, grasp poses, and contact points in a unified manner. Experimental results demonstrate that RealVLG supports zero-shot perception and manipulation in unseen real-world environments, establishing a unified semantic-visual multimodal benchmark that provides a comprehensive data and evaluation platform for language-driven robotic perception and grasping policy learning.
We propose RealVLG, a unified framework that integrates the RealVLG-11B dataset and the RealVLG-R1 model to enable multi-granularity, zero-shot robotic visual-language grounding and grasping in real-world scenarios.
The pipeline integrates automatic language generation, model-based verification, and manual review to generate high-quality multi-granularity visual and language annotations.
Comparison of RealVLG-11B with existing grasping datasets. “–” indicates unknown values.
Unlike the diffusion-generated, low-resolution images and weakly aligned textual and grasp annotations in Grasp-Anything datasets, RealVLG-11B provides high-resolution real-world imagery, instance-level language grounding, and standardized, physically executable grasp labels, enabling more accurate and robust visual–language grasping.
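To make the grasp-label formats concrete: a widely used convention (e.g., the Cornell grasp dataset) represents a planar grasp as a 5-D rectangle (center, width, height, rotation), from which the two gripper contact points fall at the midpoints of the short edges. The sketch below assumes this convention; the exact encoding used in RealVLG-11B may differ.

```python
import math

def grasp_rect_to_contacts(cx, cy, w, h, theta):
    """Convert a 5-D grasp rectangle (center cx, cy; gripper opening w;
    plate length h; rotation theta in radians) into the two contact
    points: the midpoints of the short edges, offset from the center by
    +/- w/2 along the grasp axis. Assumes the common Cornell-style
    convention; h is unused for the contact points themselves."""
    dx = (w / 2.0) * math.cos(theta)
    dy = (w / 2.0) * math.sin(theta)
    return (cx - dx, cy - dy), (cx + dx, cy + dy)
```

For an axis-aligned rectangle centered at the origin with opening 2, this yields contact points at (-1, 0) and (1, 0); rotating by 90 degrees moves them to (0, -1) and (0, 1).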
RealVLG-R1 fine-tunes pretrained LVLMs via reward-driven RL with task-specific verifiable rewards, enabling adaptive learning and improved generalization across bounding boxes, segmentation masks, grasp rectangles, and contact points. It comprises two aspects:
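A minimal sketch of one such verifiable reward, for the bounding-box task: the model's textual output is parsed into a box and scored by IoU against the ground truth, with a small bonus for well-formed output. The specific reward shaping and format bonus here are illustrative assumptions, not the paper's specification.

```python
def iou(box_a, box_b):
    """Intersection-over-union for axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def bbox_reward(pred_box, gt_box, fmt_ok=True):
    """Verifiable reward: zero for unparseable output, otherwise a small
    format bonus plus the IoU accuracy term (values are assumptions)."""
    if not fmt_ok or pred_box is None:
        return 0.0
    return 0.1 + iou(pred_box, gt_box)
```

Analogous rewards for segmentation (mask IoU), grasp rectangles, and contact points (distance thresholds) would slot into the same interface.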
All metrics are reported as percentages.
Training reward/accuracy curves for GRPO, GSPO, and SFT on Contact tasks. Overall, GRPO and GSPO significantly improve over SFT via RLVR. GRPO achieves slightly higher accuracy at the 3B scale, while GSPO performs better at the 7B scale and produces more stable outputs across training steps.
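The group-relative advantage at the heart of GRPO can be sketched as follows: rewards for a group of rollouts sampled from the same prompt are standardized within the group, so above-average samples receive positive advantage. GSPO differs mainly in applying importance ratios at the sequence rather than token level. This is a generic sketch, not the RealVLG-R1 training code.

```python
def grpo_advantages(rewards, eps=1e-6):
    """GRPO-style group-relative advantages: standardize each rollout's
    reward by the group mean and standard deviation. The eps term keeps
    the division stable when all rewards in the group are identical."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

With identical rewards the advantages are all zero, so a degenerate group contributes no gradient signal.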
(a) The 7-DoF Franka Research 3 robot equipped with an eye-in-hand Intel RealSense D435i camera, used for real-world evaluation of RealVLG-R1. (b) The set of 10 test objects used to assess the model’s generalization and manipulation performance.
GraspNet often fails or predicts misaligned grasp poses due to noisy or incomplete point-cloud data (e.g., the Cup), reflective surfaces, and small or thin objects such as the Marker, Screwdriver, and Razor. In contrast, RealVLG-R1 leverages RGB vision and language instructions to accurately localize the target and generate executable grasp contact points, demonstrating robust and reliable grasping across diverse objects.
LGD struggles to perform language-conditioned grasps in cluttered environments due to limited perceptual resolution, suboptimal language integration, and reliance on unconditional grasp pose predictions.
RealVLG-R1 demonstrates accurate language-conditioned grasping, robust zero-shot performance in cluttered environments, and interpretable predictions of grasp poses.
@inproceedings{li2026realvlg,
title = {RealVLG-R1: A Large-Scale Real-World Visual-Language Grounding Benchmark for Robotic Perception and Manipulation},
author = {Li, Linfei and Zhang, Lin and Shen, Ying},
booktitle = {CVPR},
year = {2026},
}