AffordGrasp: In-Context Affordance Reasoning for Open-Vocabulary Task-Oriented Grasping in Clutter

Yingbo Tang1,3      Shuaike Zhang2,3      Xiaoshuai Hao3      Pengwei Wang3
Jianlong Wu2      Zhongyuan Wang3      Shanghang Zhang3,4     

1Institute of Automation, Chinese Academy of Sciences    2Harbin Institute of Technology (Shenzhen)    3Beijing Academy of Artificial Intelligence    4State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University

Abstract

Inferring the affordance of an object and grasping it in a task-oriented manner is crucial for robots to successfully complete manipulation tasks. Affordance indicates where and how to grasp an object by taking its functionality into account, serving as the foundation for effective task-oriented grasping. However, current task-oriented methods often depend on extensive training data that is confined to specific tasks and objects, making it difficult to generalize to novel objects and complex scenes. In this paper, we introduce AffordGrasp, a novel open-vocabulary grasping framework that leverages the reasoning capabilities of vision-language models (VLMs) for in-context affordance reasoning. Unlike existing methods that rely on explicit task and object specifications, our approach infers tasks directly from implicit user instructions, enabling more intuitive and seamless human-robot interaction in everyday scenarios. Building on the reasoning outcomes, our framework identifies task-relevant objects and grounds their part-level affordances using a visual grounding module. This allows us to generate task-oriented grasp poses precisely within the affordance regions of the object, ensuring both functional and context-aware robotic manipulation. Extensive experiments demonstrate that AffordGrasp achieves state-of-the-art performance in both simulation and real-world scenarios, highlighting the effectiveness of our method. We believe our approach advances robotic manipulation techniques and contributes to the broader field of embodied AI.


Framework

pipeline image

Overall Framework of AffordGrasp. The framework takes user instructions and RGB-D scene observations as input to achieve open-vocabulary task-oriented grasping in clutter. We leverage GPT-4o for in-context affordance reasoning, decomposed into three steps: (1) Extracting the task goal and functional requirements from the implicit user instruction (e.g., "I want to scoop something"). (2) Identifying the most task-relevant object in the RGB image (e.g., a wooden spoon). (3) Decomposing the object into functional parts and selecting the optimal graspable part (e.g., the handle) based on its affordances. Based on the reasoning results, a visual grounding module maps the inferred object and part affordances to pixel-level masks. Given the affordance mask and the RGB-D observation, we employ AnyGrasp to generate task-oriented 6D grasp poses on the target part.
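To make the data flow concrete, the following is a minimal, hypothetical Python sketch of this three-stage pipeline. It is not the released implementation: the VLM query, affordance grounding, and grasp generation backends are passed in as callables because their concrete APIs are not specified here, and all function and field names (e.g., reason_affordance, ground_part, grasp_generator) are illustrative assumptions.

```python
# Hypothetical sketch of the AffordGrasp pipeline (not the released code).
# Backends for the VLM, affordance grounding, and grasp generation are
# injected as callables; their real APIs are not specified on this page.

import json
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np


@dataclass
class AffordanceReasoning:
    task: str                     # e.g. "scoop something"
    target_object: str            # e.g. "wooden spoon"
    graspable_part: str           # e.g. "handle"


@dataclass
class Grasp:
    pose: np.ndarray              # 4x4 gripper pose in the camera frame
    contact_uv: Tuple[int, int]   # grasp contact point in image coordinates (u, v)
    score: float                  # grasp quality score


def reason_affordance(vlm: Callable[[str, np.ndarray], str],
                      instruction: str,
                      rgb: np.ndarray) -> AffordanceReasoning:
    """In-context affordance reasoning with a VLM (e.g. GPT-4o):
    (1) extract the task goal from the implicit instruction,
    (2) identify the most task-relevant object in the image,
    (3) select the optimal graspable part based on its affordances."""
    prompt = (
        "Given the user instruction and the scene image, reply in JSON with "
        "keys 'task', 'target_object', 'graspable_part'.\n"
        f"Instruction: {instruction}"
    )
    return AffordanceReasoning(**json.loads(vlm(prompt, rgb)))


def afford_grasp(instruction: str,
                 rgb: np.ndarray,
                 depth: np.ndarray,
                 vlm: Callable[[str, np.ndarray], str],
                 ground_part: Callable[[np.ndarray, str, str], np.ndarray],
                 grasp_generator: Callable[[np.ndarray, np.ndarray], List[Grasp]]
                 ) -> Grasp:
    # 1) In-context affordance reasoning from the implicit instruction.
    reasoning = reason_affordance(vlm, instruction, rgb)

    # 2) Visual affordance grounding: a boolean HxW mask of the graspable part.
    part_mask = ground_part(rgb, reasoning.target_object, reasoning.graspable_part)

    # 3) Generate 6-DoF grasp candidates from RGB-D (e.g. with AnyGrasp) and
    #    keep those whose contact points fall inside the affordance mask.
    candidates = grasp_generator(rgb, depth)
    in_region = [g for g in candidates
                 if part_mask[g.contact_uv[1], g.contact_uv[0]]]

    # Return the highest-scoring task-oriented grasp (raises if none remain).
    return max(in_region, key=lambda g: g.score)
```

A caller would supply, for example, a GPT-4o wrapper as vlm, an open-vocabulary part segmenter as ground_part, and an AnyGrasp wrapper as grasp_generator; the filtering step is what restricts generic grasp candidates to the task-relevant part.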


Real-World Experiment Results


COMPARISON RESULTS OF DIFFERENT METHODS IN REAL-WORLD EXPERIMENTS

comparison table image

qualitative results image

Examples from the real-world experiments, with visualizations of affordance grounding and task-oriented grasp pose generation.


Real-World Experiment Videos







Simulation Experiment Results


SIMULATION RESULTS OF DIFFERENT METHODS GRASPING A SINGLE OBJECT

results table image

SIMULATION RESULTS OF DIFFERENT METHODS GRASPING IN CLUTTER

results table image

qualitative results image


Examples of cluttered grasping in simulation, where the affordance regions of the target objects are labeled with red stars.


Simulation Experiment Videos