giuschio/agent_aware_affordances
Learning Agent-Aware Affordances for Closed-Loop Interaction with Articulated Objects

Figure: Real-world experiment of opening an oven in two motions. a) and c): estimated actionability maps, where the red cross marks the selected interaction point. b): the first interaction pose becomes unfavorable, so an update is triggered. d): successful task completion after the second interaction.

Introduction

Interacting with articulated objects is a challenging but important task for mobile robots. To tackle this challenge, we propose a novel closed-loop control pipeline that integrates manipulation priors from affordance estimation with sampling-based whole-body control. We introduce the concept of agent-aware affordances, which fully reflect the agent's capabilities and embodiment, and we show that they outperform state-of-the-art counterparts that are conditioned only on the end-effector geometry. Additionally, we find that closed-loop affordance inference allows the agent to divide a task into multiple non-continuous motions and to recover from failures and unexpected states. Finally, the pipeline can perform long-horizon mobile manipulation tasks, i.e. opening and closing an oven, in the real world with high success rates (opening: 71%, closing: 72%).

About the paper

Authors: Giulio Schiavi*, Paula Wulkop*, Giuseppe Rizzi, Lionel Ott, Roland Siegwart, and Jen Jen Chung¹,
from the Autonomous Systems Lab, ETH Zurich, Switzerland.
* Equal contribution.
¹ Also with the School of ITEE, The University of Queensland, Australia.

arXiv version: https://arxiv.org/abs/2209.05802

Project page: https://paulawulkop.github.io/agent_aware_affordances

Project video: https://www.youtube.com/watch?v=A_v5GPFaLwU

The Code

The code in this repository is available under the MIT license and can be used to train and test our pipeline. We additionally provide trained network checkpoints and several demos; please refer to the documentation on running the code for details. Note that running this code requires a RaiSim installation. RaiSim requires a license, which can be requested free of charge for academic use.

Citations

If you use our code in your research, please cite our paper as:

@misc{schiavi2022learning,
  title={Learning Agent-Aware Affordances for Closed-Loop Interaction with Articulated Objects},
  author={Giulio Schiavi and Paula Wulkop and Giuseppe Rizzi and Lionel Ott and Roland Siegwart and Jen Jen Chung},
  year={2022},
  eprint={2209.05802},
  archivePrefix={arXiv},
}

Acknowledgements

This work was inspired by the Where2Act framework and reuses parts of its implementation.

Funding

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101017008 (Harmony).
