ViViDex: Learning Vision-based Dexterous Manipulation from Human Videos

Zerui Chen1
Shizhe Chen1
Cordelia Schmid1
Ivan Laptev2
1 Inria, École normale supérieure, CNRS, PSL Research University
2 Mohamed bin Zayed University of Artificial Intelligence
Under Review

Abstract

In this work, we aim to learn a unified vision-based policy for a multi-fingered robot hand to manipulate different objects in diverse poses. While prior work has demonstrated that human videos can benefit policy learning, performance improvements have been limited by the physically implausible trajectories extracted from such videos. Moreover, reliance on privileged object information, such as ground-truth object states, further limits applicability in realistic scenarios. To address these limitations, we propose ViViDex, a new framework that improves vision-based policy learning from human videos. It first uses reinforcement learning with trajectory-guided rewards to train a state-based policy for each video, obtaining trajectories that are both visually natural and physically plausible. We then roll out successful episodes from the state-based policies and use them to train a unified visual policy without any privileged information. A coordinate transformation method is further proposed to significantly boost performance. We evaluate our method on three dexterous manipulation tasks and demonstrate a large improvement over state-of-the-art algorithms.
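To make the coordinate transformation idea concrete, the sketch below expresses object and goal points in a hand-centric frame instead of the world frame before feeding them to the policy. This is a minimal sketch under assumed conventions: the function names (world_to_hand_frame, build_observation) and the specific choice of reference frame are illustrative, not the exact transformation used in the paper.

import numpy as np

def world_to_hand_frame(points_w, hand_pos_w, hand_rot_w):
    """Express 3D points given in the world frame in a hand-centric frame.

    points_w:   (N, 3) points in world coordinates.
    hand_pos_w: (3,) wrist/palm position in world coordinates.
    hand_rot_w: (3, 3) rotation matrix mapping hand-frame vectors to the world frame.
    """
    # p_hand = R^T (p_world - t), applied row-wise
    return (points_w - hand_pos_w) @ hand_rot_w

def build_observation(obj_points_w, goal_points_w, hand_pos_w, hand_rot_w, joint_angles):
    """Assemble a hand-centric observation vector (illustrative layout only)."""
    obj_h = world_to_hand_frame(obj_points_w, hand_pos_w, hand_rot_w)
    goal_h = world_to_hand_frame(goal_points_w, hand_pos_w, hand_rot_w)
    return np.concatenate([obj_h.ravel(), goal_h.ravel(), joint_angles])

Expressing the inputs relative to the hand removes the dependence on absolute object pose, which is one plausible reason such a transformation can help a single policy generalize across diverse object poses.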


Method



Overview of our method. The framework for learning dexterous manipulation skills from human videos consists of three modules: reference trajectory extraction from human videos, trajectory-guided state-based policy learning with RL, and vision-based policy learning with behavior cloning.
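As a rough illustration of the second and third modules, the sketch below shows a trajectory-guided reward and the collection of successful rollouts used as behavior-cloning demonstrations. It assumes a generic Gym-style environment exposing both state and vision observations; the reward terms, their weights, and all function names are assumptions made for illustration rather than the formulation used in the paper.

import numpy as np

def trajectory_guided_reward(hand_pose, obj_pose, ref_hand_pose, ref_obj_pose, task_reward):
    """Reward staying close to the reference trajectory extracted from the video
    while still optimizing the task objective (equal weights are illustrative)."""
    r_hand = -np.linalg.norm(hand_pose - ref_hand_pose)
    r_obj = -np.linalg.norm(obj_pose - ref_obj_pose)
    return r_hand + r_obj + task_reward

def collect_demonstrations(state_policies, env, episodes_per_policy=50):
    """Roll out each per-video state-based policy and keep only successful episodes."""
    demos = []
    for policy in state_policies:
        for _ in range(episodes_per_policy):
            obs, episode, done, info = env.reset(), [], False, {}
            while not done:
                action = policy(obs["state"])
                episode.append((obs["vision"], action))  # (visual observation, action) pair
                obs, _, done, info = env.step(action)
            if info.get("success", False):
                demos.append(episode)
    return demos

The collected (visual observation, action) pairs would then be distilled into a single vision-based policy with behavior cloning, without access to any privileged object state.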



Introduction Video and Qualitative Results



BibTeX

@article{chen2024vividex,
  author  = {Chen, Zerui and Chen, Shizhe and Schmid, Cordelia and Laptev, Ivan},
  title   = {{ViViDex}: Learning Vision-based Dexterous Manipulation from Human Videos},
  journal = {arXiv preprint arXiv:2404.15709},
  year    = {2024},
}

Acknowledgements

This work was granted access to the HPC resources of IDRIS under the allocation AD011013147 made by GENCI. It was funded in part by the French government under the management of the Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute), and by the ANR project VideoPredict (ANR-21-FAI1-0002-01).