Knowledge-based Reasoning from Human Grasp Demonstrations for Robot Grasp Synthesis

Diego R. Faria, Pedro Trindade, Jorge Lobo, Jorge Dias

Research output: Contribution to journal › Article

Abstract

Humans excel at everyday manipulation tasks, being able to learn new skills and to adapt to different, complex environments. This results from lifelong learning, as well as from observing other skilled humans. To achieve similar dexterity with robotic hands, cognitive capacity is needed to deal with uncertainty. By extracting relevant multi-sensor information from the environment (objects), knowledge from previous grasping tasks can be generalized and applied in different contexts. Following this strategy, we show that learning from human experience is a way to accomplish robot grasp synthesis for unknown objects. We present an artificial system that relies on knowledge from previous human object-grasping demonstrations. A learning process is adopted to quantify probabilistic distributions and the associated uncertainty. These distributions are combined with prior knowledge to infer proper grasps given the point cloud of an unknown object. The method comprises a twofold process: object decomposition and grasp synthesis. Objects are decomposed into primitives, across which similarities between past observations and new, unknown objects can be established. Grasps are associated with the defined object primitives, so that feasible object regions for grasping can be determined. The hand pose relative to the object is computed for the pre-grasp and for the selected grasp. We have validated the approach on a real robotic platform with a dexterous robotic hand. Results show that segmenting the object into primitives makes it possible to identify the most suitable grasping regions based on previous learning. The proposed approach yields suitable grasps, outperforming more time-consuming analytical and geometrical approaches, and contributes to autonomous grasping.
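The inference step sketched in the abstract (combining distributions learned from human demonstrations with prior knowledge to rank grasps for the primitives of a decomposed object) can be illustrated with a short Bayesian example. The Python sketch below is purely illustrative and not the authors' implementation: the grasp types, the one-dimensional width feature, and all prior/likelihood parameters are assumptions made for this example.

import numpy as np

# Hypothetical grasp vocabulary; the paper's actual grasp set may differ.
GRASP_TYPES = ["power", "precision", "lateral"]

# Prior P(grasp), e.g. from demonstration frequencies (assumed values).
PRIOR = {"power": 0.5, "precision": 0.3, "lateral": 0.2}

# Per-grasp Gaussian model of one primitive feature (width in metres),
# standing in for the distributions learned from human demonstrations.
LIKELIHOOD = {  # grasp -> (mean, std), values invented for the example
    "power": (0.08, 0.02),
    "precision": (0.02, 0.01),
    "lateral": (0.04, 0.015),
}

def gaussian_pdf(x, mean, std):
    # 1-D Gaussian density.
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def posterior_over_grasps(feature):
    # P(grasp | feature) by Bayes' rule, using the models above.
    unnorm = {g: PRIOR[g] * gaussian_pdf(feature, *LIKELIHOOD[g])
              for g in GRASP_TYPES}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

# Primitives of a decomposed point cloud, each reduced here to a single
# width feature (hypothetical values); rank all (region, grasp) pairs.
primitives = {"handle": 0.025, "body": 0.075}
region, grasp, prob = max(
    ((name, g, p) for name, f in primitives.items()
     for g, p in posterior_over_grasps(f).items()),
    key=lambda t: t[2])
print(f"best grasping region: {region}, grasp: {grasp}, posterior: {prob:.2f}")

In the paper, the same principle operates over richer multi-sensor features and full hand poses rather than a single scalar, but the Bayes-rule combination of demonstration-derived likelihoods with prior knowledge is the core idea.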
Original language: English
Pages (from-to): 794-817
Number of pages: 24
Journal: Robotics and Autonomous Systems
Volume: 62
Issue number: 6
DOI: https://doi.org/10.1016/j.robot.2014.02.003
Publication status: Published - Jun 2014

Fingerprint

End effectors · Demonstrations · Robots · Decomposition · Robotics · Sensors · Uncertainty

Cite this

Faria, D. R., Trindade, P., Lobo, J., & Dias, J. (2014). Knowledge-based Reasoning from Human Grasp Demonstrations for Robot Grasp Synthesis. Robotics and Autonomous Systems, 62(6), 794-817. https://doi.org/10.1016/j.robot.2014.02.003