Integrating planning perception and action for informed object search

Luis J. Manso*, Marco A. Gutierrez, Pablo Bustos, Pilar Bachiller

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a method to reduce the time spent by a robot with cognitive abilities when looking for objects in unknown locations. It describes how machine learning techniques can be used to decide which places should be inspected first, based on images that the robot acquires passively. The proposal is composed of two concurrent processes. The first one uses the aforementioned images to generate a description of the types of objects found in each object container seen by the robot. This is done passively, regardless of the task being performed. The containers can be tables, boxes, shelves, or any other kind of container of known shape whose contents can be seen from a distance. The second process uses the previously computed estimation of the contents of the containers to decide which container is most likely to hold the object to be found. This second process is deliberative and takes place only when the robot needs to find an object, whether because it is explicitly asked to locate one or because finding it is a step towards fulfilling the robot's mission. Upon failure to guess the right container, the robot can continue making guesses until the object is found. Guesses are made based on the semantic distance between the object to find and the description of the types of objects found in each object container. The paper provides quantitative results comparing the efficiency of the proposed method against two baseline approaches.
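
The search strategy summarised in the abstract amounts to ranking the known containers by the semantic distance between the target object and the object types previously observed in each container, then inspecting them in order until the target is found. The following is a minimal, self-contained sketch of that idea, not the authors' implementation: the semantic_distance function, the toy category table, and the inspect callback are illustrative assumptions, since the paper relies on a learned semantic similarity measure and on real perception.

    from typing import Callable, Dict, List, Optional

    def semantic_distance(a: str, b: str) -> float:
        # Toy stand-in: 0.0 for identical labels, 0.5 for labels in the same
        # hand-coded category, 1.0 otherwise. A real system would use a
        # learned semantic similarity measure (e.g. word embeddings).
        categories = {
            "mug": "kitchen", "plate": "kitchen", "cereal box": "kitchen",
            "stapler": "office", "notebook": "office", "pen": "office",
        }
        if a == b:
            return 0.0
        if a in categories and categories.get(a) == categories.get(b):
            return 0.5
        return 1.0

    def container_score(target: str, contents: List[str]) -> float:
        # Score a container by the smallest distance between the target and
        # any object type previously seen in it (lower is more promising).
        if not contents:
            return float("inf")
        return min(semantic_distance(target, obj) for obj in contents)

    def informed_search(target: str,
                        containers: Dict[str, List[str]],
                        inspect: Callable[[str], bool]) -> Optional[str]:
        # Visit containers in order of increasing semantic distance and stop
        # as soon as an inspection succeeds; return the container name or None.
        ranked = sorted(containers,
                        key=lambda c: container_score(target, containers[c]))
        for name in ranked:
            if inspect(name):  # drive to the container and look for the target
                return name
        return None

    if __name__ == "__main__":
        observed = {
            "kitchen table": ["mug", "plate"],
            "office shelf": ["stapler", "notebook"],
        }
        # Fake inspection for the example: the pen is on the office shelf.
        print(informed_search("pen", observed,
                              inspect=lambda c: c == "office shelf"))

In this sketch the office shelf is inspected first because its observed contents are semantically closer to "pen" than those of the kitchen table, mirroring the paper's idea of guessing the most promising container first and retrying on failure.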
Original language: English
Pages (from-to): 285–296
Number of pages: 12
Journal: Cognitive Processing
Volume: 19
DOIs
Publication status: Published - 14 Aug 2017
