Indoor scene perception for object detection and manipulation

Luis J. Manso, Pablo Bustos, Pilar Bachiller, Jose Franco-Campos

Research output: Contribution to journal › Conference article › peer-review


Social robots are designed to interact and share their environments with humans while performing daily activities. They need to build and maintain rich representations of the space and objects around them in order to achieve their goals. In this paper, we propose a framework for building model-based representations of the space surrounding the robots and the objects nearby. The approach considers active perception as the phenomenon resulting from controlled interactions between different model-fitting algorithms and a grammar-based generative mechanism called “Grammars for Active Perception” (GAP). The production rules of these grammars describe how world models can be built and modified, and are associated with the behaviors needed by the model-fitting algorithms in order to succeed. Such descriptions can be used to compute the actions required to build consistent models of the environment. The resulting behavior leverages the a priori knowledge available to the robot, not only to improve the modeling process, but also to guide exploration and visual attention. The models generated using these grammars are attributed graphs that can contain geometric and other semantic properties.
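The abstract describes production rules that extend an attributed-graph world model and that are tied to perceptual behaviors. The sketch below is a hypothetical illustration of that idea, not the paper's actual GAP implementation: the class names, rule, and `verified` attribute are all assumptions made for the example.

```python
# Hypothetical sketch: a grammar-style production rule acting on an
# attributed-graph world model. Names and structure are illustrative
# assumptions, not the actual GAP system described in the paper.

class WorldModel:
    """Attributed graph: nodes carry a symbolic type plus an attribute dict."""
    def __init__(self):
        self.nodes = {}        # node id -> {"type": str, "attrs": dict}
        self.edges = set()     # (src_id, dst_id, label)
        self._next_id = 0

    def add_node(self, ntype, **attrs):
        nid = self._next_id
        self._next_id += 1
        self.nodes[nid] = {"type": ntype, "attrs": attrs}
        return nid

    def add_edge(self, src, dst, label):
        self.edges.add((src, dst, label))


def expand_room_with_table(model, room_id):
    """Production rule: room -> room containing a table (hypothetical).

    Precondition: the matched node must be of type "room".
    Effect: attach an unverified "table" node; in a GAP-like system this
    hypothesis would trigger the perceptual behavior needed by the
    corresponding model-fitting algorithm to confirm or reject it.
    """
    if model.nodes[room_id]["type"] != "room":
        return None
    table_id = model.add_node("table", verified=False)
    model.add_edge(room_id, table_id, "contains")
    return table_id


model = WorldModel()
room = model.add_node("room", name="kitchen")
table = expand_room_with_table(model, room)
```

Representing hypotheses as unverified nodes lets a priori knowledge (the grammar) drive exploration: the robot attends to whatever the rule predicted but perception has not yet confirmed.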
Original language: English
Pages (from-to): 55-56
Number of pages: 2
Journal: Cognitive Processing
Issue number: 1
Publication status: Published - 16 Aug 2012


