Abstract
Nowadays, robots need to be able to interact with humans and objects in a flexible way, and should share the same knowledge (physical and social) as their human counterparts. There is therefore a need for a framework that expresses and shares knowledge in a meaningful way by building a world model. In this paper, we propose a new framework for human–robot interaction that uses ontologies as a powerful means of representing information and promoting the sharing of meaningful knowledge between different agents. Furthermore, ontologies are able to conceptualise the world in which an object such as a robot is situated. In this research, the ontology is considered an improved solution to the grounding problem and enables interoperability between human and robot. The proposed system has been evaluated on a large number of test cases; the results were very promising and support the implementation of the solution.
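The abstract's core idea — an ontology that grounds perceived objects in shared conceptual and spatial knowledge — can be sketched in miniature. The following toy example is an illustrative assumption, not the paper's actual ontology: all concept, instance, and relation names are hypothetical.

```python
# Minimal sketch of ontology-based grounding for human-robot interaction.
# A perceived object ("cup_1") is linked to a shared concept hierarchy,
# so both human and robot can reason over the same structured knowledge.
# All names here are illustrative, not taken from the paper.

class Ontology:
    def __init__(self):
        self.parents = {}      # concept -> parent concept (is-a hierarchy)
        self.instances = {}    # individual -> concept it instantiates
        self.relations = []    # (subject, relation, object) triples

    def add_concept(self, concept, parent=None):
        self.parents[concept] = parent

    def add_instance(self, name, concept):
        self.instances[name] = concept

    def relate(self, subj, rel, obj):
        self.relations.append((subj, rel, obj))

    def is_a(self, concept, ancestor):
        # Walk up the is-a hierarchy to test subsumption.
        while concept is not None:
            if concept == ancestor:
                return True
            concept = self.parents.get(concept)
        return False

    def query(self, rel, obj):
        # All subjects standing in relation `rel` to `obj`,
        # e.g. everything the robot knows to be left of itself.
        return [s for s, r, o in self.relations if r == rel and o == obj]

onto = Ontology()
onto.add_concept("Object")
onto.add_concept("Container", parent="Object")
onto.add_concept("Cup", parent="Container")
onto.add_instance("cup_1", "Cup")          # grounding a perceived object
onto.relate("cup_1", "leftOf", "robot")    # spatial knowledge

print(onto.is_a(onto.instances["cup_1"], "Container"))  # True
print(onto.query("leftOf", "robot"))                    # ['cup_1']
```

In practice such knowledge would live in a standard ontology language (e.g. OWL) rather than a hand-rolled class, but the sketch shows the two operations the abstract emphasises: subsumption over shared concepts (grounding) and queries over spatial relations between the robot, the human, and objects.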
| Original language | English |
| --- | --- |
| Pages (from-to) | 1837-1847 |
| Journal | Robotics and Autonomous Systems |
| Volume | 62 |
| Issue number | 12 |
| Early online date | 17 Jul 2014 |
| DOIs | |
| Publication status | Published - Dec 2014 |
Bibliographical note
Funding Information: With this research, we presented a cognitive framework that includes social and self-driven learning for a possible novel form of human–robot collaboration. We pointed out important characteristics of such a framework for human–robot collaboration. First, the ontology helps to deal with structured knowledge and facilitates interoperability between human and robot. Second, it is important to study how the robot can deal with social knowledge about the objects present in the working area or at home. Finally, the robot can spatially locate objects in relation to itself or the human partner, which makes the solution even more attractive for the robotics community. This research can help the process of introducing robots into the everyday lives of people. Robots can retain their space in hospitals, in working environments, and at home, collaborating tightly with people. They can become a helpful resource rather than slaves to repetitive tasks.

Francesco Rea received his bachelor degree in Software Engineering from the Università di Bergamo, Italy, and the M.Sc. degree in Robotics and Automation with distinction from the University of Salford, England. Since 2009, at the Istituto Italiano di Tecnologia, he has promoted research in the field of humanoid robotics. He received his Ph.D. degree in 2013, exploring aspects of perception and cognition in active vision. In his thesis, "From perception to cognition: a quest for effective active vision in human–robot interaction", starting from different perceptive mechanisms (stereo neuromorphic sensor, frame-based colour stereo vision), he addressed different biological models of cognitive development (attention system, prediction, learning, oculomotor controls). He has recently been involved in the study and simulation of the "Consequences of loading on postural–focal dynamics" in humans/humanoids in collaboration with the US Army Natick Soldier RDEC. Since 2009, within the European projects eMorph and Darwin, he has investigated different topics in the field of biological modelling of human brain functions, testing them on the humanoid robot iCub. His research now continues in exploiting models of child development and validating biological models on the humanoid platform iCub.

Samia Nefti-Meziani received the M.Sc. degree in electrical engineering, the D.E.A. degree in industrial informatics, and the Ph.D. degree in robotics and artificial intelligence from the University of Paris XII, Paris, France, in 1992, 1994, and 1998, respectively. In November 1999, she joined the University of Liverpool, Liverpool, UK, as a Senior Research Fellow engaged with the European research project Occupational Therapy Internet School. Afterwards, she was involved in several projects with the European and UK Engineering and Physical Sciences Research Councils, where she was concerned mainly with model-based predictive control, modelling, swarm optimisation, and decision making. She is currently an Associate Professor of computational intelligence and robotics with the School of Computing Science and Engineering, University of Salford, Greater Manchester, UK. Her current research interests include fuzzy and neural-fuzzy clustering, neuro-fuzzy modelling, and cognitive behaviour modelling in the area of robotics. Mrs. Nefti is a Full Member of the Informatics Research Institute, a Chartered Member of the British Computer Society, and a member of the IEEE Computer Society. She is a member of the international program committees of several conferences and an active member of the European Network for the Advancement of Artificial Cognition Systems.

Umar Manzoor received the B.S. and M.S. degrees in Computer Science from the National University of Computer and Emerging Sciences, and the Ph.D. degree in Multi-Agent Systems from the University of Salford, Manchester, UK, in 2003, 2005, and 2011, respectively. In February 2006, he joined the National University of Computer and Emerging Sciences, Islamabad, Pakistan, as a Lecturer and was later promoted to Assistant Professor. In August 2012, he was promoted to Associate Professor; he is currently working at King Abdulaziz University, Jeddah, Saudi Arabia. He has published extensively in the areas of multi-agent systems, autonomous systems, behaviour monitoring, and network management/monitoring, with work appearing in journals such as Expert Systems with Applications, Applied Soft Computing, Data and Knowledge Engineering, and the Journal of Network and Computer Applications.

Steve Davis graduated from the University of Salford with a degree in Robotic and Electronic Engineering in 1998 and an M.Sc. in Advanced Robotics in 2000. He then became a Research Fellow, gaining his Ph.D. in 2005 before moving to the Italian Institute of Technology in 2008. He returned to Salford in 2012 as a Lecturer in Manufacturing Automation and Robotics.
Publisher Copyright:
© 2014 Elsevier B.V. All rights reserved.
Keywords
- Feature extraction
- Human-robot interaction
- Object recognition
- Ontology enhancement
- Wavelet decomposition