One of the long-term objectives of artificial cognition is that robots will increasingly be capable of interacting with their human counterparts in open-ended tasks that can change over time. To achieve this end, the robot should be able to acquire and internalize new knowledge from human-robot interaction online. This implies that the robot should attend to and perceive the available cues, both verbal and nonverbal, that carry information about the inner qualities of its human counterparts. Social cognition focuses on the perceiver’s ability to build cognitive representations of factors (emotions, intentions, ...) and their contexts. These representations should give meaning to the sensed inputs and mediate the behavioural responses of the robot within this social scenario. This paper describes how the abilities for building such cognitive representations are currently being endowed in the cognitive software architecture RoboCog. It also presents the first set of complete experiments, involving different user profiles. These experiments show the promising possibilities of the proposal and reveal the main improvements to be addressed in future work.
Title of host publication: Proceedings of Workshop of Physical Agents
Publication status: Published - 2014