Abstract
Estimating the location and orientation of humans is an essential skill for service and assistive robots. To achieve reliable estimates over a wide area such as an apartment, multiple RGBD cameras are frequently used. These setups have two drawbacks: they are relatively expensive, and they seldom perform effective data fusion of the multiple camera sources at an early stage of the processing pipeline, even though occlusions and partial views make early fusion particularly relevant in these scenarios. The proposal presented in this paper uses graph neural networks to merge the information acquired from multiple camera sources, achieving a mean absolute error below 125 mm for the location and 10 degrees for the orientation using low-resolution RGB images. The experiments, conducted in an apartment with three cameras, benchmark two different graph neural network implementations and a third architecture based on fully connected layers. The software used has been released as open source in a public repository.
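The record itself contains no code; for readers unfamiliar with GNN-based sensor fusion, the sketch below illustrates the general idea of merging per-camera detections of one person through a graph network that regresses location and orientation. It uses PyTorch Geometric, and the class name `TorsoPoseGNN`, the node features, the graph construction, and all dimensions are illustrative assumptions, not the authors' design; their actual implementation is in the open-source repository mentioned in the abstract.

```python
# Hypothetical sketch: fusing per-camera detections of a person with a GNN.
# Graph construction and feature sizes are assumptions for illustration only.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class TorsoPoseGNN(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        # Output: (x, y) location plus (sin, cos) of the orientation angle;
        # the sin/cos encoding avoids the wrap-around discontinuity of angles.
        self.head = nn.Linear(hidden_dim, 4)

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)  # one pooled vector per person graph
        return self.head(g)

# One node per camera detection (e.g., flattened 2D joint coordinates),
# with edges connecting the detections of the same person across cameras.
model = TorsoPoseGNN(in_dim=34)          # 17 joints x 2 coords (assumed)
x = torch.randn(3, 34)                   # three cameras see the person
edge_index = torch.tensor([[0, 1, 0, 2, 1, 2],
                           [1, 0, 2, 0, 2, 1]])  # fully connected graph
batch = torch.zeros(3, dtype=torch.long)  # all nodes belong to graph 0
pred = model(x, edge_index, batch)        # shape [1, 4]: x, y, sin, cos
```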
Original language | English |
---|---|
Title of host publication | 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020 |
Publisher | IEEE |
Pages | 827-832 |
Number of pages | 6 |
ISBN (Electronic) | 978-1-7281-6075-7 |
ISBN (Print) | 978-1-7281-6076-4 |
DOIs | |
Publication status | Published - 14 Oct 2020 |
Event | IEEE International Conference on Robot & Human Interactive Communication (29th), Virtual, Italy. Duration: 31 Aug 2020 → 4 Sept 2020. http://ro-man2020.unina.it |
Publication series
Name | 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020 |
---|---|
Conference
Conference | IEEE International Conference on Robot & Human Interactive Communication |
---|---|
Abbreviated title | RO-MAN |
Country/Territory | Italy |
City | Virtual |
Period | 31/08/20 → 04/09/20 |
Internet address | http://ro-man2020.unina.it |
Bibliographical note
CC BY-SA. © 2020 The Authors.
Keywords
- human tracking
- graph neural networks
- sensorised environments