Multimodal Bayesian Network for Artificial Perception

Diego Faria, Cristiano Premebida, Luis J. Manso, Eduardo P. Ribeiro, Pedro Nunez

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In order to make machines perceive their external environment coherently, multiple sources of sensory information derived from several different modalities can be used (e.g. cameras, LIDAR, stereo, RGB-D, and radars). All these different sources of information can be efficiently merged to form a robust perception of the environment. Some of the mechanisms that underlie this merging of sensor information are highlighted in this chapter, showing that, depending on the type of information, different combination and integration strategies can be used, and that prior knowledge is often required for interpreting the sensory signals efficiently. The notion that perception involves Bayesian inference is an increasingly popular position taken by a considerable number of researchers. Bayesian models have provided insights into many perceptual phenomena, showing that they are a valid approach for dealing with real-world uncertainties and for robust classification, including classification in time-dependent problems. This chapter addresses the use of Bayesian networks applied to sensory perception in the following areas: mobile robotics, autonomous driving systems, advanced driver assistance systems, sensor fusion for object detection, and EEG-based mental state classification.
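The fusion scheme the abstract describes can be illustrated with a minimal sketch: a class variable with a prior, and per-modality likelihoods combined under a conditional-independence (naive Bayes) assumption. All class names and probability values below are illustrative, not taken from the chapter.

```python
# Minimal Bayesian fusion sketch: posterior P(C | obs) is proportional to
# P(C) * product over modalities i of P(obs_i | C), assuming the modalities
# are conditionally independent given the class C.

prior = {"pedestrian": 0.3, "vehicle": 0.7}

# Hypothetical per-modality likelihoods P(observation | class)
camera_lik = {"pedestrian": 0.8, "vehicle": 0.2}  # camera sees a person-like shape
lidar_lik = {"pedestrian": 0.6, "vehicle": 0.4}   # LIDAR cluster is small

def fuse(prior, *likelihoods):
    """Combine a prior with any number of modality likelihoods and normalize."""
    unnorm = dict(prior)
    for lik in likelihoods:
        for c in unnorm:
            unnorm[c] *= lik[c]
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

posterior = fuse(prior, camera_lik, lidar_lik)
print(posterior)  # {'pedestrian': 0.72, 'vehicle': 0.28}
```

Each additional modality simply contributes another likelihood factor, which is what makes this combination strategy attractive when sensors such as cameras, LIDAR, and radar must be merged into a single belief.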
Original language: English
Title of host publication: Bayesian Networks
Publisher: InTech
Number of pages: 16
DOIs: 10.5772/intechopen.81111
Publication status: Published - 5 Nov 2018

Bibliographical note

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Keywords

  • Bayesian networks
  • multimodal perception

Cite this

Faria, D., Premebida, C., Manso, L. J., Ribeiro, E. P., & Nunez, P. (2018). Multimodal Bayesian Network for Artificial Perception. In Bayesian Networks. InTech. https://doi.org/10.5772/intechopen.81111
Faria, Diego ; Premebida, Cristiano ; Manso, Luis J. ; Ribeiro, Eduardo P. ; Nunez, Pedro. / Multimodal Bayesian Network for Artificial Perception. Bayesian Networks. InTech, 2018.
@inbook{0d282e6ece614fc09f5007ea105711da,
title = "Multimodal Bayesian Network for Artificial Perception",
abstract = "In order to make machines perceive their external environment coherently, multiple sources of sensory information derived from several different modalities can be used (e.g. cameras, LIDAR, stereo, RGB-D, and radars). All these different sources of information can be efficiently merged to form a robust perception of the environment. Some of the mechanisms that underlie this merging of sensor information are highlighted in this chapter, showing that, depending on the type of information, different combination and integration strategies can be used, and that prior knowledge is often required for interpreting the sensory signals efficiently. The notion that perception involves Bayesian inference is an increasingly popular position taken by a considerable number of researchers. Bayesian models have provided insights into many perceptual phenomena, showing that they are a valid approach for dealing with real-world uncertainties and for robust classification, including classification in time-dependent problems. This chapter addresses the use of Bayesian networks applied to sensory perception in the following areas: mobile robotics, autonomous driving systems, advanced driver assistance systems, sensor fusion for object detection, and EEG-based mental state classification.",
keywords = "Bayesian networks, multimodal perception",
author = "Faria, Diego and Premebida, Cristiano and Manso, {Luis J.} and Ribeiro, {Eduardo P.} and Nunez, Pedro",
note = "{\circledC} 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.",
year = "2018",
month = "11",
day = "5",
doi = "10.5772/intechopen.81111",
language = "English",
booktitle = "Bayesian Networks",
publisher = "InTech",
address = "Croatia",
}

Faria, D, Premebida, C, Manso, LJ, Ribeiro, EP & Nunez, P 2018, Multimodal Bayesian Network for Artificial Perception. in Bayesian Networks. InTech. https://doi.org/10.5772/intechopen.81111

Multimodal Bayesian Network for Artificial Perception. / Faria, Diego; Premebida, Cristiano; Manso, Luis J.; Ribeiro, Eduardo P.; Nunez, Pedro.

Bayesian Networks. InTech, 2018.

Research output: Chapter in Book/Report/Conference proceeding › Chapter

TY - CHAP

T1 - Multimodal Bayesian Network for Artificial Perception

AU - Faria, Diego

AU - Premebida, Cristiano

AU - Manso, Luis J.

AU - Ribeiro, Eduardo P.

AU - Nunez, Pedro

N1 - © 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

PY - 2018/11/5

Y1 - 2018/11/5

N2 - In order to make machines perceive their external environment coherently, multiple sources of sensory information derived from several different modalities can be used (e.g. cameras, LIDAR, stereo, RGB-D, and radars). All these different sources of information can be efficiently merged to form a robust perception of the environment. Some of the mechanisms that underlie this merging of sensor information are highlighted in this chapter, showing that, depending on the type of information, different combination and integration strategies can be used, and that prior knowledge is often required for interpreting the sensory signals efficiently. The notion that perception involves Bayesian inference is an increasingly popular position taken by a considerable number of researchers. Bayesian models have provided insights into many perceptual phenomena, showing that they are a valid approach for dealing with real-world uncertainties and for robust classification, including classification in time-dependent problems. This chapter addresses the use of Bayesian networks applied to sensory perception in the following areas: mobile robotics, autonomous driving systems, advanced driver assistance systems, sensor fusion for object detection, and EEG-based mental state classification.

AB - In order to make machines perceive their external environment coherently, multiple sources of sensory information derived from several different modalities can be used (e.g. cameras, LIDAR, stereo, RGB-D, and radars). All these different sources of information can be efficiently merged to form a robust perception of the environment. Some of the mechanisms that underlie this merging of sensor information are highlighted in this chapter, showing that, depending on the type of information, different combination and integration strategies can be used, and that prior knowledge is often required for interpreting the sensory signals efficiently. The notion that perception involves Bayesian inference is an increasingly popular position taken by a considerable number of researchers. Bayesian models have provided insights into many perceptual phenomena, showing that they are a valid approach for dealing with real-world uncertainties and for robust classification, including classification in time-dependent problems. This chapter addresses the use of Bayesian networks applied to sensory perception in the following areas: mobile robotics, autonomous driving systems, advanced driver assistance systems, sensor fusion for object detection, and EEG-based mental state classification.

KW - Bayesian networks

KW - multimodal perception

UR - https://www.intechopen.com/online-first/multimodal-bayesian-network-for-artificial-perception/

U2 - 10.5772/intechopen.81111

DO - 10.5772/intechopen.81111

M3 - Chapter

BT - Bayesian Networks

PB - InTech

ER -