Self-organising zooms for decentralised redundancy management in visual sensor networks

Lukas Esterle, Bernhard Rinner, Peter Lewis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

When visual sensor networks are composed of cameras that can adjust the zoom factor of their own lenses, one must determine the optimal zoom level for each camera for a given task. This gives rise to an important trade-off between the overlap of the different cameras’ fields of view, which provides redundancy, and image quality. In an object tracking task, having multiple cameras observe the same area allows for quicker recovery when a camera fails. In contrast, narrow zooms allow for a higher pixel count on regions of interest, leading to increased tracking confidence. In this paper we propose an approach for the self-organisation of redundancy in a distributed visual sensor network, based on decentralised multi-objective online learning that uses only local information to approximate the global state. We explore the impact of different zoom levels on these trade-offs when tasking omnidirectional cameras, which have a perfect 360-degree view, with keeping track of a varying number of moving objects. We further show how employing decentralised reinforcement learning enables zoom configurations to be achieved dynamically at runtime according to an operator’s preference for maximising either the proportion of objects tracked, the confidence associated with tracking, or redundancy in expectation of camera failure. We show that explicitly taking account of the level of overlap, even based only on local knowledge, improves resilience when cameras fail. Our results illustrate the trade-off between maintaining high confidence and object coverage, and maintaining redundancy in anticipation of future failure. Our approach provides a fully tunable decentralised method for the self-organisation of redundancy in a changing environment, according to an operator’s preferences.
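The abstract describes the learning scheme only at a high level. As an illustration of the general idea, below is a minimal, hedged sketch of how a per-camera, decentralised learner might scalarise coverage, tracking confidence, and redundancy with operator-chosen weights; it is not the authors' actual algorithm, and all names (CameraAgent, ZOOM_LEVELS, the weights w_coverage, w_confidence, w_redundancy) are assumptions introduced for illustration.

```python
import random

# Illustrative sketch only: a per-camera epsilon-greedy learner over discrete
# zoom levels, rewarded by an operator-weighted mix of locally observable
# objectives. Structure and names are assumptions, not the paper's method.

ZOOM_LEVELS = [1.0, 2.0, 4.0]  # hypothetical discrete zoom factors


class CameraAgent:
    def __init__(self, w_coverage, w_confidence, w_redundancy,
                 epsilon=0.1, alpha=0.2):
        self.weights = (w_coverage, w_confidence, w_redundancy)
        self.epsilon = epsilon                     # exploration rate
        self.alpha = alpha                         # learning rate
        self.q = {z: 0.0 for z in ZOOM_LEVELS}     # value estimate per zoom level

    def choose_zoom(self):
        # Epsilon-greedy selection over the camera's own zoom levels.
        if random.random() < self.epsilon:
            return random.choice(ZOOM_LEVELS)
        return max(self.q, key=self.q.get)

    def update(self, zoom, coverage, confidence, overlap):
        # Scalarise the three locally measured objectives with the operator's
        # preference weights, then apply a simple running-average update.
        w_cov, w_conf, w_red = self.weights
        reward = w_cov * coverage + w_conf * confidence + w_red * overlap
        self.q[zoom] += self.alpha * (reward - self.q[zoom])


# Usage: each camera runs its own agent and sees only local measurements,
# e.g. the fraction of nearby objects it tracks, mean tracker confidence,
# and estimated field-of-view overlap with neighbouring cameras.
agent = CameraAgent(w_coverage=0.5, w_confidence=0.2, w_redundancy=0.3)
for step in range(100):
    zoom = agent.choose_zoom()
    # Placeholder local observations; in a real network these would come
    # from the camera's tracker and from neighbour communication.
    coverage, confidence, overlap = random.random(), random.random(), random.random()
    agent.update(zoom, coverage, confidence, overlap)
print(agent.q)
```

Changing the three weights shifts the learned zoom configuration towards coverage, confidence, or redundancy, which mirrors the operator-tunable trade-off the abstract describes.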
Original language: English
Title of host publication: Proceedings : 2015 IEEE Ninth International Conference on Self-Adaptive and Self-Organizing Systems, SASO 2015
Publisher: IEEE
Pages: 41-50
Number of pages: 10
ISBN (Print): 978-1-4673-7535-1
DOIs: 10.1109/SASO.2015.12
Publication status: Published - 2015
Event: 9th IEEE International Conference on Self-Adaptive and Self-Organizing Systems - Cambridge, MA, United States
Duration: 21 Sep 2015 → 25 Sep 2015

Conference

Conference: 9th IEEE International Conference on Self-Adaptive and Self-Organizing Systems
Country: United States
City: Cambridge, MA
Period: 21/09/15 → 25/09/15

Fingerprint

Sensor networks
Redundancy
Cameras
Reinforcement learning
Image quality
Lenses
Pixels
Recovery

Bibliographical note

© 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Funding: Austrian Institute of Technology and the Austrian Federal Ministry of Science, Research and Economy HRSM program.

Keywords

  • visual sensor networks
  • redundancy management
  • decentralised learning
  • self-organisation
  • runtime trade-offs

Cite this

Esterle, L., Rinner, B., & Lewis, P. (2015). Self-organising zooms for decentralised redundancy management in visual sensor networks. In Proceedings : 2015 IEEE Ninth International Conference on Self-Adaptive and Self-Organizing Systems, SASO 2015 (pp. 41-50). IEEE. https://doi.org/10.1109/SASO.2015.12