Embodied interaction has the potential to provide users with uniquely engaging and meaningful experiences. m+m: Movement + Meaning middleware is an open-source software framework that enables users to construct real-time, interactive systems based on movement data. The acquisition, processing, and rendering of movement data can be local or distributed, real-time or off-line. Key features of the m+m middleware are a small footprint in terms of computational resources, portability across platforms, and high performance in terms of low latency and high bandwidth. Examples of systems that can be built with m+m as the internal communication middleware include those for the semantic interpretation of human movement data, machine-learning models for movement recognition, and the mapping of movement data as a controller for online navigation, collaboration, and distributed performance.
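The acquisition → processing → rendering pipeline described above can be illustrated with a minimal sketch. This is not m+m's actual API; all names here (`MovementSample`, `acquire`, `speed`, `to_controller`, the 20.0 speed ceiling) are hypothetical, and the stages stand in for what m+m would route between distributed endpoints:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MovementSample:
    """One timestamped 3D position sample (hypothetical message format)."""
    timestamp: float          # seconds
    position: Tuple[float, float, float]  # (x, y, z) in metres

def acquire(n: int) -> List[MovementSample]:
    # Acquisition stage: here we synthesize n samples moving along x
    # at 10 m/s; in a real system this would read from a sensor.
    return [MovementSample(i * 0.01, (i * 0.1, 0.0, 0.0)) for i in range(n)]

def speed(samples: List[MovementSample]) -> List[float]:
    # Processing stage: finite-difference speed between consecutive samples.
    out = []
    for a, b in zip(samples, samples[1:]):
        dt = b.timestamp - a.timestamp
        dist = sum((q - p) ** 2 for p, q in zip(a.position, b.position)) ** 0.5
        out.append(dist / dt)
    return out

def to_controller(speeds: List[float], max_speed: float = 20.0) -> List[float]:
    # Rendering/mapping stage: normalize speed into a [0, 1] control
    # signal, e.g. to drive navigation or a performance parameter.
    return [min(s / max_speed, 1.0) for s in speeds]

controls = to_controller(speed(acquire(5)))
```

In a distributed m+m deployment, each of these stages could run on a separate machine, with the middleware carrying the sample stream between them in real time or replaying it off-line.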
|Title of host publication||MOCO '16|
|Subtitle of host publication||Proceedings of the 3rd International Symposium on Movement and Computing|
|Publication status||Published - 5 Jul 2016|
|Event||3rd International Symposium on Movement and Computing - Thessaloniki, Greece|
Duration: 5 Jul 2016 → 6 Jul 2016
|Conference||3rd International Symposium on Movement and Computing|
|Period||5/07/16 → 6/07/16|
Bibliographical note: Copyright © 2016, ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored.
- Real-time interaction