Abstract
Gesturing offers a natural and intuitive alternative input modality for design; however, standard input devices do not fully capture natural hand motions. A key challenge lies in understanding how gesturing can contribute to human–computer interaction and in identifying the patterns within gestures. This paper analyzes human gestures to define a gesture vocabulary for descriptive mid-air interactions in a virtual reality environment. We conducted experiments in which twenty participants described two chairs of differing complexity (one simple, one abstract). The paper presents a detailed analysis of gesture distribution and hand preferences for each description task, and compares the proposed approach to defining a vocabulary from combined gestures (GestAlt) with previously suggested methods. The findings show that GestAlt successfully describes the gestures employed in both tasks (60% of all gestures for the simple chair and 69% for the abstract chair) and can inform the development of an intuitive mid-air interface based on gesture recognition.
Original language | English |
---|---|
Pages (from-to) | 11-22 |
Number of pages | 12 |
Journal | Cognition, Technology and Work |
Volume | 20 |
Issue number | 1 |
Early online date | 7 Nov 2017 |
DOIs | |
Publication status | Published - 1 Feb 2018 |
Bibliographical note
Publisher Copyright: © 2017, Springer-Verlag London Ltd., part of Springer Nature.
Keywords
- Gesture recognition
- Gesture vocabulary
- Human–computer interaction
- Interface design
- User-centered design
- Virtual reality