Abstract
In this paper, we present an upgraded version of the 3D modelling system De-SIGN v3 [1]. The system uses speech and gesture recognition to collect input from the user in real time; these inputs are then passed to the main program to perform the required 3D object creation and manipulation operations. The aim of the system is to analyse designer behaviour and the quality of interaction in a virtual reality environment. The system provides the basic functionality for 3D object modelling. Users performed two sets of experiments: in the first, participants drew 3D objects using a keyboard and mouse; in the second, speech and gesture inputs were used for 3D modelling. Evaluation was carried out using questionnaires and task completion ratings. The results showed that speech makes it easy to draw objects, although the system sometimes detects numbers incorrectly, while with gestures it is difficult to hold the hand steady in one position. The completion rate was above 90% with the upgraded system, but precision was low and varied across participants.
Original language | English |
---|---|
Title of host publication | Proceedings of the 13th IEEE Conference on Industrial Electronics and Applications, ICIEA 2018 |
Publisher | IEEE |
Pages | 2811-2816 |
Number of pages | 6 |
ISBN (Electronic) | 9781538637579 |
DOIs | |
Publication status | Published - 26 Jun 2018 |
Event | 13th IEEE Conference on Industrial Electronics and Applications, ICIEA 2018 - Wuhan, China. Duration: 31 May 2018 → 2 Jun 2018 |
Publication series
Name | Proceedings of the 13th IEEE Conference on Industrial Electronics and Applications, ICIEA 2018 |
---|
Conference
Conference | 13th IEEE Conference on Industrial Electronics and Applications, ICIEA 2018 |
---|---|
Country/Territory | China |
City | Wuhan |
Period | 31/05/18 → 2/06/18 |
Bibliographical note
Publisher Copyright: © 2018 IEEE.
Keywords
- 3D Modelling
- CAD
- Gesture
- MMIS
- Object Manipulation
- Speech