Abstract
Natural language processing and deep machine learning enable VCAs to understand, process, and respond to users’ utterances in real time. Users can talk with VCAs in a human-like way, and VCAs are able to engage in dialogue with them. Alongside the benefits users realise, this interaction carries a potential risk of being overheard by VCAs, which erodes trust. Nevertheless, VCA usage is increasing worldwide. Bearing in mind that users are not naïve about privacy issues, this research investigates why people are willing to make themselves vulnerable by using VCAs. We conducted 31 in-depth interviews with users of Siri, Alexa, and Google Assistant in five countries, which illustrate that anthropomorphic features lead users to perceive gratifications in a different form than in interactions with previous machines.
These perceived gratifications lead users to ignore privacy risks and uncertainty, allowing trust to be built between humans and machines.
| Original language | English |
| --- | --- |
| Pages | 608 |
| Number of pages | 1 |
| Publication status | Unpublished - 5 Dec 2022 |
| Event | Australian and New Zealand Marketing Academy - Perth, Australia. Duration: 5 Dec 2022 → 7 Dec 2022. Conference number: 2023 |
Conference
| Conference | Australian and New Zealand Marketing Academy |
| --- | --- |
| Abbreviated title | ANZMAC |
| Country/Territory | Australia |
| City | Perth |
| Period | 5/12/22 → 7/12/22 |
| Internet address | https://www.anzmac2022.com/program/conference-proceedings/ |