Abstract

The advent of large language models (LLMs), their sudden popularity, and their extensive use by an unprepared and, therefore, unskilled public raise profound questions about the societal consequences this might have at both the individual and collective levels. In particular, the benefits of a marginal increase in productivity are offset by the potential for widespread cognitive deskilling or nonskilling. While there has been much discussion of the trust relationship between humans and generative AI technologies, the long-term consequences that the use of generative AI can have on the human capability to make trust decisions in other contexts, including interpersonal relations, have not been considered. We analyze this development through the functionalist lens of a general trust model and deconstruct the potential loss of the human ability to make informed and reasoned trust decisions. From our observations and conclusions, we derive a first set of recommendations to increase awareness of the underlying threats, laying the foundation for a more substantive analysis of the opportunities and threats of delegating educative, cognitive, and knowledge-centric tasks to unrestricted automation.
Original language: English
Pages (from-to): 30-37
Number of pages: 8
Journal: IEEE Technology and Society Magazine
Volume: 44
Issue number: 3
Early online date: 12 Sept 2025
DOIs
Publication status: Published - 12 Sept 2025

Bibliographical note

Copyright © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Title: What’s It Like to Trust an LLM: The Devolution of Trust Psychology?