Journal article
Conversational Alignment With Artificial Intelligence in Context
- Abstract:
- The development of sophisticated artificial intelligence (AI) conversational agents based on large language models (LLMs) raises important questions about the relationship between human norms, values, and practices and the design and performance of AI systems. This article explores what it means for AI agents to be conversationally aligned with human communicative norms and practices for handling context and common ground, and proposes a new framework for evaluating developers’ design choices. We begin by drawing on the philosophical and linguistic literature on conversational pragmatics to motivate a set of desiderata, which we call the CONTEXT‐ALIGN framework, for conversational alignment with human communicative practices. We then suggest that current LLM architectures, constraints, and affordances may impose fundamental limitations on achieving full conversational alignment.
- Publication status:
- Published
- Peer review status:
- Peer reviewed
Access Document
- Files:
- Version of record (PDF, 312.6 KB)
- Publisher copy:
- 10.1111/phpe.12205
Authors
- Publisher:
- Wiley
- Journal:
- Philosophical Perspectives
- Publication date:
- 2025-05-29
- DOI:
- 10.1111/phpe.12205
- EISSN:
- 1758-2245
- ISSN:
- 1520-8583
- Language:
- English
- Keywords:
- Source identifiers:
- 2984120
- Deposit date:
- 2025-05-30