Journal article
The need for an empirical research program regarding human–AI relational norms
- Abstract:
- As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people's cooperative expectations. In the case of human–human dyads, different relationships are governed by different norms: For example, how two strangers—versus two friends or colleagues—should interact when faced with a similar coordination problem often differs. How will the rise of 'social' artificial intelligence (and ultimately, superintelligent AI) complicate people's expectations about the cooperative norms that should govern different types of relationships, whether human–human or human–AI? Do people expect AI to adhere to the same cooperative dynamics as humans when in a given social role? Conversely, will they begin to expect humans in certain types of relationships to act more like AI? Here, we consider how people's cooperative expectations may pull apart between human–human and human–AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types. We see the data resulting from our proposal as relevant for understanding people's relationship-specific cooperative expectations in an age of social AI, which may also forecast potential resistance towards AI systems occupying certain social roles. Finally, these data can form the basis for ethical evaluations: What relationship-specific cooperative norms we should adopt for human–AI interactions, or reinforce through responsible AI design, depends partly on empirical facts about what norms people find intuitive for such interactions (along with the costs and benefits of maintaining these). Toward the end of the paper, we discuss how these relational norms may change over time and consider the implications of this for the proposed research program.
- Publication status:
- Published
- Peer review status:
- Peer reviewed
Access Document
- Files:
- Version of record (PDF, 867.1 KB)
- Other (PDF, 400.9 KB)
- Publisher copy:
- https://doi.org/10.1007/s43681-024-00631-2
Funding
- National Research Foundation, Singapore
- Funder identifier:
- https://ror.org/03cpyc314
- Grant:
- AISG3-GV-2023-012
- Programme:
- AI Singapore Programme
- Wellcome Trust
- Funder identifier:
- https://ror.org/029chgv08
- Grant:
- 203132/Z/16/Z
- Publisher:
- Springer Nature
- Journal:
- AI and Ethics
- Volume:
- 5
- Issue:
- 1
- Pages:
- 71–80
- Publication date:
- 2025-01-09
- Acceptance date:
- 2024-11-16
- DOI:
- 10.1007/s43681-024-00631-2
- EISSN:
- 2730-5961
- ISSN:
- 2730-5953
- Language:
- English
- Keywords:
- Pubs id:
- 2077127
- Local pid:
- pubs:2077127
- Deposit date:
- 2025-01-30
Terms of use
- Copyright holder:
- Reinecke et al.
- Copyright date:
- 2025
- Rights statement:
- © 2025. The Authors. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
- Notes:
- A correction to this article is available online from Springer Nature at: https://doi.org/10.1007/s43681-025-00659-y
- Licence:
- CC Attribution (CC BY)