Journal article

The need for an empirical research program regarding human–AI relational norms

Abstract:
As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people’s cooperative expectations. In the case of human–human dyads, different relationships are governed by different norms: For example, how two strangers—versus two friends or colleagues—should interact when faced with a similar coordination problem often differs. How will the rise of ‘social’ artificial intelligence (and ultimately, superintelligent AI) complicate people’s expectations about the cooperative norms that should govern different types of relationships, whether human–human or human–AI? Do people expect AI to adhere to the same cooperative dynamics as humans when in a given social role? Conversely, will they begin to expect humans in certain types of relationships to act more like AI? Here, we consider how people’s cooperative expectations may pull apart between human–human and human–AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types. We see the data resulting from our proposal as relevant for understanding people’s relationship-specific cooperative expectations in an age of social AI, which may also forecast potential resistance towards AI systems occupying certain social roles. Finally, these data can form the basis for ethical evaluations: What relationship-specific cooperative norms we should adopt for human–AI interactions, or reinforce through responsible AI design, depends partly on empirical facts about what norms people find intuitive for such interactions (along with the costs and benefits of maintaining these). Toward the end of the paper, we discuss how these relational norms may change over time and consider the implications of this for the proposed research program.
Publication status:
Published
Peer review status:
Peer reviewed


Authors


Institution:
University of Oxford
Division:
MSD
Department:
Psychiatry
Role:
Author
ORCID:
0000-0002-5944-0209

Institution:
University of Oxford
Role:
Author

Institution:
University of Oxford
Division:
HUMS
Department:
Philosophy
Oxford college:
St Cross College
Role:
Author
ORCID:
0000-0003-1691-6403

Institution:
University of Oxford
Division:
HUMS
Department:
Theology and Religion
Role:
Author
ORCID:
0000-0001-9691-2888


Funding

Funder identifier:
https://ror.org/03cpyc314
Grant:
AISG3-GV-2023-012
Programme:
AI Singapore Programme

Funder identifier:
https://ror.org/029chgv08
Grant:
203132/Z/16/Z


Publisher:
Springer Nature
Journal:
AI and Ethics
Volume:
5
Issue:
1
Pages:
71–80
Publication date:
2025-01-09
Acceptance date:
2024-11-16
DOI:
10.1007/s43681-024-00631-2
EISSN:
2730-5961
ISSN:
2730-5953


Language:
English
Pubs id:
2077127
Local pid:
pubs:2077127
Deposit date:
2025-01-30
