Conference item

Position: social choice should guide AI alignment in dealing with diverse human feedback

Abstract:
Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about “collective” preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on Social Choice for AI Ethics and Safety held in Berkeley, CA, USA in December 2023.
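One illustrative reading of the aggregation question raised in the abstract: given several annotators' diverging rankings of the same set of model outputs, a classic social choice rule such as the Borda count produces a single collective ranking. The sketch below is a minimal, hypothetical example and is not drawn from the paper; the annotator names, outputs, and data are invented purely for illustration.

    # Minimal sketch (assumptions, not from the paper): aggregating
    # diverging annotator rankings of model outputs with the Borda count,
    # a classic social choice rule. All data here are hypothetical.
    from collections import defaultdict

    # Each annotator ranks the same candidate outputs, best first.
    rankings = {
        "annotator_1": ["output_A", "output_B", "output_C"],
        "annotator_2": ["output_B", "output_A", "output_C"],
        "annotator_3": ["output_C", "output_B", "output_A"],
    }

    # Borda count: with m candidates, the candidate in position i
    # (0-indexed) receives m - 1 - i points from that annotator.
    scores = defaultdict(int)
    for ranking in rankings.values():
        m = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += m - 1 - position

    # Collective ranking: candidates sorted by total Borda score.
    collective = sorted(scores, key=scores.get, reverse=True)
    print(collective)  # ['output_B', 'output_A', 'output_C']

Other rules from social choice (e.g. Copeland or approval-based rules) could be substituted for the Borda count in the same way; which rule is appropriate is exactly the kind of question the paper argues social choice theory should inform.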
Publication status:
Published
Peer review status:
Peer reviewed

Access Document

Publication website:
https://proceedings.mlr.press/v235/conitzer24a.html

Authors

Institution:
University of Oxford
Division:
Humanities (HUMS)
Department:
Philosophy
Role:
Author


Publisher:
PMLR
Host title:
Proceedings of the 41st International Conference on Machine Learning
Pages:
9346-9360
Series:
Proceedings of Machine Learning Research
Series number:
235
Publication date:
2024-07-08
Acceptance date:
2024-04-21
Event title:
41st International Conference on Machine Learning (ICML 2024)
Event location:
Vienna, Austria
Event website:
https://icml.cc/Conferences/2024
Event start date:
2024-07-21
Event end date:
2024-07-27
EISSN:
2640-3498


Language:
English
Pubs id:
2036178
Local pid:
pubs:2036178
Deposit date:
2024-11-26
