Journal article

Deepfake detection with and without content warnings

Abstract:
The rapid advancement of ‘deepfake’ video technology—which uses deep learning artificial intelligence algorithms to create fake videos that look real—has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people’s alertness to and ability to detect a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals who are exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.9%) than a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are given a warning that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake.
Publication status:
Published
Peer review status:
Peer reviewed

Access Document

Publisher copy:
10.1098/rsos.231214

Authors


Institution:
University of Oxford
Division:
Social Sciences Division (SSD)
Department:
Politics & International Relations
Role:
Author
ORCID:
0000-0002-6224-2828

Institution:
University of Oxford
Oxford college:
Nuffield College
Role:
Author
ORCID:
0000-0002-1166-7674


Publisher:
The Royal Society
Journal:
Royal Society Open Science
Volume:
10
Issue:
11
Article number:
231214
Publication date:
2023-11-27
Acceptance date:
2023-11-15
DOI:
10.1098/rsos.231214
EISSN:
2054-5703


Language:
English
Pubs id:
1569481
Local pid:
pubs:1569481
Deposit date:
2023-11-22
