Journal article
Deepfake detection with and without content warnings
- Abstract:
- The rapid advancement of ‘deepfake’ video technology—which uses deep learning artificial intelligence algorithms to create fake videos that look real—has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to and ability to detect a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.9%) than a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are warned that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake.
- Publication status:
- Published
- Peer review status:
- Peer reviewed
Access Document
- Files:
- Version of record (PDF, 514.9 KB)
- Publisher copy:
- 10.1098/rsos.231214
- Publisher:
- The Royal Society
- Journal:
- Royal Society Open Science
- Volume:
- 10
- Issue:
- 11
- Article number:
- 231214
- Publication date:
- 2023-11-27
- Acceptance date:
- 2023-11-15
- DOI:
- 10.1098/rsos.231214
- EISSN:
- 2054-5703
- Language:
- English
- Keywords:
- Pubs id:
- 1569481
- Local pid:
- pubs:1569481
- Deposit date:
- 2023-11-22
Terms of use
- Copyright holder:
- Lewis et al.
- Copyright date:
- 2023
- Rights statement:
- © 2023 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License, which permits unrestricted use, provided the original author and source are credited.
- Licence:
- CC Attribution (CC BY)