Journal article

Strategic implications of openness in AI development

Abstract:
This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short‐term impacts of increased openness appear mostly socially beneficial in expectation. The strategic implications of medium and long‐term impacts are complex. The evaluation of long‐term impacts, in particular, may depend on whether the objective is to benefit the present generation or to promote a time‐neutral aggregate of well‐being of future generations. Some forms of openness are plausibly positive on both counts (openness about safety measures, openness about goals). Others (openness about source code, science, and possibly capability) could lead to a tightening of the competitive situation around the time of the introduction of advanced AI, increasing the probability that winning the AI race is incompatible with using any safety method that incurs a delay or limits performance. We identify several key factors that must be taken into account by any well‐founded opinion on the matter.
Publication status:
Published
Peer review status:
Peer reviewed

Publisher copy:
10.1111/1758-5899.12403

Authors

Nick Bostrom
Institution:
University of Oxford
Division:
Humanities Division (HUMS)
Department:
Philosophy Faculty
Role:
Author


Publisher:
Wiley
Journal:
Global Policy
Volume:
8
Issue:
2
Pages:
135-148
Publication date:
2017-02-09
Acceptance date:
2016-11-25
DOI:
10.1111/1758-5899.12403
EISSN:
1758-5899
ISSN:
1758-5880


Language:
English
Pubs id:
pubs:667113
UUID:
uuid:83ea712f-aba3-4176-957a-3bb4af0209d6
Local pid:
pubs:667113
Source identifiers:
667113
Deposit date:
2016-12-23
