Book section
AlphaGo’s move 37 and its implications for AI-supported military decision-making
- Abstract:
- In a match against Lee Sedol, one of the greatest contemporary Go players, DeepMind’s AI programme AlphaGo played a move which stunned commentators at the time, who described it as ‘unthinkable’, ‘surprising’, ‘a big shock’, and ‘bad’. Move 37 turned out to be key to AlphaGo’s victory in that game, and it displays what I describe as the property of ‘unpredictable brilliance’. Unpredictable brilliance also poses a challenge for a central use case for AI in the military, namely AI-enabled decision-support systems. Advanced versions of these systems can be expected to display unpredictable brilliance, while also posing risks, both to the safety of blue force personnel and to a military’s likelihood of success in its campaign objectives. The central task of this chapter is to show how the management of these risks will result in the redistribution of responsibility for performance in combat away from commanders, and towards the institutions that design, build, authorise and regulate these AI-enabled systems. Surprisingly, this redistribution of responsibility is structurally the same for systems in which humans are ‘in the loop’ as for those in which humans are ‘out’ of it.
- Publication status:
- Published
- Peer review status:
- Peer reviewed
- Publisher:
- Chapman and Hall/CRC
- Host title:
- Responsible Use of AI in Military Systems
- Chapter number:
- 12
- Publication date:
- 2024-04-26
- Acceptance date:
- 2023-06-28
- Edition:
- 1st Edition
- EISBN:
- 978-1-003-41037-9
- ISBN:
- 978-1-032-52430-6
- Language:
- English
- Pubs id:
- 1489082
- Local pid:
- pubs:1489082
- Deposit date:
- 2023-07-03
Terms of use
- Copyright holder:
- Thomas Simpson
- Copyright date:
- 2024
- Rights statement:
- © 2024 selection and editorial matter, Jan Maarten Schraagen; individual chapters, the contributors