Conference item
Explaining explanations in AI
- Abstract:
- Recent work on interpretability in machine learning and AI has focused on building simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it is important to remember Box's maxim that "all models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do-it-yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
- Publication status:
- Published
- Peer review status:
- Peer reviewed
- Files:
- Accepted manuscript (pdf, 736.1KB)
- Publisher copy:
- 10.1145/3287560.3287574
- Publisher:
- Association for Computing Machinery
- Host title:
- FAT* '19 Proceedings of the Conference on Fairness, Accountability, and Transparency
- Journal:
- ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*)
- Pages:
- 279-288
- Publication date:
- 2019-01-29
- Acceptance date:
- 2018-10-13
- DOI:
- 10.1145/3287560.3287574
- ISBN:
- 9781450361255
- Keywords:
- Pubs id:
- pubs:937081
- UUID:
- uuid:f6049f9a-bfae-4694-800a-7b07a5e92a67
- Local pid:
- pubs:937081
- Source identifiers:
- 937081
- Deposit date:
- 2018-11-04
Terms of use
- Copyright holder:
- Mittelstadt et al.
- Copyright date:
- 2019
- Notes:
- Copyright © 2019 Mittelstadt et al. This is the accepted manuscript version of the article. The final version is available online from Association for Computing Machinery at: https://doi.org/10.1145/3287560.3287574