Conference item

Interpretable explanations of black boxes by meaningful perturbation

Abstract:

As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks “look” in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural con...
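The perturbation idea the abstract alludes to can be sketched generically: occlude parts of the input and measure how much the black box's score drops. The sketch below is a simplified occlusion-style variant, not the paper's learned-mask method, and the function names (`perturbation_saliency`, `score_fn`) are illustrative assumptions, not an API from the paper.

```python
import numpy as np

def perturbation_saliency(image, score_fn, patch=8, baseline=0.0):
    """Occlusion-style saliency for a black-box scorer.

    Slide a patch over the image, replace that region with a baseline
    value, and record the drop in the model's score. Large drops mark
    regions the model relies on for its prediction.
    """
    h, w = image.shape[:2]
    base_score = score_fn(image)
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            perturbed = image.copy()
            perturbed[y:y + patch, x:x + patch] = baseline
            # Score drop caused by hiding this region.
            saliency[y:y + patch, x:x + patch] = base_score - score_fn(perturbed)
    return saliency

# Toy black box: the "score" is the mean brightness of the top-left
# 16x16 quadrant, so only that region should register in the map.
def toy_score(img):
    return float(img[:16, :16].mean())

img = np.ones((32, 32))
sal = perturbation_saliency(img, toy_score, patch=8)
```

The paper itself goes further: rather than exhaustively occluding patches, it optimizes a smooth mask by gradient descent to find the smallest perturbation that destroys the prediction, which is what makes the explanation "meaningful" and model-agnostic in principle.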
Publication status:
Published
Peer review status:
Peer reviewed
Version:
Accepted Manuscript

Publisher copy:
10.1109/ICCV.2017.371

Authors


Institution:
University of Oxford
Division:
MPLS Division
Department:
Engineering Science
Role:
Author
Institution:
University of Oxford
Division:
MPLS Division
Department:
Engineering Science
Oxford college:
New College
Role:
Author
Publisher:
IEEE
Pages:
3449-3457
Publication date:
2017-12-25
Acceptance date:
2017-07-17
DOI:
10.1109/ICCV.2017.371
ISSN:
2380-7504
Pubs id:
pubs:821526
URN:
uri:d31f9d61-32da-43d8-878d-b5f934f36a1a
UUID:
uuid:d31f9d61-32da-43d8-878d-b5f934f36a1a
Local pid:
pubs:821526
ISBN:
978-1-5386-1032-9
