Conference item

Safety verification of deep neural networks

Abstract:

Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural network...

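The instability the abstract refers to can be illustrated with a minimal sketch (not the paper's verification framework): a gradient-sign perturbation on a toy linear classifier, where a change of at most 0.05 per input coordinate flips the predicted class. All weights and inputs below are made up for illustration.

```python
import numpy as np

# Hypothetical linear classifier: class 1 if w.x + b > 0, else class 0.
w = np.array([1.0, -2.0, 3.0])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, 0.4, 0.1])   # classified as class 1 (score 0.1)
eps = 0.05                       # perturbation budget per coordinate

# Push each coordinate against the sign of its weight to lower the score.
x_adv = x - eps * np.sign(w)

predict(x)      # class 1
predict(x_adv)  # class 0: a 0.05-per-coordinate change flips the label
```

The same effect, at far smaller relative perturbation sizes, is what makes deep networks' misclassifications under adversarial inputs a safety concern.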
Publication status:
Published
Peer review status:
Peer reviewed
Version:
Accepted manuscript

Publisher copy:
10.1007/978-3-319-63387-9_1

Authors

Department:
Oxford, MPLS, Computer Science
Role:
Author

Department:
Trinity College
Role:
Author

Department:
Oxford, MPLS, Computer Science
Role:
Author

Department:
Oxford, Colleges and Halls, Magdalen College
Role:
Author
Publisher:
Springer
Publication date:
2017-07-05
Acceptance date:
2017-05-05
DOI:
10.1007/978-3-319-63387-9_1
Pubs id:
pubs:693886
URN:
uri:174149e2-847b-47fb-b342-d7420d69daa8
UUID:
uuid:174149e2-847b-47fb-b342-d7420d69daa8
Local pid:
pubs:693886
