
Conference item

Safety verification of deep neural networks

Abstract:

Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural network...
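
As a rough illustration of the instability described in the abstract (and not the SMT-based verification framework developed in the paper), the sketch below perturbs an input to a toy feed-forward classifier with the fast gradient sign method. The model, random input, and epsilon value are placeholder assumptions, so the prediction may or may not actually flip; the point is only to show how a small, targeted change to the input is constructed.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy feed-forward network standing in for an image classifier (assumption).
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 784)            # placeholder "image", flattened to a vector
label = model(x).argmax(dim=1)    # treat the current prediction as the true class

# Gradient of the loss with respect to the input pixels.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), label)
loss.backward()

# Small step in the direction that increases the loss: a minimal change to the
# input that can be enough to change the classification.
epsilon = 0.05
perturbed = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction: ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(perturbed).argmax(dim=1).item())
```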

Publication status:
Published
Peer review status:
Peer reviewed

Publisher copy:
10.1007/978-3-319-63387-9_1

Authors


Author 1
Institution: University of Oxford
Division: MPLS
Department: Computer Science
Role: Author

Author 2
Institution: University of Oxford
Oxford college: Trinity College
Role: Author

Author 3
Institution: University of Oxford
Division: MPLS
Department: Computer Science
Role: Author

Author 4
Institution: University of Oxford
Oxford college: Magdalen College
Role: Author
Publisher:
Springer
Host title:
29th International Conference on Computer Aided Verification (CAV-2017)
Publication date:
2017-07-01
Acceptance date:
2017-05-05
DOI:
10.1007/978-3-319-63387-9_1
Source identifiers:
693886
Pubs id:
pubs:693886
UUID:
uuid:174149e2-847b-47fb-b342-d7420d69daa8
Local pid:
pubs:693886
Deposit date:
2017-05-09
