Conference item
Safety verification of deep neural networks
- Abstract:
- Deep neural networks have achieved impressive experimental results in image classification, but can be surprisingly unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural network...
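The instability described in the abstract can be illustrated with a minimal sketch. The classifier, weights, and perturbation size below are hypothetical (a toy linear model, not the paper's networks or its verification framework); the point is only that a gradient-sign-style nudge of the input can flip the predicted class.

```python
import numpy as np

def predict(W, x):
    # Predicted class = index of the highest score under weights W.
    return int(np.argmax(W @ x))

# Toy 2-class linear classifier over 3-dimensional inputs
# (hypothetical example, not taken from the paper).
W = np.array([[1.00, 0.0, 0.0],
              [0.98, 0.6, 0.6]])
x = np.array([1.0, 0.0, 0.0])    # correctly classified as class 0

# Fast-gradient-sign-style step: nudge x in the direction that raises
# the wrong class's score relative to the right one. A per-coordinate
# change of 0.1 is enough to flip the prediction here.
eps = 0.1
grad = W[1] - W[0]               # gradient of (score_1 - score_0) w.r.t. x
x_adv = x + eps * np.sign(grad)

print(predict(W, x))             # 0
print(predict(W, x_adv))         # 1
```

Verification frameworks such as the one this record describes aim to prove the absence of any such misclassifying perturbation within a given region around the input, rather than merely searching for one.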
- Publication status:
- Published
- Peer review status:
- Peer reviewed
Access Document
- Files:
- (Accepted manuscript, pdf, 6.5MB)
- Publisher copy:
- 10.1007/978-3-319-63387-9_1
Bibliographic Details
- Publisher:
- Springer
- Host title:
- 29th International Conference on Computer Aided Verification (CAV-2017)
- Publication date:
- 2017-07-01
- Acceptance date:
- 2017-05-05
- DOI:
- 10.1007/978-3-319-63387-9_1
- Source identifiers:
- 693886
Item Description
- Pubs id:
- pubs:693886
- UUID:
- uuid:174149e2-847b-47fb-b342-d7420d69daa8
- Local pid:
- pubs:693886
- Deposit date:
- 2017-05-09
Terms of use
- Copyright holder:
- Springer International Publishing
- Copyright date:
- 2017
- Notes:
- © Springer International Publishing AG 2017