
Conference item

Safety verification for deep neural networks with provable guarantees

Abstract:
Computing systems are becoming ever more complex, increasingly often incorporating deep learning components. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning. This paper describes progress in developing automated verification techniques for deep neural networks, to ensure the safety and robustness of their decisions with respect to input perturbations. These include novel algorithms based on feature-guided search, games, global optimisation and Bayesian methods.
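The instability mentioned in the abstract is easy to probe empirically. The following minimal sketch (a hypothetical toy model and input, using the standard fast gradient sign method rather than the paper's own feature-guided, game-based, optimisation or Bayesian algorithms) checks whether a classifier's decision survives a small L-infinity perturbation:

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier; the same probe applies to any differentiable model.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # an input point
y = torch.tensor([0])                      # its assumed true label

# One fast-gradient-sign step of size eps: perturb x in the direction
# that maximally increases the loss, staying inside an L-infinity ball.
eps = 0.1
loss_fn(model(x), y).backward()
x_adv = (x + eps * x.grad.sign()).detach()

# If the predicted class flips inside this eps-ball, the decision at x
# is not robust at radius eps.
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```

Such attacks only demonstrate non-robustness on sampled perturbations; the verification techniques described in the paper aim instead at provable guarantees over the entire perturbation ball.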
Publication status:
Published
Peer review status:
Peer reviewed

Publisher copy:
10.4230/LIPIcs.CONCUR.2019.1

Authors

Marta Kwiatkowska
Institution:
University of Oxford
Division:
MPLS
Department:
Computer Science
Oxford college:
Trinity College
Role:
Author


Publisher:
Leibniz International Proceedings in Informatics, LIPIcs
Host title:
International Conference on Concurrency Theory (CONCUR 2019)
Journal:
30th International Conference on Concurrency Theory (CONCUR 2019)
Volume:
140
Pages:
1:1-1:5
Publication date:
2019-08-01
Acceptance date:
2019-07-06
DOI:
10.4230/LIPIcs.CONCUR.2019.1
ISSN:
1868-8969
ISBN:
9783959771214


Pubs id:
pubs:1035778
UUID:
uuid:5866ee47-a875-4c93-bd89-1a9352bfe10f
Local pid:
pubs:1035778
Source identifiers:
1035778
Deposit date:
2019-10-25
