Conference item

Feature-guided black-box safety testing of deep neural networks

Abstract:

Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. Most existing approaches for crafting adversarial examples require some knowledge (architecture, parameters, etc.) of the network at hand. In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge. Our algorithm employs object detection techniques such as...
Publication status:
Published
Peer review status:
Peer reviewed
Version:
Publisher's version

Authors
Institution:
University of Oxford
Division:
MPLS Division
Department:
Computer Science
Oxford college:
Trinity College
Publisher:
Springer Verlag
Volume:
10805
Pages:
408-426
Series:
Lecture Notes in Computer Science
Publication date:
2018-04-12
Acceptance date:
2017-12-21
EISSN:
1611-3349
ISSN:
0302-9743
Pubs id:
pubs:825677
URN:
uri:db3101eb-04d0-42f1-8805-b40f2670f024
UUID:
uuid:db3101eb-04d0-42f1-8805-b40f2670f024
Local pid:
pubs:825677
ISBN:
978-3-319-89959-6