Thesis

Sample complexity of robust learning against evasion attacks

Abstract:

It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. One of the fundamental problems in adversarial machine learning is to quantify how much training data is needed in the presence of so-called evasion attacks, where data is corrupted at test time. In this thesis, we work with the exact-in-the-ball notion of robustness and study the feasibility of adversarially robust learning from the perspective of learning theory, consi...
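As a point of reference (not taken from the record itself, and the thesis's own notation may differ), the exact-in-the-ball robust risk mentioned in the abstract is usually formulated in the learning-theory literature as follows, for a hypothesis h, a target concept c, a distribution D over the input space, and a robustness radius r:

\[
  \mathrm{R}_r(h, c) \;=\; \Pr_{x \sim \mathcal{D}}\bigl[\, \exists\, z \in B_r(x) \;:\; h(z) \neq c(z) \,\bigr]
\]

Here B_r(x) denotes the ball of radius r around x in the chosen metric; the learner is penalised at x whenever its hypothesis disagrees with the ground-truth concept anywhere in that ball, which distinguishes this notion from constant-in-the-ball robustness.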

Authors


Institution: University of Oxford
Division: MPLS
Department: Computer Science
Role: Author

Contributors

Institution: University of Oxford
Role: Supervisor

Institution: University of Oxford
Division: MPLS
Department: Computer Science
Role: Supervisor

Institution: University of Oxford
Division: MPLS
Department: Computer Science
Role: Supervisor
ORCID: 0000-0001-9022-7599

Institution: University of Oxford
Division: MPLS
Department: Computer Science
Role: Examiner

Role: Examiner


Funders

Funder identifier: http://dx.doi.org/10.13039/100010663
Grant: FUN2MODEL, grant agreement No. 834115

Funder identifier: http://dx.doi.org/10.13039/501100014748
Programme: Clarendon Scholarship

Funder identifier: http://dx.doi.org/10.13039/501100000038
Programme: Postgraduate Scholarship


Type of award: DPhil
Level of award: Doctoral
Awarding institution: University of Oxford

