
Thesis

On the adversarial robustness of Bayesian machine learning models

Abstract:

Bayesian machine learning (ML) models have long been advocated as an important tool for safe artificial intelligence. Yet little is known about their vulnerability to adversarial attacks. Such attacks aim to cause undesired model behaviour (e.g. misclassification) by crafting small perturbations to regular inputs that appear insignificant to humans (e.g. slight blurring of image data). This fairly recent phenomenon has undermined the suitability of many ML models for deployment...
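
The kind of attack described in the abstract can be made concrete with the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), one standard way of crafting such perturbations. The sketch below is illustrative only and is not drawn from the thesis itself; the function name, the epsilon budget, and the choice of PyTorch are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Illustrative FGSM attack (hypothetical helper, not from the thesis):
    nudge the input x by epsilon in the direction that most increases the
    classification loss, yielding a perturbation that is typically
    imperceptible to humans yet flips the model's prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true label y
    loss.backward()                          # populates x_adv.grad
    # Step along the sign of the gradient, then clamp to a valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```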

Authors


Institution:
University of Oxford
Division:
MPLS
Department:
Engineering Science
Research group:
Machine Learning Research Group
Oxford college:
Christ Church
Role:
Author

Contributors

Role:
Supervisor
Role:
Supervisor

Funding

Name:
Konrad-Adenauer-Stiftung
Funder identifier:
http://dx.doi.org/10.13039/501100004079
Programme:
Promotionsstipendium
Type of award:
DPhil
Level of award:
Doctoral
Awarding institution:
University of Oxford
Language:
English
Deposit date:
2021-12-22
