Thesis
Automated and verified deep learning
- Abstract:
In the last decade, deep learning has enabled remarkable progress in various fields such as image recognition, machine translation, and speech recognition. We are also witnessing an explosion in the range of applications. However, there are many challenges that stand in the way of the widespread deployment of deep learning. In this thesis, we focus on two of the key challenges, namely, neural network verification and automated machine learning.
Firstly, deep neural networks are infamous for being 'black boxes' and for making unexpected mistakes. For reliable AI, we want systems that are consistent with specifications such as fairness, unbiasedness and robustness. We focus on verifying the adversarial robustness of neural networks, which aims at proving the existence or non-existence of an adversarial example. This non-convex problem is commonly approximated with a convex relaxation. We make two important contributions in this direction. First, we propose a specialised dual solver for a new convex relaxation. This was essential because, although the relaxation is tighter than previous relaxations, it has an exponential number of constraints that makes existing dual solvers inapplicable. Second, we design a tighter relaxation for the problem of verifying robustness to input perturbations within the probability simplex. The size of our relaxation is linear in the number of neurons, which enables us to design simpler and more efficient algorithms. Empirically, we demonstrate the performance of our methods by verifying the respective specifications on common verification benchmarks.
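To illustrate the general idea of certifying robustness with a convex relaxation (this is the simplest standard baseline, interval bound propagation, not the tighter relaxations contributed by the thesis), the sketch below propagates box bounds over an epsilon-ball of inputs through a tiny hypothetical ReLU network; if the worst-case logit margin of the predicted class stays positive, no adversarial example can exist in the ball.

```python
import numpy as np

def ibp_bounds(lower, upper, weights, biases):
    """Propagate interval (box) bounds through affine + ReLU layers.

    Interval bound propagation is a standard convex-relaxation baseline
    for robustness verification; the network here is hypothetical.
    """
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (upper + lower) / 2.0
        radius = (upper - lower) / 2.0
        # Affine layer: the center moves by W, the radius by |W|.
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        lower = new_center - new_radius
        upper = new_center + new_radius
        if i < len(weights) - 1:  # ReLU on hidden layers (monotone, elementwise)
            lower = np.maximum(lower, 0.0)
            upper = np.maximum(upper, 0.0)
    return lower, upper

# Tiny hypothetical 2-2-2 network and an eps-ball around an input x.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 2)), np.zeros(2)
W2, b2 = rng.standard_normal((2, 2)), np.zeros(2)
x, eps = np.array([0.5, -0.2]), 0.05
lo, up = ibp_bounds(x - eps, x + eps, [W1, W2], [b1, b2])
# Class 0 is certifiably robust if its worst-case margin over class 1 is positive.
margin_lower = lo[0] - up[1]
print("certified" if margin_lower > 0 else "inconclusive")
```

Because the box relaxation is loose, the result is often "inconclusive" even when the network is in fact robust; tighter relaxations such as those studied in the thesis shrink this gap at higher computational cost.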
Secondly, deep neural networks require extensive human effort and expertise. We consider automated machine learning, or meta learning, which aims at automating the process of applying machine learning. We make three contributions in this context. First, we propose efficient approximations for the bi-level formulation of meta learning. We demonstrate their efficiency in the context of learning to generate synthetic data for training neural networks by optimizing state-of-the-art photorealistic renderers. Second, we propose a technique to automatically optimize the learning rate of gradient-based meta learning algorithms, demonstrating a substantial reduction in the need to tune training hyperparameters. Third, we show an application by tackling video segmentation as a meta learning problem and demonstrate state-of-the-art results on common benchmarks.
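The bi-level structure behind these contributions can be sketched on a toy problem (this assumes simple quadratic train/validation losses and a greedy one-step hypergradient; it is not the thesis's actual algorithm): an inner loop takes gradient steps on the training loss, while an outer loop tunes the learning rate by differentiating the validation loss through one unrolled inner step.

```python
import numpy as np

# Hypothetical quadratic objectives for illustration only.
def grad_train(w):  # inner objective: ||w - [1, 2]||^2 / 2
    return w - np.array([1.0, 2.0])

def grad_val(w):    # outer objective: ||w - [1.2, 1.8]||^2 / 2
    return w - np.array([1.2, 1.8])

w, alpha, meta_lr = np.zeros(2), 0.1, 0.05
for _ in range(200):
    g = grad_train(w)
    w_new = w - alpha * g                 # inner (training) step
    # One-step hypergradient, holding w and g fixed:
    #   d/d alpha  L_val(w - alpha * g) = -grad_val(w_new) . g
    hypergrad = -grad_val(w_new) @ g
    alpha -= meta_lr * hypergrad          # outer (meta) step on the learning rate
    w = w_new

print(f"learned alpha={alpha:.3f}, w={np.round(w, 3)}")
```

The learning rate adapts itself from its (deliberately poor) initial value, so the only hyperparameter left to the user is the much less sensitive meta learning rate; that trade is the essence of reducing hyperparameter tuning via meta learning.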
- DOI:
- Type of award: DPhil
- Level of award: Doctoral
- Awarding institution: University of Oxford
- Language: English
- Keywords:
- Subjects:
- Deposit date: 2022-03-08
Terms of use
- Copyright holder: Behl, HS
- Copyright date: 2021