Thesis
Between integrals and optima: new methods for scalable machine learning
- Abstract:
- The success of machine learning is due in part to the effectiveness of scalable computational methods, such as stochastic gradient descent and Monte Carlo methods, that undergird learning algorithms. This thesis contributes four new scalable methods for distinct problems that arise in machine learning: a new method for gradient estimation in discrete-variable models, a new objective for maximum likelihood learning in the presence of latent variables, and two new gradient-based differentiable optimization methods. Although quite different, these contributions each address a critical part of a typical machine learning workflow. Furthermore, each contribution is inspired by an interplay between the numerical problems of optimization and integration, an interplay that forms the central theme of this thesis.
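As a toy illustration of the optimization/integration interplay the abstract describes (a minimal sketch, not a method from the thesis itself), the snippet below uses the standard score-function (REINFORCE) estimator: the gradient of an expectation over a discrete variable is itself an integral, so optimizing such a model requires solving an integration problem at every step. The objective `f` and the helper names are illustrative only.

```python
# Minimal sketch: score-function (REINFORCE) gradient estimation for a
# discrete (Bernoulli) variable. This is a textbook baseline estimator,
# not the thesis's contribution.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Arbitrary illustrative objective on a binary variable.
    return (x - 0.45) ** 2

def score_function_grad(theta, n_samples=100_000):
    # Estimate d/d(theta) E_{x ~ Bernoulli(theta)}[f(x)] as the
    # Monte Carlo average of f(x) * d/d(theta) log p(x; theta).
    x = (rng.random(n_samples) < theta).astype(float)
    score = x / theta - (1.0 - x) / (1.0 - theta)
    return np.mean(f(x) * score)

theta = 0.3
exact = f(1.0) - f(0.0)  # closed form for a Bernoulli expectation
print(f"Monte Carlo estimate: {score_function_grad(theta):+.4f}")
print(f"Exact gradient:       {exact:+.4f}")
```

The estimate converges to the exact gradient (0.1 here) but with noticeable Monte Carlo variance; reducing that variance for discrete-variable models is exactly the kind of problem the abstract's first contribution concerns.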
- Authors:
- Maddison, C
- DOI:
- Type of award:
- DPhil
- Level of award:
- Doctoral
- Awarding institution:
- University of Oxford
- Language:
- English
- Keywords:
- Subjects:
- Deposit date:
- 2020-08-11
Terms of use
- Copyright holder:
- Maddison, C
- Copyright date:
- 2020