Thesis

Between integrals and optima: new methods for scalable machine learning

Abstract:

The success of machine learning is due in part to the effectiveness of the scalable computational methods, such as stochastic gradient descent and Monte Carlo methods, that undergird learning algorithms. This thesis contributes four new scalable methods for distinct problems that arise in machine learning. It introduces a new method for gradient estimation in discrete-variable models, a new objective for maximum likelihood learning in the presence of latent variables, and two new gradient-based differentiable optimization methods. Though different in technique, these contributions each address a critical part of a typical machine learning workflow. Furthermore, each contribution is inspired by an interplay between the numerical problems of optimization and integration, an interplay that forms the central theme of this thesis.
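As an illustrative aside (not a result of the thesis itself), the interplay between integration and optimization described above is visible in the standard score-function (REINFORCE) identity, a baseline on which gradient estimators for discrete-variable models commonly build: differentiating an expectation, which is an integral or sum, yields another expectation that can be estimated by Monte Carlo sampling and passed to stochastic gradient descent. For a discrete distribution $p_\theta$ and objective $f$:

\[
\nabla_\theta \, \mathbb{E}_{x \sim p_\theta}[f(x)]
= \nabla_\theta \sum_x p_\theta(x)\, f(x)
= \sum_x p_\theta(x)\, f(x)\, \nabla_\theta \log p_\theta(x)
= \mathbb{E}_{x \sim p_\theta}\!\left[ f(x)\, \nabla_\theta \log p_\theta(x) \right].
\]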


Authors


Division:
MPLS
Department:
Statistics
Role:
Author

Contributors

Role:
Supervisor
ORCID:
0000-0002-7662-419X
Role:
Supervisor


Type of award:
DPhil
Level of award:
Doctoral
Awarding institution:
University of Oxford


Language:
English
Deposit date:
2020-08-11
