Thesis

Probabilistic numerics: Bayesian quadrature and human-AI collaboration

Abstract:
While machine learning for science has garnered significant attention, existing approaches often require well-defined and error-free expert inputs or reduce experts to mere data providers for the machine. However, at the forefront of scientific advancement, even human experts face uncertainties in their processes, necessitating a balanced collaboration between humans and algorithms. In this thesis, we view science as a human endeavour to update scientists' beliefs based on objective evidence, while algorithms represent encoded opinions.

This thesis investigates aligning algorithms with human beliefs and desiderata through Probabilistic Numerics, a principled framework for tasks such as black-box optimization, integration, and inference. It employs computational agents that address these tasks as machine learning problems using diverse policies. Within this framework, the alignment challenge translates into the efficient synchronization of computational and human agents at both the policy and modelling levels.

From a policy perspective, we focus on Bayesian quadrature as a unified solver. We conceptualize this solver as Bayesian data compression, which compresses datasets into smaller, representative points while propagating their (un)certainty in the distributional estimate. This perspective unifies diverse tasks and policies by framing them as differences in target (belief) distributions. This unification simplifies policy alignment while enhancing flexibility and adaptability across various tasks, including approximate Bayesian inference (Chapter 3), Bayesian optimization and active learning (Chapter 4), applications in battery control problems (Chapter 5), time-series forecasting, and Bayesian continual learning.
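The compression view above can be illustrated with a minimal, hypothetical sketch of vanilla Bayesian quadrature (not the thesis's own method): a Gaussian-process prior with an RBF kernel over f, a uniform measure on [0, 1], and a handful of representative nodes. All names, the length scale, and the test function are illustrative assumptions; the kernel mean and the prior variance of the integral are computed in closed form via the error function.

```python
import numpy as np
from math import erf, sqrt, pi

def bq_integrate(x, y, ell=0.3, jitter=1e-10):
    """Posterior mean and variance of Z = ∫₀¹ f(t) dt under a GP prior on f."""
    X = np.asarray(x, dtype=float)
    # RBF kernel matrix K_ij = exp(-(x_i - x_j)² / (2 ℓ²))
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell ** 2)
    K += jitter * np.eye(len(X))  # numerical stabiliser
    # Kernel mean z_i = ∫₀¹ k(t, x_i) dt, closed form via the error function
    c = ell * sqrt(pi / 2)
    z = np.array([c * (erf((1 - xi) / (sqrt(2) * ell)) + erf(xi / (sqrt(2) * ell)))
                  for xi in X])
    # Prior variance of Z: ∫₀¹∫₀¹ k(t, t') dt dt' (closed form via 1-D reduction)
    zz = 2 * (c * erf(1 / (sqrt(2) * ell)) - ell ** 2 * (1 - np.exp(-0.5 / ell ** 2)))
    w = np.linalg.solve(K, z)          # quadrature weights w = K⁻¹ z
    mean = float(w @ np.asarray(y, dtype=float))
    var = float(zz - z @ w)            # remaining epistemic uncertainty about Z
    return mean, var

nodes = np.linspace(0.05, 0.95, 8)     # the "representative points"
f = lambda t: np.sin(3 * t) + t ** 2   # illustrative integrand
mean, var = bq_integrate(nodes, f(nodes))
# exact value: (1 - cos 3)/3 + 1/3 ≈ 0.9967
```

The dataset of eight evaluations is compressed into weights w, and the posterior variance carries the solver's (un)certainty about the integral forward, exactly the distributional propagation described above.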

On the modelling side, we develop more efficient communication between computational agents and human users to align their beliefs. This involves aligning algorithms with humans while also helping humans align with algorithmic beliefs. The former requires algorithmic methods that elicit human beliefs faithfully, including their uncertainties; the latter requires algorithmic explanations that convey the algorithm's current knowledge to humans. We address these challenges using economic approaches, particularly expected utility theory, encompassing prior elicitation and algorithmic explanation (Chapter 6), as well as connections to information-theoretic approaches (Chapter 7).
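The expected-utility principle invoked above can be sketched in a few lines. This is a hypothetical toy, not the thesis's formulation: the belief is a Monte Carlo sample, the candidate actions form a grid, and the quadratic utility is an illustrative choice whose optimiser is the belief's mean.

```python
import numpy as np

# Pick the action a* = argmax_a E_{θ ~ p}[u(a, θ)] under the current belief p.
rng = np.random.default_rng(0)
theta = rng.normal(loc=1.0, scale=0.5, size=10_000)  # samples from current belief
actions = np.linspace(-2.0, 3.0, 101)                # candidate reports

# Quadratic utility u(a, θ) = -(a - θ)²; its maximiser is the belief mean.
expected_utility = np.array([-np.mean((a - theta) ** 2) for a in actions])
a_star = actions[int(np.argmax(expected_utility))]
```

Swapping in a different utility (e.g. a proper scoring rule) changes which summary of the belief the optimal action reports, which is the lever such economic approaches use for elicitation and explanation.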

Authors


Institution:
University of Oxford
Division:
MPLS
Department:
Engineering Science
Oxford college:
St Catherine's College
Role:
Author
ORCID:
0000-0003-2580-2280

Contributors

Institution:
University of Oxford
Division:
MPLS
Department:
Engineering Science
Role:
Supervisor
ORCID:
0000-0003-1959-012X
Institution:
University of Oxford
Division:
MPLS
Department:
Engineering Science
Role:
Supervisor
ORCID:
0000-0002-0620-3955
Institution:
University of Oxford
Division:
MPLS
Department:
Engineering Science
Role:
Examiner
ORCID:
0000-0002-1143-9786
Role:
Examiner


Funding

Funding agency for:
Adachi, M
Programme:
Clarendon Scholarship

Funding agency for:
Adachi, M
Programme:
Toshizo Watanabe International Scholarship

Programme:
Oxford Kobe Scholarship


DOI:
Type of award:
DPhil
Level of award:
Doctoral
Awarding institution:
University of Oxford


Language:
English
Keywords:
Subjects:
Deposit date:
2025-09-03

Terms of use


