
Thesis

Causal inference methods for supporting, understanding, and improving decision-making

Abstract:

Causality and the ability to reason about cause-and-effect relationships are central to decision-making. This thesis contributes to the area of causal inference by proposing new machine learning methods that can be used for supporting, understanding, and improving decision-making, with a focus on the healthcare setting.

Firstly, we introduce several causal inference tools for supporting decision-making by estimating the causal effects of interventions (treatments) from observational data, such as electronic health records. We begin by addressing the under-explored problem of estimating counterfactual outcomes for continuous-valued interventions and propose a method based on generative adversarial networks that achieves state-of-the-art performance and that can help us choose both the correct treatment and dosage for each patient. Then, we shift our attention to the temporal setting where we develop a sequence-to-sequence model that uses domain adversarial training to handle time-dependent confounding and that can help us determine the best sequence of treatments for each patient. Moreover, we introduce the first method that can handle the presence of multi-cause hidden confounders in temporal data. By taking advantage of the dependencies in the treatment assignments over time, our method learns latent variables that can be used as substitutes for the hidden confounders.

Secondly, we integrate counterfactual reasoning into batch inverse reinforcement learning to develop a method for better understanding the decision-making behaviour of experts by modelling their reward functions in terms of preferences over 'what-if' (counterfactual) outcomes. We show that this allows us to obtain an interpretable parameterization of the experts' decision-making process and subsequently uncover the trade-offs and preferences associated with their actions.

Thirdly, we improve the robustness of decision-making by proposing a new model for batch imitation learning that incorporates causal structure into the learnt imitation policy. By ensuring that the imitation policy only depends on the causal parents of the actions, we learn a decision-making guideline that is robust to spurious correlations and that generalizes well to new environments.

Overall, this thesis introduces methodological advances in machine learning capable of reasoning about cause-and-effect relationships that can enable us to improve the delivery of personalized care for patients, support clinical decision-making, and build a more transparent account of clinical practice. We provide a discussion highlighting the challenges of incorporating such methods into practice and include suggestions for future work in this direction.

Authors

Division: MPLS
Department: Engineering Science
Role: Author

Contributors

Role: Supervisor; Institution: University of Oxford; Division: MPLS; Department: Engineering Science
Role: Supervisor
Role: Examiner; Institution: University of Oxford; Division: MPLS; Department: Engineering Science
Role: Examiner


Funding

Funder identifier: http://dx.doi.org/10.13039/100012338
Funding agency for: Bica, I
Grant: EP/N510129/1


Award

Type of award: DPhil
Level of award: Doctoral
Awarding institution: University of Oxford

