Thesis

Self-supervised representation learning for trustworthy ultrasound video analysis

Abstract:

Obstetric ultrasound (US) is a routinely used imaging modality for monitoring fetal development during pregnancy. With the rise of deep learning (DL)-based techniques, there is growing potential to reduce reliance on highly trained sonographers. However, the clinical adoption of DL models remains limited due to concerns around robustness, explainability, and reliability—key components of trustworthiness. This thesis addresses these challenges to advance trustworthy DL-based analysis of fetal US.

Given that manual annotation of US videos by expert sonographers is costly and time-consuming, we first investigate self-supervised learning as a scalable alternative. We propose a novel framework for feature-level local self-supervised learning, combining contrastive learning with Rubik’s cube recovery, cube reconstruction, and a new strategy for generating local contrastive pairs. This method enhances the robustness of learned representations and demonstrates improved performance in two key downstream clinical tasks.
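As a rough illustration of the local contrastive component only (the Rubik's cube recovery and cube reconstruction objectives are omitted), the sketch below pairs corresponding spatial locations of two augmented views at the feature-map level and applies an InfoNCE-style loss. The class name, toy encoder, and temperature value are hypothetical placeholders, not the thesis implementation.

```python
# Minimal sketch of feature-level local contrastive learning (hypothetical names;
# the actual framework also combines Rubik's cube recovery and cube
# reconstruction objectives, which are not shown here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalContrastiveLoss(nn.Module):
    """InfoNCE over spatial positions of two feature maps from augmented views."""

    def __init__(self, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # feats_*: (B, C, H, W) feature maps of two views of the same clip/frame.
        b, c, h, w = feats_a.shape
        za = F.normalize(feats_a.flatten(2), dim=1)  # (B, C, H*W)
        zb = F.normalize(feats_b.flatten(2), dim=1)
        # Similarity between every local position in view A and view B.
        logits = torch.einsum("bcn,bcm->bnm", za, zb) / self.temperature  # (B, N, N)
        # Positive pair = the same spatial index in the other view.
        targets = torch.arange(h * w, device=feats_a.device).repeat(b)
        return F.cross_entropy(logits.reshape(-1, h * w), targets)

# Toy usage: a small convolutional encoder stands in for the ultrasound backbone.
encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 64, 3, stride=2, padding=1))
loss_fn = LocalContrastiveLoss()
view_a, view_b = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
loss = loss_fn(encoder(view_a), encoder(view_b))
loss.backward()
```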

To improve model transparency, we incorporate clinician-driven insights into the explanation process. We define explainability as the model’s ability to capture anatomy-aware knowledge. To evaluate this, we introduce a novel set of quantitative metrics that measure the alignment between learned representations and expert gaze-tracking data. These gaze-guided explanations provide a more clinically meaningful assessment of model behaviour, thereby enhancing explainability.
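As an illustration of how alignment between a model's attention and expert gaze might be quantified, the sketch below compares a saliency map with a gaze heatmap using KL divergence and Pearson correlation. These specific metric choices and function names are assumptions for illustration, not necessarily the metrics defined in the thesis.

```python
# Illustrative sketch of quantifying agreement between a model saliency map and
# a sonographer gaze heatmap (hypothetical metric choices; the thesis defines
# its own set of gaze-based alignment metrics).
import numpy as np

def normalize_map(m: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Rescale a non-negative map so it sums to 1 (treat it as a distribution)."""
    m = np.clip(m, 0, None)
    return m / (m.sum() + eps)

def gaze_kl(saliency: np.ndarray, gaze: np.ndarray, eps: float = 1e-8) -> float:
    """KL divergence from the gaze distribution to the saliency distribution (lower = better)."""
    p, q = normalize_map(gaze), normalize_map(saliency)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def gaze_correlation(saliency: np.ndarray, gaze: np.ndarray) -> float:
    """Pearson correlation between the two maps (higher = better alignment)."""
    s, g = saliency.ravel(), gaze.ravel()
    s = (s - s.mean()) / (s.std() + 1e-8)
    g = (g - g.mean()) / (g.std() + 1e-8)
    return float(np.mean(s * g))

# Toy usage with random maps standing in for real saliency and gaze data.
saliency = np.random.rand(224, 224)
gaze_heatmap = np.random.rand(224, 224)
print(gaze_kl(saliency, gaze_heatmap), gaze_correlation(saliency, gaze_heatmap))
```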

While explainability provides valuable insights, reliable deployment is equally critical in clinical applications. To this end, we introduce the concept of out-of-clinical-distribution (OCD), defined as US frames lacking diagnostically meaningful content. We propose the first method for OCD detection in fetal US, enabling models to filter out irrelevant frames from large, heterogeneous real-time scans. This method enhances the safe and reliable deployment of DL models by ensuring that decisions are based only on clinically significant inputs.
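A minimal sketch of how such frame filtering could look in practice, assuming a per-frame relevance scorer is already available; the scorer, threshold, and function names below are placeholders for illustration rather than the proposed OCD detection method.

```python
# Sketch of threshold-based filtering of non-diagnostic frames, assuming some
# scalar "clinical relevance" score per frame; the scoring model here is a
# placeholder, not the thesis's detector.
import torch
import torch.nn as nn

@torch.no_grad()
def filter_clinical_frames(frames: torch.Tensor,
                           scorer: nn.Module,
                           threshold: float = 0.5) -> torch.Tensor:
    """Keep only frames whose predicted in-clinical-distribution score exceeds a threshold.

    frames: (T, C, H, W) tensor of video frames.
    scorer: model mapping a batch of frames to one logit per frame.
    """
    scores = torch.sigmoid(scorer(frames)).squeeze(-1)  # (T,) scores in [0, 1]
    keep = scores > threshold
    return frames[keep]

# Toy usage: a linear probe on flattened frames stands in for a real detector.
scorer = nn.Sequential(nn.Flatten(), nn.Linear(1 * 32 * 32, 1))
video = torch.randn(100, 1, 32, 32)
kept = filter_clinical_frames(video, scorer, threshold=0.5)
print(f"kept {kept.shape[0]} of {video.shape[0]} frames")
```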

In summary, this thesis presents novel methods to enhance the robustness, explainability, and reliability of DL models for fetal US analysis. All approaches are validated on retrospective clinical data, laying the groundwork for more trustworthy integration of DL technologies in prenatal care and inspiring future research in this domain.

Authors

Institution: University of Oxford
Division: MPLS
Department: Engineering Science
Role: Author

Contributors

Institution: University of Birmingham
Role: Supervisor

Institution: University of Oxford
Division: MPLS
Department: Engineering Science
Sub-department: Institute of Biomedical Engineering
Role: Supervisor


Programme: Studentship in Health Data Science CDT


DOI:
Type of award: DPhil
Level of award: Doctoral
Awarding institution: University of Oxford

Language: English
Keywords:
Deposit date: 2025-08-12
