
Thesis

Learning distributional uncertainty estimation for semantic segmentation

Abstract:

Given that robots take consequential actions in the real world, their deployment should, insofar as possible, be safe and trustworthy by design. Accordingly, this thesis tackles the problem of distributional shift, which occurs when a deep learning system is exposed to data that differs from the distribution it was trained on, and which can result in unpredictable and unintended behaviour. For the task of semantic segmentation, this thesis investigates how a system can detect errors that occur due to distributional shift, in order to prevent such dangerous deployment scenarios.

After a discussion of the nature of distributional uncertainty, i.e. the uncertainty that causes errors under distributional shift, and of the existing literature, this thesis presents three methods that perform distributional uncertainty estimation alongside semantic segmentation on driving data.

The first method frames the task as large-scale out-of-distribution detection, using a large-scale image dataset to train a segmentation neural network to separate in-distribution from out-of-distribution training instances. The training method combines a contrastive loss function with a data augmentation procedure that reduces the difference in appearance between in-distribution and out-of-distribution instances.
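
The abstract does not specify how this training signal is implemented, so the following PyTorch-style sketch is only one plausible reading: the paste-based augmentation, the centroid-based contrastive loss, and the names `paste_ood_object` and `contrastive_ood_loss` are illustrative assumptions, not the thesis's actual method.

```python
# Illustrative sketch only: one way to pair an appearance-reducing
# augmentation with a contrastive ID/OOD pixel separation loss.
import torch
import torch.nn.functional as F

def paste_ood_object(id_image, ood_image, mask):
    """Blend an out-of-distribution crop into an in-distribution scene.

    id_image, ood_image: (3, H, W) tensors; mask: (H, W) bool tensor
    marking the pasted OOD pixels.
    """
    alpha = mask.float().unsqueeze(0).unsqueeze(0)        # (1, 1, H, W)
    # Soften the mask edges so the pasted region has no sharp boundary,
    # reducing the low-level appearance gap between ID and OOD pixels.
    alpha = F.avg_pool2d(alpha, kernel_size=5, stride=1, padding=2)
    alpha = alpha.squeeze(0)                              # (1, H, W)
    return alpha * ood_image + (1 - alpha) * id_image

def contrastive_ood_loss(embeddings, ood_mask, margin=1.0):
    """Pull ID pixel embeddings together, push OOD pixels away.

    embeddings: (C, H, W) per-pixel features; ood_mask: (H, W) bool.
    """
    feats = embeddings.flatten(1).t()                     # (H*W, C)
    labels = ood_mask.flatten()
    id_feats, ood_feats = feats[~labels], feats[labels]
    if id_feats.numel() == 0 or ood_feats.numel() == 0:
        return feats.sum() * 0.0                          # degenerate batch
    centroid = id_feats.mean(dim=0)
    # ID pixels cluster around their centroid ...
    pull = (id_feats - centroid).pow(2).sum(dim=1).mean()
    # ... while OOD pixels are pushed at least `margin` away from it.
    push = F.relu(margin - (ood_feats - centroid).pow(2).sum(dim=1)).mean()
    return pull + push
```

The softened paste boundary is one way to interpret "reduces the difference in appearance": with no sharp blending artefact to latch onto, the network must separate the pasted pixels by semantic content rather than by low-level cues.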

The second method builds on lessons from the first: rather than relying on data augmentation, it uses out-of-distribution training images that are inherently less distributionally shifted from the in-distribution images. This makes the task of separating the two more challenging, and the learned uncertainty estimation therefore more robust. To this end, the method uses an unlabelled, distributionally-shifted driving dataset and proposes a training procedure that accounts for the lack of labels.
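
Since the abstract leaves this training procedure unspecified, the sketch below shows one common way to cope with missing labels that fits the description: segmentation is supervised only on the labelled source domain, while the domain identity itself (source vs. shifted) supervises a per-pixel uncertainty head. The `TwoHeadSegNet` stub and the `training_step` function are hypothetical stand-ins, not the thesis's architecture.

```python
# Illustrative sketch only: domain identity as a substitute for
# missing uncertainty labels on the shifted driving dataset.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadSegNet(nn.Module):
    """Toy stand-in for any backbone with a segmentation head and an
    additional per-pixel uncertainty head."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.backbone = nn.Conv2d(3, 32, 3, padding=1)
        self.seg_head = nn.Conv2d(32, num_classes, 1)
        self.unc_head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        return self.seg_head(h), self.unc_head(h).squeeze(1)

def training_step(model, src_images, src_labels, shifted_images, lam=0.5):
    # Supervised segmentation loss on the labelled source domain only.
    seg_logits, src_unc = model(src_images)
    seg_loss = F.cross_entropy(seg_logits, src_labels, ignore_index=255)

    # No labels exist for the shifted domain, so use the domain itself as
    # the signal: source pixels target low uncertainty, shifted pixels high.
    _, shifted_unc = model(shifted_images)
    unc_loss = (
        F.binary_cross_entropy_with_logits(src_unc, torch.zeros_like(src_unc))
        + F.binary_cross_entropy_with_logits(shifted_unc, torch.ones_like(shifted_unc))
    )
    return seg_loss + lam * unc_loss
```

Treating every shifted pixel as uncertain is a deliberately coarse signal, which is consistent with the abstract's point: because the shifted images resemble the in-distribution ones, the network cannot rely on obvious appearance differences and must learn subtler cues.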

Finally, the third method combines ideas from the previous two approaches by using both large-scale image data to learn a general feature representation and an unlabelled distributionally-shifted driving dataset to tailor this representation to distributional uncertainty estimation for driving images.
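
A hypothetical two-stage schedule along these lines could look as follows; it composes the interfaces of the two sketches above, and every loader, loss weighting, and hyper-parameter here is an assumption for illustration, not a detail taken from the thesis.

```python
# Illustrative sketch only: stage 1 learns a general representation from
# large-scale data with pasted-OOD pixel labels; stage 2 tailors it to
# driving imagery using the unlabelled shifted dataset (as above).
import torch
import torch.nn.functional as F

def train_two_stage(model, large_scale_loader, src_loader, shifted_loader,
                    lr=1e-4, lam=0.5):
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    # Stage 1: separate in-distribution pixels from pasted OOD pixels in
    # large-scale image data to learn a general feature representation.
    for images, ood_masks in large_scale_loader:   # ood_masks: (N, H, W) bool
        _, unc = model(images)
        loss = F.binary_cross_entropy_with_logits(unc, ood_masks.float())
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: tailor the representation to driving images; the unlabelled
    # shifted domain supervises only the uncertainty head.
    for (src_images, src_labels), shifted_images in zip(src_loader, shifted_loader):
        seg_logits, src_unc = model(src_images)
        _, shf_unc = model(shifted_images)
        loss = (
            F.cross_entropy(seg_logits, src_labels, ignore_index=255)
            + lam * F.binary_cross_entropy_with_logits(src_unc, torch.zeros_like(src_unc))
            + lam * F.binary_cross_entropy_with_logits(shf_unc, torch.ones_like(shf_unc))
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
```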

Authors


Institution: University of Oxford
Division: MPLS
Department: Engineering Science
Role: Author

Contributors

Institution: University of Oxford
Division: MPLS
Department: Engineering Science
Role: Supervisor

Institution: University of Oxford
Division: MPLS
Department: Engineering Science
Role: Supervisor


DOI:
Type of award: DPhil
Level of award: Doctoral
Awarding institution: University of Oxford

