Thesis

Developing trustworthy language models with consistent predictions

Abstract:

Transformer-based Pre-trained Language Models (PLMs), trained on vast amounts of natural language text, have propelled rapid progress in Natural Language Processing (NLP). These models have been effectively employed across diverse downstream tasks in various ways, such as fine-tuning and few-shot learning. Notably, they have demonstrated promising performance, even surpassing human capabilities on several downstream tasks. Derived from these outstand...


Authors

Institution: University of Oxford
Division: MPLS
Department: Computer Science
Role: Author

Contributors

Institution: University of Oxford
Division: MPLS
Department: Computer Science
Role: Supervisor
ORCID: 0000-0002-7644-1668

Institution: University of Oxford
Division: MPLS
Department: Computer Science
Role: Supervisor


DOI:
Type of award: DPhil
Level of award: Doctoral
Awarding institution: University of Oxford


Language: English
Keywords:
Subjects:
Deposit date: 2025-04-14
