
Thesis

A minimalist approach to deep multi-task learning

Abstract:

Multi-task learning is critical for real-life applications of machine learning. Modern approaches are characterised by often unjustified algorithmic complexity, leading to impractical solutions. In contrast, this thesis demonstrates that a minimalist alternative is possible, showing the attractiveness of simple methods. 'In defence of the Unitary Scalarisation for Deep Multi-task Learning' motivates the rest of the thesis, showing that none of the more complex multi-task optimisers outperforms simple per-task gradient summation when compared on fair grounds. Furthermore, it proposes a novel view of multi-task optimisers from the regularisation standpoint. The rest of the thesis focuses on deep reinforcement learning, a general framework for sequential decision-making. In particular, we look at the setting where observations (inputs to the model) are represented as graphs, i.e., collections of interconnected nodes. In 'Scaling GNNs to High-Dimensional Continuous Control' and 'The Role of Morphology in Graph-Based Incompatible Control', we learn a single control policy for agents of different morphologies by representing the elements of the observation set as graphs and deploying graph neural networks (including transformers). In the former chapter, we devise a simple method to scale graph networks by freezing parts of the network to stabilise learning and prevent overfitting. In the latter chapter, we show that graph connectivity might be suboptimal for the downstream task, demonstrating that less-constrained transformers perform significantly better even without the graph connectivity information. Finally, in 'Generalisable Branching Heuristic for a SAT Solver', we apply multi-task reinforcement learning to Boolean satisfiability, a fundamental problem in academia and industry. We demonstrate that Q-learning, a staple reinforcement learning algorithm, equipped with graph neural networks for function approximation, can learn a generalisable branching heuristic.
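The per-task gradient summation (unitary scalarisation) baseline referenced above amounts to summing the per-task losses and taking a single gradient step with a standard optimiser, with no gradient surgery or task-specific weighting. Below is a minimal PyTorch-style sketch of one such update, assuming a shared model and per-task (batch, loss function) pairs; all names are illustrative placeholders, not the thesis's actual code.

```python
import torch

def multitask_update(model, optimiser, batches, loss_fns):
    """One unitary-scalarisation step: sum per-task losses, backprop once.

    batches  -- iterable of (inputs, targets) pairs, one per task (placeholder)
    loss_fns -- per-task loss functions, same order as batches (placeholder)
    """
    optimiser.zero_grad()
    total_loss = 0.0  # becomes a tensor after the first task's loss is added
    for (inputs, targets), loss_fn in zip(batches, loss_fns):
        total_loss = total_loss + loss_fn(model(inputs), targets)
    # The gradient of the summed loss equals the sum of the per-task gradients.
    total_loss.backward()
    optimiser.step()
    return float(total_loss)
```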

We hope our findings will steer further development of the field: creating more complex benchmarks, adding assumptions about task similarity and model capacity, and exploring objective functions other than average performance across tasks.


Authors


Kurin, V
Division:
MPLS
Department:
Engineering Science
Role:
Author

Contributors

Institution:
University of Oxford
Division:
MPLS
Department:
Computer Science
Role:
Supervisor


Funding
Funding agency for:
Kurin, V
Programme:
Autonomous Intelligent Machines and Systems (EPSRC Centre for Doctoral Training)


DOI:
Type of award:
DPhil
Level of award:
Doctoral
Awarding institution:
University of Oxford
