
Conference item

Deep Frank-Wolfe for neural network optimization

Abstract:

Learning a deep neural network requires solving a challenging optimization problem: it is a high-dimensional, non-convex and non-smooth minimization problem with a large number of terms. The current practice in neural network optimization is to rely on the stochastic gradient descent (SGD) algorithm or its adaptive variants. However, SGD requires a hand-designed schedule for the learning rate. In addition, its adaptive variants tend to produce solutions that generalize less well on unseen dat...
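The abstract notes that SGD requires a hand-designed learning-rate schedule; the Frank-Wolfe family of methods named in the title instead derives its step size in closed form at each iteration. As a hedged illustration of that idea only, the sketch below runs the classical Frank-Wolfe (conditional gradient) method on a toy constrained quadratic; it is not the paper's DFW algorithm, and the objective, the simplex constraint set, and the helper name `frank_wolfe_simplex` are illustrative assumptions.

```python
# Illustrative sketch: classical Frank-Wolfe (conditional gradient).
# NOT the paper's Deep Frank-Wolfe method -- a toy example showing how
# the step size gamma_t = 2 / (t + 2) comes for free, with no
# hand-tuned learning-rate schedule.

def frank_wolfe_simplex(c, steps=2000):
    """Minimize f(x) = sum_i (x_i - c_i)^2 over the probability simplex."""
    n = len(c)
    x = [1.0] + [0.0] * (n - 1)  # start at a vertex of the simplex
    for t in range(steps):
        grad = [2.0 * (xi - ci) for xi, ci in zip(x, c)]
        # Linear minimization oracle over the simplex: the best vertex
        # is the coordinate with the smallest gradient entry.
        j = min(range(n), key=lambda i: grad[i])
        gamma = 2.0 / (t + 2.0)  # closed-form step size, no tuning
        x = [(1.0 - gamma) * xi for xi in x]
        x[j] += gamma
    return x

x = frank_wolfe_simplex([0.2, 0.3, 0.5])
```

Because `c` here lies inside the simplex, the iterates converge to `c` itself, and every iterate stays feasible (its entries sum to 1) by construction.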

Publication status: Published
Peer review status: Peer reviewed
Version: Accepted Manuscript

Authors

Institution: University of Oxford
Division: MPLS Division
Department: Engineering Science
Role: Author

Institution: University of Oxford
Division: MPLS Division
Department: Engineering Science
Role: Author

Institution: University of Oxford
Division: MPLS Division
Department: Engineering Science
Oxford college: Lady Margaret Hall
Role: Author
Publisher: International Conference on Learning Representations
Publication date: 2019-02-22
Acceptance date: 2018-12-20
Pubs id: pubs:956242
URN: uri:ff779222-6bce-4344-b9ce-4ec304443396
UUID: uuid:ff779222-6bce-4344-b9ce-4ec304443396
Local pid: pubs:956242
