Journal article
Mean field analysis of deep neural networks
- Abstract:
- We analyze multilayer neural networks in the asymptotic regime of simultaneously (a) large network sizes and (b) large numbers of stochastic gradient descent training iterations. We rigorously establish the limiting behavior of the multilayer neural network output. The limit procedure is valid for any number of hidden layers, and it naturally also describes the limiting behavior of the training loss. The ideas that we explore are to (a) take the limits of each hidden layer sequentially and (b) characterize the evolution of parameters in terms of their initialization. The limit satisfies a system of deterministic integro-differential equations. The proof uses methods from weak convergence and stochastic analysis. We show that, under suitable assumptions on the activation functions and the behavior for large times, the limit neural network recovers a global minimum (with zero loss for the objective function).
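The abstract's "mean field" regime can be illustrated with a minimal sketch that is not taken from the paper: assuming a single hidden layer, tanh activation, and standard Gaussian initialization, the 1/N (mean-field) normalization makes the randomly initialized network output concentrate around a deterministic limit as the width N grows, which is the kind of limiting behavior the article makes rigorous for multiple layers.

```python
import numpy as np

def mean_field_output(n_hidden, x, rng):
    """One-hidden-layer network in the mean-field scaling:
    f(x) = (1/N) * sum_i c_i * tanh(w_i . x).
    The 1/N factor (rather than 1/sqrt(N)) is what produces a
    deterministic large-width limit via a law of large numbers."""
    d = x.shape[0]
    W = rng.standard_normal((n_hidden, d))  # input weights w_i
    c = rng.standard_normal(n_hidden)       # output weights c_i
    return float((c * np.tanh(W @ x)).mean())

x = np.ones(3)
# Sample the output at initialization over many seeds, for small and large widths.
outputs = {
    N: [mean_field_output(N, x, np.random.default_rng(s)) for s in range(50)]
    for N in (10, 10_000)
}
spread_small = np.std(outputs[10])
spread_large = np.std(outputs[10_000])
# The spread across random initializations shrinks (roughly like 1/sqrt(N))
# as the width grows: the output concentrates around its mean-field limit.
```

This only illustrates the scaling at initialization for one hidden layer; the article itself treats any number of hidden layers, taking the width limits sequentially, and tracks the full SGD training dynamics.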
- Publication status:
- Published
- Peer review status:
- Peer reviewed
- Files:
- Accepted manuscript (451.8 KB)
- Publisher copy:
- 10.1287/moor.2020.1118
- Publisher:
- INFORMS
- Journal:
- Mathematics of Operations Research
- Volume:
- 47
- Issue:
- 1
- Pages:
- 120-152
- Publication date:
- 2021-04-21
- Acceptance date:
- 2020-09-24
- DOI:
- 10.1287/moor.2020.1118
- EISSN:
- 1526-5471
- ISSN:
- 0364-765X
- Language:
- English
- Keywords:
- Pubs id:
- 1140068
- Local pid:
- pubs:1140068
- Deposit date:
- 2020-10-28
Terms of use
- Copyright holder:
- INFORMS
- Copyright date:
- 2021
- Rights statement:
- Copyright © 2021, INFORMS.
- Notes:
- This is the accepted manuscript version of the article. The final version is available online from INFORMS at: https://doi.org/10.1287/moor.2020.1118