Journal article
Bayesian learning via stochastic gradient Langevin dynamics
- Abstract: In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesian posterior sampling provides an inbuilt protection against overfitting. We also propose a practical method for Monte Carlo estimates of posterior statistics which monitors a "sampling threshold" and collects samples after it has been surpassed. We apply the method to three models: a mixture of Gaussians, logistic regression and ICA with natural gradients.
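The update the abstract describes is a single line: a stochastic gradient step on the log-posterior plus injected Gaussian noise whose variance equals the stepsize. The following is a minimal sketch, not the authors' code: it assumes NumPy, a synthetic 1D Gaussian model with a standard-normal prior on its mean (so the exact posterior is available for comparison), and the polynomial schedule eps_t = a * (b + t)^(-gamma) from the paper; the constants a, b, gamma, the batch size, and the burn-in cutoff are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations x_i ~ N(true_mu, 1), with a N(0, 1)
# prior on the unknown mean, so the exact posterior is Gaussian too.
N, true_mu = 10_000, 2.0
x = rng.normal(true_mu, 1.0, size=N)

def grad_log_prior(theta):
    return -theta                      # d/dtheta log N(theta | 0, 1)

def grad_log_lik(theta, batch):
    return np.sum(batch - theta)       # d/dtheta sum_i log N(x_i | theta, 1)

# SGLD update: theta += (eps/2) * (prior grad + (N/n) * mini-batch grad)
#                     + N(0, eps) injected noise, with the stepsize
#                     annealed as eps_t = a * (b + t)^(-gamma).
theta, n = 0.0, 200
a, b, gamma = 1e-4, 10.0, 0.55         # illustrative schedule constants
samples = []
for t in range(10_000):
    eps = a * (b + t) ** (-gamma)
    batch = rng.choice(x, size=n, replace=False)
    grad = grad_log_prior(theta) + (N / n) * grad_log_lik(theta, batch)
    theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    if t >= 2_000:                     # discard burn-in iterates
        samples.append(theta)

# The conjugate posterior is N(sum(x) / (N + 1), 1 / (N + 1)); the SGLD
# sample moments should roughly match it.
print("SGLD :", np.mean(samples), np.std(samples))
print("exact:", np.sum(x) / (N + 1), (1.0 / (N + 1)) ** 0.5)
```

The transition the abstract mentions follows from the noise scales: the mini-batch gradient noise enters at order eps_t squared while the injected noise has variance eps_t, so as the stepsize anneals the injected Langevin noise dominates and the iterates can be treated as approximate posterior samples rather than optimizer steps.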
Authors
- Max Welling
- Yee Whye Teh
Bibliographic Details
- Journal: Proceedings of the 28th International Conference on Machine Learning, ICML 2011
- Pages: 681-688
- Publication date: 2011-01-01
Item Description
- Language: English
- Pubs id: pubs:353219
- UUID: uuid:13ec7031-519b-4223-81cb-dd8de9213836
- Local pid: pubs:353219
- Source identifiers: 353219
- Deposit date: 2013-11-16
Terms of use
- Copyright date: 2011