Conference item
A tutorial on stochastic approximation algorithms for training Restricted Boltzmann Machines and Deep Belief Nets
- Abstract:
- In this study, we provide a direct comparison of the Stochastic Maximum Likelihood algorithm and Contrastive Divergence for training Restricted Boltzmann Machines using the MNIST data set. We demonstrate that Stochastic Maximum Likelihood is superior when using the Restricted Boltzmann Machine as a classifier, and that the algorithm can be greatly improved using the technique of iterate averaging from the field of stochastic approximation. We further show that training with optimal parameters for classification does not necessarily lead to optimal results when Restricted Boltzmann Machines are stacked to form a Deep Belief Network. In our experiments we observe that fine-tuning a Deep Belief Network significantly changes the distribution of the latent data, even though the parameter changes are negligible.
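The sketch below is not the authors' code; it is a minimal illustration, under assumed shapes and hyperparameters, of how a Contrastive Divergence (CD-1) negative phase differs from a Stochastic Maximum Likelihood (persistent-chain) negative phase for a binary RBM, with Polyak-style iterate averaging of the weights as mentioned in the abstract. Biases are omitted for brevity.

```python
# Minimal sketch (assumptions: MNIST-sized visible layer, toy random "data",
# no bias terms) contrasting CD-1 and SML/PCD gradient estimates for a binary
# RBM, plus iterate (Polyak) averaging of the weight matrix.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, batch = 784, 500, 64
W = 0.01 * rng.standard_normal((n_vis, n_hid))
W_avg = W.copy()                                   # running iterate average
persistent_v = rng.integers(0, 2, (batch, n_vis)).astype(float)  # SML chains

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h(v):
    p = sigmoid(v @ W)
    return p, (rng.random(p.shape) < p).astype(float)

def sample_v(h):
    p = sigmoid(h @ W.T)
    return p, (rng.random(h.shape[:1] + (n_vis,)) < p).astype(float)

def grad(v_data, v_model):
    """Positive phase from data, negative phase from model samples."""
    ph_data, _ = sample_h(v_data)
    ph_model, _ = sample_h(v_model)
    return (v_data.T @ ph_data - v_model.T @ ph_model) / len(v_data)

def cd1_negative(v_data):
    """CD-1: one Gibbs step started from the data."""
    _, h = sample_h(v_data)
    _, v = sample_v(h)
    return v

def sml_negative():
    """SML/PCD: one Gibbs step on persistent chains kept across updates."""
    global persistent_v
    _, h = sample_h(persistent_v)
    _, persistent_v = sample_v(h)
    return persistent_v

lr = 0.05
for t in range(1, 201):                            # toy training loop
    v_data = rng.integers(0, 2, (batch, n_vis)).astype(float)
    W += lr * grad(v_data, sml_negative())         # swap in cd1_negative(v_data) for CD
    W_avg += (W - W_avg) / t                       # iterate averaging of the weights
```

In practice one would evaluate or classify with `W_avg` rather than the raw iterate `W`, which is the point of the averaging technique referenced above.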
- Host title:
- Information Theory and Applications Workshop (ITA)
- Publication date:
- 2010-01-01
- UUID:
- uuid:998bc49a-d82e-4a5c-80eb-e16ed222434f
- Local pid:
- cs:7469
- Deposit date:
- 2015-03-31
Terms of use
- Copyright date:
- 2010