Automated adjustment system of restricted Boltzmann machine
DOI: https://doi.org/10.18372/1990-5548.60.13814
Keywords: Deep Belief Network, Restricted Boltzmann Machine, Contrastive Divergence, Persistent Contrastive Divergence, Parallel Tempering
Abstract
This paper considers the problem of training a deep belief neural network with the help of a restricted Boltzmann machine and of choosing an optimal algorithm for its training. Different training algorithms for the restricted Boltzmann machine, used for pre-training the deep belief neural network, are compared in order to increase the efficiency of this network and subsequently solve the problem of structural-parametric synthesis of the deep belief neural network. This amounts to justifying the need for an optimal choice of the restricted Boltzmann machine adjustment algorithm in order to improve the quality of training of the neural network. To solve this problem, it is proposed to create an automated adjustment system for the restricted Boltzmann machine that selects the optimal training algorithm for this neural network.
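For reference, Contrastive Divergence is one of the candidate pre-training algorithms named in the keywords. Below is a minimal sketch of a single CD-1 update for a binary-binary restricted Boltzmann machine; the function names, array shapes, and learning rate are illustrative assumptions and do not reproduce the adjustment system described in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.01, rng=None):
    """One illustrative CD-1 weight update for a binary-binary RBM.

    v0 : (batch, n_visible) binary training vectors
    W  : (n_visible, n_hidden) weight matrix
    b  : (n_visible,) visible biases
    c  : (n_hidden,) hidden biases
    """
    rng = rng or np.random.default_rng()

    # Positive phase: hidden activations and samples given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(v0.dtype)

    # Negative phase: one Gibbs step back to the visible layer and up again.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(v0.dtype)
    ph1 = sigmoid(v1 @ W + c)

    # Gradient approximation: data statistics minus reconstruction statistics.
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

The other algorithms compared in the paper differ mainly in how the negative-phase samples are produced: Persistent Contrastive Divergence keeps a persistent Markov chain between updates, and Parallel Tempering draws them from several chains at different temperatures, while an update rule of the form above stays the same.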