STRUCTURAL-PARAMETRIC SYNTHESIS OF NEURAL NETWORK ENSEMBLE BASED ON THE ESTIMATION OF INDIVIDUAL CONTRIBUTION
DOI: https://doi.org/10.18372/1990-5548.59.13642

Keywords: structural-parametric synthesis, neural networks, ensemble, individual contribution, classification

Abstract
The article presents the structural-parametric synthesis of an ensemble of neural networks of various architectures based on their individual contributions. The topology and learning algorithm of each classifier are considered. The algorithm for calculating the individual contribution of each network is described, together with the algorithm for selecting networks for the ensemble according to the criteria of accuracy and diversity. To simplify the structure of the ensemble, the Complementary Measure method is used. The results of training the classifiers on bootstrap samples are presented, and the results of the ensemble are compared with the corresponding results of each constituent neural network taken separately.
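As a rough illustration of the selection idea summarized above (not the authors' published algorithm), the sketch below trains neural networks of different widths on bootstrap samples, estimates each member's individual contribution as the leave-one-out change in ensemble accuracy on a validation set, and prunes members that do not contribute. The function names, the majority-voting combiner, and the pruning threshold are illustrative assumptions; the paper's Complementary Measure method and exact contribution formula are not reproduced here.

```python
# Minimal sketch, assuming: bootstrap-trained MLPs of varying width as the
# diverse pool, majority voting as the combiner, and leave-one-out accuracy
# change as the "individual contribution". All of these are assumptions made
# for illustration, not the article's exact procedure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

def majority_vote(members, X):
    """Combine member predictions by simple majority voting."""
    votes = np.stack([m.predict(X) for m in members])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

def ensemble_accuracy(members, X, y):
    return float(np.mean(majority_vote(members, X) == y))

def individual_contribution(members, X, y):
    """Contribution of member i: full-ensemble accuracy minus the accuracy
    of the ensemble with member i left out (leave-one-out estimate)."""
    full = ensemble_accuracy(members, X, y)
    return [full - ensemble_accuracy(members[:i] + members[i + 1:], X, y)
            for i in range(len(members))]

# Train candidate networks of varying width on bootstrap samples so the
# pool is diverse in both the data seen and the architecture used.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pool = []
for width in (8, 16, 32, 64, 128):
    Xb, yb = resample(X_tr, y_tr, random_state=width)  # bootstrap sample
    pool.append(MLPClassifier(hidden_layer_sizes=(width,), max_iter=500,
                              random_state=width).fit(Xb, yb))

# Prune members whose leave-one-out contribution is negative, i.e. members
# that do not complement the rest of the ensemble on the validation set.
scores = individual_contribution(pool, X_val, y_val)
selected = [m for m, s in zip(pool, scores) if s >= 0.0]
print("contributions:     ", np.round(scores, 3))
print("pool accuracy:     ", ensemble_accuracy(pool, X_val, y_val))
print("selected accuracy: ", ensemble_accuracy(selected, X_val, y_val))
```

The leave-one-out contribution is one common way to score members jointly on accuracy and diversity: a network that is accurate but redundant with the rest of the pool receives a near-zero score, while one that corrects the others' mistakes receives a positive score, which matches the selection criteria named in the abstract.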