ALGORITHM OF NEURON NETWORKS MODIFICATION

Authors

  • O. I. Chumachenko, National Technical University of Ukraine “Ihor Sikorsky Kyiv Polytechnic Institute”

DOI:

https://doi.org/10.18372/1990-5548.56.12936

Keywords:

Neural networks, optimization problem, hybrid multicriteria evolutionary algorithm, method of steepest descent, merging and growing algorithm

Abstract

The paper considers the problem of modifying a neural network whose topology has previously been chosen by solving an optimization problem for a given task. The proposed modification algorithm is a two-stage procedure that combines a genetic algorithm with a local optimization algorithm. The modification problem is split into two tasks: searching for the optimal neural network structure and adjusting the weight coefficients. These two tasks are solved by the two-stage algorithm: at the first stage a hybrid multicriteria evolutionary algorithm is applied, and at the second stage the values of the weight coefficients are determined using the error back-propagation method and the method of steepest descent. The optimal number of hidden layers is determined with an adaptive merging and growing algorithm.
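As a rough illustration of this two-stage procedure, the sketch below pairs a simple evolutionary search over the hidden-layer size (a stand-in for the hybrid multicriteria evolutionary algorithm, with the two criteria scalarized into a weighted sum) with steepest-descent weight adjustment of the selected structure. All function names, parameters, and the fitness weighting are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the two-stage procedure described in the abstract.
# Stage 1: evolutionary search over the hidden-layer size with a
# two-criteria fitness (training error and structural complexity).
# Stage 2: weight adjustment by steepest descent using back-propagated error.
# Names and parameters are hypothetical; this is not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def init_net(structure):
    """Random weights for a single-hidden-layer MLP: [n_in, n_hidden, n_out]."""
    n_in, n_hid, n_out = structure
    return [rng.normal(0, 0.5, (n_in, n_hid)), rng.normal(0, 0.5, (n_hid, n_out))]

def forward(W, X):
    h = np.tanh(X @ W[0])          # hidden-layer activations
    return h, h @ W[1]             # linear output layer

def mse(W, X, y):
    return float(np.mean((forward(W, X)[1] - y) ** 2))

def steepest_descent(W, X, y, lr=0.05, epochs=300):
    """Stage 2: move the weights along the negative gradient of the error."""
    for _ in range(epochs):
        h, out = forward(W, X)
        err = out - y                            # output-layer error
        grad_W1 = h.T @ err / len(X)             # gradient w.r.t. output weights
        delta_h = (err @ W[1].T) * (1 - h ** 2)  # error back-propagated to hidden layer
        grad_W0 = X.T @ delta_h / len(X)
        W[0] -= lr * grad_W0
        W[1] -= lr * grad_W1
    return W

def evolve_structure(X, y, pop_size=8, generations=10):
    """Stage 1: simple evolutionary search over the number of hidden neurons."""
    pop = [int(rng.integers(2, 16)) for _ in range(pop_size)]
    def fitness(n_hid):
        W = steepest_descent(init_net([X.shape[1], n_hid, y.shape[1]]), X, y, epochs=100)
        return mse(W, X, y) + 0.01 * n_hid       # weighted sum of the two criteria
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[: pop_size // 2]
        children = [max(2, p + int(rng.integers(-2, 3))) for p in parents]  # mutation
        pop = parents + children
    return sorted(pop, key=fitness)[0]

# Usage: approximate y = sin(x) with the structure chosen at stage 1.
X = np.linspace(-3, 3, 64).reshape(-1, 1)
y = np.sin(X)
best_hidden = evolve_structure(X, y)
W = steepest_descent(init_net([1, best_hidden, 1]), X, y)
print(f"hidden neurons: {best_hidden}, final MSE: {mse(W, X, y):.4f}")

The scalarized fitness above replaces a genuine multicriteria (Pareto) ranking, and the single mutation operator over the neuron count stands in for the adaptive merging and growing of hidden units described in the paper; both simplifications are made only to keep the example short and runnable.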

Author Biography

O. I. Chumachenko, National Technical University of Ukraine “Ihor Sikorsky Kyiv Polytechnic Institute”

Technical Cybernetic Department

Candidate of Science (Engineering), Associate Professor

References

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” in Parallel Distributed Processing, vol. I, D. E. Rumelhart and J. L. McClelland, Eds. Cambridge, MA: MIT Press, 1986, pp. 318–362.

T. Y. Kwok and D. Y. Yeung, “Constructive algorithms for structure learning in feedforward neural networks for regression problems,” IEEE Trans. Neural Netw., vol. 8, no. 3, pp. 630–645, May 1997.

R. Reed, “Pruning algorithms – A survey,” IEEE Trans. Neural Netw., vol. 4, no. 5, pp. 740–747, Sep. 1993.

F. Girosi, M. Jones, and T. Poggio, “Regularization theory and neural networks architectures,” Neural Comput., vol. 7, no. 2, pp. 219–269, Mar. 1995.

J. H. Holland, Adaptation in Natural and Artificial Systems. Ann Arbor, MI: Univ. Michigan Press, 1975.

L. J. Fogel, A. J. Owens, and M. J. Walsh, Artificial Intelligence Through Simulated Evolution. New York: Wiley, 1966.

D. B. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. New York: IEEE Press, 1995.

M. M. Islam, M. A. Sattar, M. F. Amin, X. Yao, and K. Murase, “A new adaptive merging and growing algorithm for designing artificial neural networks,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 3, pp. 705–709, June 2009.

V. M. Sineglazov, O. I. Chumachenko, and D. Koval, “Improvement of the hybrid genetic algorithm for the deep neural networks synthesis,” in Proc. IV Int. Scientific and Practical Conf. “Computing Intellect” (Kyiv, May 16–18, 2017), pp. 142–143.

Section

COMPUTER-AIDED DESIGN SYSTEMS