FORMATION OF A LEARNING SET FOR THE TASK OF IMAGE PROCESSING

Authors

  • O. I. Chumachenko, Technical Cybernetic Department
  • A. T. Kot, Technical Cybernetic Department

DOI:

https://doi.org/10.18372/1990-5548.65.14978

Keywords:

Transfer learning, learning set, convolutional neural networks, ensemble topology, image processing

Abstract

The problem of forming a training set for the task of image processing is considered. It is shown that this task is of great importance in the construction of intelligent medical diagnostic systems in which convolutional neural networks are used to process images (ultrasound, CT, and MRI results). Because the available training sample contains too few elements, it is proposed, on the one hand, to multiply the data artificially starting from an initial training sample of fixed volume and, on the other hand, to use methods that reduce the need for large training samples, both through an ensemble topology (hybrid neural networks) and through transfer learning. An algorithm for forming a training set for image processing tasks is developed; it is based on modifying the initial input information and computing a confidence measure for the resulting sample.
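
To make the two complementary ideas in the abstract concrete, artificial multiplication of a fixed-volume training sample and transfer learning from a pretrained convolutional network, the sketch below shows one common way of combining them in Keras. It is a minimal illustration rather than the algorithm developed in the paper: the image size, the number of classes, the dataset objects train_ds and val_ds, and the choice of ResNet50 as the pretrained backbone are assumptions, and the paper's confidence measure for the resulting sample is not reproduced here.

```python
# Sketch only: combines data augmentation ("artificial data multiplication")
# with transfer learning from an ImageNet-pretrained CNN. Image size, class
# count and dataset objects are illustrative assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)   # assumed input resolution
NUM_CLASSES = 3         # assumed number of diagnostic classes

# 1. Artificial data multiplication: random transforms applied on the fly
#    to the fixed-volume initial training sample.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.1),
])

# 2. Transfer learning: reuse convolutional features pretrained on ImageNet
#    and train only a small classification head on the medical images.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the transferred layers

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)                                      # multiply the data
x = tf.keras.applications.resnet50.preprocess_input(x)   # match pretraining
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds assumed
```

In an ensemble (hybrid) topology, several such networks with different backbones or augmentation policies would be trained and their predictions combined, which is the other route the abstract names for reducing the need for large training samples.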

Author Biographies

O. I. Chumachenko, Technical Cybernetic Department

National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

A. T. Kot, Technical Cybernetic Department

National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

References

K. Vorontsov, “Matematicheskie metody obucheniya po pretsedentam (teoriya obucheniya mashin)” [Mathematical Methods of Training on Precedents (the Theory of Machine Learning)]. Available at: http://www.machinelearning.ru/wiki/images/6/6d/Voron-ML-1.pdf (accessed September 2015). (in Russian)

V. P. Borovnikov, Statistica for Students and Engineers. Moscow: Computer Press, 2011, 301 p. (in Russian)

Yu. B. Kotov, New Mathematical Approaches to Problems of Medical Diagnostics. Moscow: Editorial URSS, 2004, 328 p. (in Russian)

M. V. Artemenko and A. S. Babkov, “Classification of methods for predicting the behavior of systems,” Modern Problems of Science and Education, no. 6, 2013. Available at: http://www.science-education.ru/113-11527 (accessed 19.07.2014).

N. A. Korenevsky and E. B. Ryabkova, "Method for the synthesis of fuzzy decision rules based on information about the geometric structure of multidimensional data," Bulletin of the Voronezh State Technical University, vol. 7, no. 8, pp. 128–137, 2011. (in Russian)

E. Mangalova and I. Petrunkina, “Forecasting the power output of wind power plants using the nonparametric k-nearest-neighbors algorithm,” Doklady vserossiyskoy nauchnoy konferentsii AIST’2013 [Reports of the All-Russian Scientific Conference AIST’2013], Ekaterinburg, 2013, pp. 1–8. (in Russian)

Labeled Faces in the Wild. Available at: http://vis-www.cs.umass.edu/lfw/

The Facial Recognition Technology (FERET) Database. Available at: http://www.itl.nist.gov/iad/humanid/feret/feret_master.html

R. Latypova, Neural Networks. Moscow: LAP Lambert Academic Publishing, 2012, 180 p. (in Russian)

R. Achanta et al., “Frequency-Tuned Salient Region Detection.” Available at: http://infoscience.epfl.ch/ (accessed 05.05.2019)

O. Canavet and F. Fleuret, “Efficient Sample Mining for Object Detection,” Proceedings of the Asian Conference on Machine Learning (ACML), 2014, pp. 48–63.

Amazon Mechanical Turk. Available at: https://www.mturk.com/mturk/welcome (accessed September 2015)

Michael Z. Zgurovsky, Victor M. Sineglazov, and Olena I. Chumachenko, Artificial Intelligence Systems Based on Hybrid Neural Networks. Springer, 2020, 390 p. Available at: https://link.springer.com/book/10.1007/978-3-030-48453-8

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks From Overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

L. Yaeger, R. Lyon, and B. Webb, “Effective Training of a Neural Network Character Classifier for Word Recognition,” NIPS, 1996.

D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber, “Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition,” Neural Computation, vol. 22, no. 12, 2010.

P. Y. Simard, D. Steinkraus, and J. C. Platt, “Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis,” Int’l Conf. Document Analysis and Recognition, 2003.

S. V. Kachalin, “Increasing the stability of training large neural networks by supplementing small training samples of parent examples with synthesized biometric descendant examples,” Proceedings of the Scientific and Technical Conference of the Cluster of Penza Enterprises Ensuring Information Technology Security, Penza, 2014, vol. 9, pp. 32–35. (in Russian)

A. V. Akimov and A. A. Sirota, “Models and algorithms for artificial data multiplication for training face recognition algorithms based on the Viola–Jones method,” Computer Optics, vol. 40, no. 6, pp. 911–918, 2016. (in Russian)

H. Guo and H. L. Viktor, “Learning from Imbalanced Data Sets with Boosting and Data Generation: The DataBoost-IM Approach,” ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 30–39, 2004.

N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: Synthetic Minority Oversampling Technique,” J. Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.

N. V. Chawla, A. Lazarevic, L. O. Hall, and K. W. Bowyer, “SMOTEBoost: Improving Prediction of the Minority Class in Boosting,” in Proc. 7th European Conference on Principles and Practice of Knowledge Discovery in Databases, Cavtat-Dubrovnik, Croatia, September 22–26, 2003, pp. 107–119.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.

K. Weiss, T. M. Khoshgoftaar, and D. Wang, “A survey of transfer learning,” Journal of Big Data, vol. 3, no. 9, pp. 1–40, 2016. doi:10.1186/s40537-016-0043-6

J. Zhang, W. Li, P. Ogunbona, and D. Xu, “Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective,” arXiv:1705.04396v3 [cs.CV], 20 May 2019.

Section

COMPUTER SCIENCES AND INFORMATION TECHNOLOGIES