On Noise Effect in Semi-supervised Learning
Keywords: data noise, machine learning, semi-supervised learning, support vector machines
The article addresses the effect of noise on semi-supervised learning. Its goal is to analyze the impact of noise on the accuracy of binary classification models built with three semi-supervised learning algorithms, namely Simple Recycled Selection, Incrementally Reinforced Selection, and the Hybrid Algorithm, each using Support Vector Machines as the base classifier. Three algorithms for computing similarity matrices, namely the Radial Basis Function, Cosine Similarity, and K-Nearest Neighbours, were analyzed to understand their effect on model accuracy. Datasets from the UCI repository were used for benchmarking. To test the noise effect, different amounts of artificially generated, randomly labeled samples were introduced into the dataset using three strategies (labeled, unlabeled, and mixed), and the resulting models were compared both to a baseline classifier trained on the original dataset and to a classifier trained on a reduced-size version of the original dataset. The results show that introducing random noise into the labeled samples decreases classifier accuracy, while a moderate amount of noise in the unlabeled samples can have a positive effect on classifier accuracy.
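The three noise-injection strategies described above can be sketched as follows. This is an illustrative sketch only: the function name, the uniform sampling of synthetic points, and the random binary labels are assumptions, and the paper's exact generation procedure may differ.

```python
import numpy as np

def inject_label_noise(X, y, fraction, strategy, seed=None):
    """Add randomly labeled synthetic samples to a binary dataset.

    strategy: 'labeled'   -> noisy samples join the labeled pool,
              'unlabeled' -> noisy samples join the unlabeled pool (labels discarded),
              'mixed'     -> half of the noisy samples go to each pool.
    Returns (X_labeled, y_labeled, X_unlabeled).
    """
    rng = np.random.default_rng(seed)
    n_noise = int(fraction * len(X))
    # Draw synthetic points uniformly inside the bounding box of X
    # and assign them random binary labels (hence "randomly labeled").
    lo, hi = X.min(axis=0), X.max(axis=0)
    X_noise = rng.uniform(lo, hi, size=(n_noise, X.shape[1]))
    y_noise = rng.integers(0, 2, size=n_noise)

    if strategy == "labeled":
        return (np.vstack([X, X_noise]),
                np.concatenate([y, y_noise]),
                np.empty((0, X.shape[1])))
    if strategy == "unlabeled":
        # Labels of the noisy samples are simply discarded.
        return X, y, X_noise
    if strategy == "mixed":
        half = n_noise // 2
        return (np.vstack([X, X_noise[:half]]),
                np.concatenate([y, y_noise[:half]]),
                X_noise[half:])
    raise ValueError(f"unknown strategy: {strategy!r}")
```

A noise level such as `fraction=0.1` would then correspond to adding synthetic samples amounting to 10% of the original dataset before training the semi-supervised classifier.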
P. K. Mallapragada, R. Jin, A. K. Jain, and Y. Liu, “SemiBoost: Boosting for semi-supervised learning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 11, pp. 2000–2014, Nov. 2009. https://doi.org/10.1109/TPAMI.2008.235
T.-B. Le and S.-W. Kim, “On incrementally using a small portion of strong unlabeled data for semi-supervised learning algorithms,” Pattern Recognition Letters, vol. 41, pp. 53–64, May 2014. https://doi.org/10.1016/j.patrec.2013.08.026
T.-B. Le and S.-W. Kim, “A Hybrid Selection Method of Helpful Unlabeled Data Applicable for Semi-Supervised Learning Algorithm,” IEIE Transactions on Smart Processing & Computing, vol. 3, no. 4, pp. 234–239, 2014. https://doi.org/10.5573/IEIESPC.2014.3.4.234
S. Suthaharan, “Support Vector Machine,” In: Machine Learning Models and Algorithms for Big Data Classification. Integrated Series in Information Systems, vol. 36, pp. 207–235, 2016. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7641-3_9
M. J. L. Orr, Introduction to Radial Basis Function Networks, 1996.
F. Rahutomo, T. Kitasuka, and M. Aritsugi, “Semantic cosine similarity,” in Proc. 7th International Student Conference on Advanced Science and Technology (ICAST), vol. 4, no. 1, 2012.
K. Yu, L. Ji, and X. Zhang, “Kernel Nearest-Neighbor Algorithm,” Neural Processing Letters, vol. 15, pp. 147–156, 2002. https://doi.org/10.1023/A:1015244902967
G. C. Cawley and N. L. C. Talbot, “Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters,” Journal of Machine Learning Research, vol. 8, pp. 841–861, April 2007.
O. Chapelle and A. Zien, “Semi-Supervised Classification by Low Density Separation,” in Proc. Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS 2005), 2005. https://doi.org/10.7551/mitpress/9780262033589.001.0001
D. Dua and C. Graff, UCI Machine Learning Repository, Irvine, CA: University of California, School of Information and Computer Science, 2019. [http://archive.ics.uci.edu/ml]