Optimizing Kubernetes Autoscaling with Artificial Intelligence

Authors

  • Olha Tkhai, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"
  • Nataliia Shapoval, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", https://orcid.org/0000-0002-8509-6886

DOI:

https://doi.org/10.18372/1990-5548.84.20186

Keywords:

autoscaling, Kubernetes, Kolmogorov–Arnold network, Fourier analysis network, transformer, time-series forecasting, long short-term memory

Abstract

This study explores how to improve Kubernetes autoscaling with artificial intelligence-based forecasting. The authors emphasize the limitations of traditional, reactive autoscaling methods, which lag behind rapid changes in demand, and propose a proactive approach that predicts future resource requirements. The paper presents a framework for integrating artificial intelligence-based predictions into the Kubernetes ecosystem to improve operational efficiency and resource utilization. The main challenges addressed are improving workload forecasting and mitigating the impact of random fluctuations in Kubernetes performance. To this end, time-series forecasting models are combined with data preprocessing techniques to predict future CPU utilization and thus inform scaling decisions before peaks or troughs in demand occur. The results show that artificial intelligence-based forecasting can significantly improve scaling accuracy, reduce latency, and optimize resource utilization in Kubernetes environments. Time-series models, including RNN, LSTM, and CNN-GRU architectures, are developed and evaluated on real CPU utilization data from a Kubernetes cluster. The study also explores newer architectures such as the Fourier Analysis Network and the Kolmogorov–Arnold Network, and their integration with the transformer model. Overall, the proposed approach aims to improve resource efficiency and application reliability in Kubernetes through proactive autoscaling.
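
To make the approach concrete, the sketch below (not the authors' implementation) illustrates the pipeline described in the abstract: a noisy CPU utilization series is smoothed with an FFT low-pass filter, a small LSTM is trained to forecast the next utilization value, and the forecast is fed into an HPA-style ratio formula to choose a replica count ahead of the predicted load. The synthetic series, the cutoff frequency, the window length, the target per-pod utilization, and the current replica count are all hypothetical placeholders.

```python
# Minimal illustrative sketch: FFT denoising + LSTM forecast + proactive replica choice.
# Data, thresholds, and hyperparameters are placeholders, not values from the paper.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# --- hypothetical CPU utilization series (fraction of capacity), regular samples ---
t = np.arange(2000)
cpu = 0.5 + 0.3 * np.sin(2 * np.pi * t / 288) + 0.05 * rng.standard_normal(t.size)

# --- FFT-based denoising: zero out high-frequency components above a cutoff ---
spectrum = np.fft.rfft(cpu)
freqs = np.fft.rfftfreq(cpu.size, d=1.0)
spectrum[freqs > 0.02] = 0              # cutoff chosen for illustration only
cpu_smooth = np.fft.irfft(spectrum, n=cpu.size)

# --- sliding windows: predict the next sample from the last 48 ---
window = 48
X = np.stack([cpu_smooth[i:i + window] for i in range(cpu_smooth.size - window)])
y = cpu_smooth[window:]
X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, window, 1)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)   # (N, 1)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # last time step -> next value

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                   # a few full-batch epochs suffice for a sketch
    opt.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    opt.step()

# --- proactive scaling decision from the forecast (HPA-style ratio formula) ---
with torch.no_grad():
    predicted_cpu = model(X_t[-1:]).item()

target_per_pod = 0.6                     # hypothetical target utilization per pod
current_pods = 3
desired_pods = max(1, int(np.ceil(current_pods * predicted_cpu / target_per_pod)))
print(f"forecast CPU={predicted_cpu:.2f}, scale to {desired_pods} pods")
```

In a real deployment the forecast could be exposed as a custom or external metric so that the Horizontal Pod Autoscaler, or a dedicated controller, scales the workload before the predicted peak arrives rather than reacting to it.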

Author Biographies

Olha Tkhai, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"

Bachelor's degree student

Department of Artificial Intelligence

Institute of Applied Systems Analysis 

Nataliia Shapoval, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"

Candidate of Sciences (Engineering)

References

AI-Powered Predictive Scaling in Kubernetes: Reducing Cloud Costs While Maintaining High Availability. URL: https://dev.to/sarthakkarora/ai-powered-predictive-scaling-in-kubernetes-reducing-cloud-costs-while-maintaining-high-4ah0.

Autoscaling Workloads. URL: https://kubernetes.io/docs/concepts/workloads/autoscaling/.

Autoscaling in Kubernetes: Why doesn’t the Horizontal Pod Autoscaler work for me? URL: https://medium.com/expedia-group-tech/autoscaling-in-kubernetes-why-doesnt-the-horizontal-pod-autoscaler-work-for-me-5f0094694054.

saibot94, cpu-dataset-prometheus: Anonymized CPU usage dataset. GitHub. URL: https://github.com/saibot94/cpu-dataset-prometheus.

A. Angel, (2024). Denoising Data with Fast Fourier Transform Using Python. URL: https://medium.com/@angelAjcabul/denoising-data-with-fast-fourier-transform-277bc84e84a4.

Gábor Petneházi, Recurrent Neural Networks for Time Series Forecasting. arXiv preprint arXiv:1901.00069, 2019.

L. Nashold and R. Krishnan, (2020). Using LSTM and SARIMA Models to Forecast Cluster CPU Usage. URL: https://cs229.stanford.edu/proj2020spr/report/Nashold_Krishnan.pdf.

Sharmasaravanan, (2024). Time-Series Forecasting Using GRU: A Step-by-Step Guide. URL: https://sharmasaravanan.medium.com/time-series-forecasting-using-gru-a-step-by-step-guide-b537dc8dcfba.

Ghani Rizky Naufal and Antoni Wibowo, (2023). Time Series Forecasting Based on Deep Learning CNN-LSTM-GRU Model on Stock Prices. https://doi.org/10.14445/22315381/IJETT-V71I6P215

Yihong Dong, Ge Li, Yongding Tao, Xue Jiang, Kechi Zhang, Jia Li, Jinliang Deng, Jing Su, Jun Zhang, and Jingjing Xu, FAN: Fourier Analysis Networks. arXiv preprint arXiv:2410.02675, 2024.

Sam Jeong and Hae Yong Kim, Convolutional Fourier Analysis Network (CFAN): A Unified Time-Frequency Approach for ECG Classification. arXiv preprint arXiv:2502.00497, 2025.

Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljačić, Thomas Y. Hou, and Max Tegmark, KAN: Kolmogorov-Arnold Networks. arXiv preprint arXiv:2404.19756, 2024.

Kim C. Raath, Katherine B. Ensor, Alena Crivello, and David W. Scott, (2023). Denoising Non-Stationary Signals via Dynamic Multivariate Complex Wavelet Thresholding. https://doi.org/10.3390/e25111546

Mina Kemiha, Empirical Mode Decomposition and NormalShrink Thresholding for Speech Denoising. arXiv preprint arXiv:1405.7895, 2014. https://doi.org/10.5121/ijit.2014.3203

Bingze Dai and Wen Bai, Denoising ECG by Adaptive Filter with Empirical Mode Decomposition. arXiv preprint arXiv:2108.08376, 2021.

Kunpeng Xu, Lifei Chen, and Shengrui Wang, Kolmogorov-Arnold Networks for Time Series: Bridging Predictive Power and Interpretability. arXiv preprint arXiv:2406.02496, 2024.

Xiao Han, Xinfeng Zhang, Yiling Wu, Zhenduo Zhang, and Zhe Wu, Are KANs Effective for Multivariate Time Series Forecasting? arXiv preprint arXiv:2408.11306, 2024.

Xingyi Yang and Xinchao Wang, Kolmogorov–Arnold Transformer. arXiv preprint arXiv:2409.10594, 2024.

Published

2025-06-28

Issue

Section

COMPUTER SCIENCES AND INFORMATION TECHNOLOGIES