Quality, Reliability, Security and Robustness in Heterogeneous Systems. 19th EAI International Conference, QShine 2023, Shenzhen, China, October 8 – 9, 2023, Proceedings, Part II

Research Article

Proactive Hybrid Autoscaling for Container-Based Edge Applications in Kubernetes

Citation
    @inproceedings{10.1007/978-3-031-65123-6_24,
      author    = {Kaile Zhu and Shihao Shen and Shizhan Lan and Xiaofei Wang and Cheng Zhang and Chao Qiu and Victor Leung},
      title     = {Proactive Hybrid Autoscaling for Container-Based Edge Applications in Kubernetes},
      booktitle = {Quality, Reliability, Security and Robustness in Heterogeneous Systems. 19th EAI International Conference, QShine 2023, Shenzhen, China, October 8--9, 2023, Proceedings, Part II},
      publisher = {Springer},
      year      = {2024},
      month     = {8},
      keywords  = {Kubernetes, Edge Computing, Autoscaling},
      doi       = {10.1007/978-3-031-65123-6_24}
    }
Kaile Zhu1, Shihao Shen1, Shizhan Lan, Xiaofei Wang1,*, Cheng Zhang2, Chao Qiu1, Victor Leung3
  • 1: College of Intelligence and Computing
  • 2: Institute of Technology
  • 3: College of Computer Science and Software Engineering
*Contact email: xiaofeiwang@tju.edu.cn

Abstract

With the rise of the Internet of Things (IoT), edge computing has been widely adopted in numerous applications. However, current autoscaling tools are not designed for edge applications and cannot efficiently utilize the heterogeneous resources of edge nodes. In this paper, we propose a proactive hybrid autoscaler specifically optimized for edge computing scenarios. Using a Bidirectional Long Short-Term Memory (Bi-LSTM) based load prediction model, the proposed autoscaler predicts future workload and performs scaling operations before it arrives. In addition, an overload compensation algorithm is implemented to mitigate the Quality of Service (QoS) degradation caused by under-prediction. A hybrid scaling method is then applied to simultaneously adjust both the number of pods and their resource quotas without restarting them. Experimental results on a real-world workload dataset show that the proposed load prediction model is more accurate than both the Long Short-Term Memory (LSTM) model and the state-of-the-art statistical model, Autoregressive Integrated Moving Average (ARIMA), which is also more than 350 times slower than our model in prediction speed. Finally, evaluation in a real Kubernetes cluster shows that the proposed proactive hybrid autoscaler outperforms Kubernetes' default Horizontal Pod Autoscaler (HPA) in terms of both QoS and resource utilization efficiency.
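The pipeline the abstract describes (predict the load, compensate for under-prediction, then make a joint horizontal/vertical scaling decision) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the compensation rule, and all capacity parameters (per-pod throughput, CPU quota bounds) are hypothetical.

```python
# Illustrative sketch of a proactive hybrid autoscaling decision.
# All parameters (per-pod capacity, CPU quota bounds) and the
# compensation rule are assumptions; the paper's algorithm may differ.
import math

def compensate(predicted: float, recent_errors: list[float]) -> float:
    """Inflate the prediction by the largest recent under-prediction
    (positive error = actual exceeded prediction) to mitigate the QoS
    degradation caused by under-prediction."""
    worst_under = max((e for e in recent_errors if e > 0), default=0.0)
    return predicted + worst_under

def hybrid_scale(load_rps: float,
                 base_capacity_rps: float = 100.0,  # throughput of one pod at 1.0 CPU
                 min_cpu: float = 0.5,
                 max_cpu: float = 2.0) -> tuple[int, float]:
    """Return (replicas, per-pod CPU quota) so total capacity covers the load.
    Vertical scaling (the quota) absorbs fractional demand without pod
    restarts; horizontal scaling (the replica count) handles the rest."""
    # Fewest replicas that suffice if every pod runs at the maximum quota.
    replicas = max(1, math.ceil(load_rps / (base_capacity_rps * max_cpu)))
    # Then shrink the per-pod quota to just cover the load, within bounds.
    cpu = load_rps / (replicas * base_capacity_rps)
    cpu = min(max_cpu, max(min_cpu, cpu))
    return replicas, cpu

predicted = compensate(430.0, recent_errors=[-12.0, 25.0, 8.0])  # 430 + 25 = 455
replicas, cpu = hybrid_scale(predicted)
print(replicas, round(cpu, 3))
```

In this sketch the compensated prediction of 455 req/s yields 3 replicas at roughly 1.52 CPU each, illustrating how the hybrid decision trades replica count against in-place quota adjustment.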

Keywords
Kubernetes, Edge Computing, Autoscaling
Published
2024-08-20
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-65123-6_24