
Latency-Aware Kubernetes Scheduling for Microservices Orchestration at the Edge

Centofanti C.; Tiberti W.; Marotta A.; Graziosi F.; Cassioli D.
2023-01-01

Abstract

Network and computing infrastructures are nowadays challenged to meet the increasingly stringent requirements of novel applications. One of the most critical aspects is optimizing the latency perceived by the end-user accessing the services. New network architectures offer a natural framework for the efficient orchestration of microservices. However, how to incorporate accurate latency metrics into orchestration decisions still represents an open challenge. In this work we propose a novel architectural approach to perform scheduling operations in a Kubernetes environment. Existing approaches proposed the collection of network metrics, e.g. the latency between nodes in the cluster, via purposely-built external measurement services deployed in the cluster. Compared to these approaches, the proposed one: (i) collects performance metrics at the application layer instead of the network layer; (ii) relies on latency measurements performed inside the service of interest instead of utilizing external measurement services; (iii) takes scheduling decisions based on the effective end-user perceived latency instead of the latency between cluster nodes. We show the effectiveness of our approach by adopting an iterative discovery strategy able to dynamically determine which node operates with the lowest latency for Kubernetes pod placement.
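To illustrate the iterative discovery strategy mentioned in the abstract, the following is a minimal, hypothetical Python sketch and not the authors' implementation: the deployment name, namespace, user-facing endpoint URL, probe count, and settling delay are all assumptions made for illustration. It pins a deployment to each cluster node in turn (via a nodeSelector patch through the standard kubernetes client), measures the application-layer latency perceived by a client with plain HTTP requests, and finally keeps the placement on the lowest-latency node.

# Hypothetical sketch of an iterative, latency-driven pod placement loop.
# All names, URLs, and timings below are illustrative assumptions.
import time
import requests
from kubernetes import client, config

SERVICE_URL = "http://service.example.com/healthz"   # assumed user-facing endpoint
DEPLOYMENT, NAMESPACE = "latency-sensitive-app", "default"
PROBES_PER_NODE = 10
SETTLE_SECONDS = 30  # crude wait for the pod to be rescheduled and become ready

def measure_user_latency(url, probes):
    """Average application-layer latency (seconds) as seen by the client."""
    samples = []
    for _ in range(probes):
        start = time.monotonic()
        requests.get(url, timeout=2)
        samples.append(time.monotonic() - start)
    return sum(samples) / len(samples)

def pin_to_node(apps, node_name):
    """Constrain the deployment's pod to a specific node via nodeSelector."""
    patch = {"spec": {"template": {"spec": {
        "nodeSelector": {"kubernetes.io/hostname": node_name}}}}}
    apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, patch)

def main():
    config.load_kube_config()
    core, apps = client.CoreV1Api(), client.AppsV1Api()
    results = {}
    for node in core.list_node().items:
        name = node.metadata.name
        pin_to_node(apps, name)          # try this placement
        time.sleep(SETTLE_SECONDS)
        results[name] = measure_user_latency(SERVICE_URL, PROBES_PER_NODE)
    best = min(results, key=results.get)
    pin_to_node(apps, best)              # final placement on the lowest-latency node

if __name__ == "__main__":
    main()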
Year: 2023
ISBN: 979-8-3503-9980-6


Use this identifier to cite or link to this document: https://hdl.handle.net/11697/225122
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science (ISI): 0