Lab Paper Accepted by ICDCS 2023

Posted by: 邓玉辉    Date: 2023-04-12


The paper "FaaSBatch: Enhancing the Efficiency of Serverless Computing by Batching and Expanding the Functions", co-authored by PhD student 吴朝锐, Prof. 邓玉辉, and collaborators from our laboratory, has been accepted by the 43rd IEEE International Conference on Distributed Computing Systems (ICDCS 2023). ICDCS is a leading conference in the area of computer systems and architecture and is ranked as a Class B conference by the China Computer Federation (CCF). The 2023 edition will be held in Hong Kong.


The abstract of the paper is as follows:

With high scalability and flexibility, serverless computing is becoming the most promising computing model. Existing serverless computing platforms initiate a container for each function invocation, which leads to a huge waste of computing resources. Our examinations reveal that (i) executing invocations concurrently within a single container can provide comparable performance to that provided by multiple containers (i.e., traditional approaches); (ii) redundant resources generated within a container result in memory resource waste, which prolongs the execution time of function invocations. Motivated by these insightful observations, we propose FaaSBatch - a serverless framework that reduces invocation latency and saves scarce computing resources. In particular, FaaSBatch first classifies concurrent function requests into different function groups according to the invocation information. Next, FaaSBatch batches the invocations of each group, aiming to minimize resource utilization. Then, FaaSBatch utilizes an inline parallel policy to map each group of batched invocations into a single container. Finally, FaaSBatch expands and executes invocations of containers in parallel. To further reduce invocation latency and resource utilization, within each container, FaaSBatch reuses redundant resources created during function execution. We conduct extensive experiments based on Azure traces to evaluate the effectiveness and performance of FaaSBatch. We compare FaaSBatch with three state-of-the-art schedulers: Vanilla, SFS, and Kraken. Our experimental results show that FaaSBatch effectively and remarkably slashes invocation latency and resource overhead. For instance, when executing I/O functions, FaaSBatch cuts back the invocation latency of Vanilla, SFS, and Kraken by up to 92.18%, 89.54%, and 90.65%, respectively; FaaSBatch also slashes the resource overhead of Vanilla, SFS, and Kraken by 58.89% to 94.77%, 43.72% to 90.39%, and 42.99% to 78.88%, respectively.
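To illustrate the group-batch-dispatch flow described in the abstract, the following is a minimal sketch, not the authors' implementation: it groups concurrent invocations by the invoked function, batches each group, and runs each batch inside a single stand-in "container" using threads to emulate inline parallel execution. All names here (Invocation, group_by_function, run_batch_in_one_container, schedule) are hypothetical and introduced only for illustration.

```python
# Hypothetical sketch of batching concurrent serverless invocations.
# Not the FaaSBatch code: a ThreadPoolExecutor stands in for one container
# that executes a batch of invocations concurrently (inline parallelism).
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Invocation:
    func_name: str                  # identifies which function is being invoked
    handler: Callable[[dict], Any]  # the function body to execute
    payload: dict                   # invocation arguments


def group_by_function(invocations: List[Invocation]) -> Dict[str, List[Invocation]]:
    """Classify concurrent requests into groups keyed by the invoked function."""
    groups: Dict[str, List[Invocation]] = defaultdict(list)
    for inv in invocations:
        groups[inv.func_name].append(inv)
    return groups


def run_batch_in_one_container(batch: List[Invocation]) -> List[Any]:
    """Stand-in for one container: run a whole batch concurrently with threads."""
    with ThreadPoolExecutor(max_workers=len(batch)) as pool:
        futures = [pool.submit(inv.handler, inv.payload) for inv in batch]
        return [f.result() for f in futures]


def schedule(invocations: List[Invocation]) -> Dict[str, List[Any]]:
    """Map each batched group onto its own container and collect the results."""
    results: Dict[str, List[Any]] = {}
    for func_name, batch in group_by_function(invocations).items():
        results[func_name] = run_batch_in_one_container(batch)
    return results


if __name__ == "__main__":
    # Three concurrent requests to the same function share one "container".
    echo = lambda payload: payload["x"] * 2
    invs = [Invocation("echo", echo, {"x": i}) for i in range(3)]
    print(schedule(invs))  # {'echo': [0, 2, 4]}
```

In this sketch, each group maps to exactly one worker pool, which mirrors the abstract's idea that invocations of the same function can share a container instead of each spawning their own; resource reuse inside a container is not modeled here.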