Multi-Cloud FaaS Platforms
Functions as a Service (FaaS) has gained popularity for programming public clouds due to its simple abstraction, ease of deployment, effortless scaling, and granular billing. Cloud providers also offer basic capabilities to compose these functions into workflows. FaaS and FaaS workflow models, however, are proprietary to each cloud provider. This prevents portability across cloud providers and makes it costly to design workflows that span different cloud providers or data centers. Such portability is increasingly important to meet regulatory requirements, leverage cost arbitrage, and avoid vendor lock-in. Further, the FaaS execution models also differ across providers, and the overheads of FaaS workflows due to message indirection and cold starts need custom optimizations for each platform. In this work, we propose XFaaS (pronounced “Cross FaaS”) [eScience 2022, CCGRID 2023], a cross-platform deployment and orchestration engine for FaaS workflows that operates on multiple clouds. XFaaS allows “zero touch” deployment of functions and workflows across the AWS and Azure clouds by automatically generating the necessary code wrappers and cloud queues, and coordinating with the native FaaS engine of each cloud provider. It also uses intelligent function fusion and placement logic to reduce workflow execution latency in a hybrid cloud while mitigating costs, using performance and billing models specific to the providers based on detailed benchmarks. Our empirical results indicate that fusion offers up to ~75% reduction in latency and ~57% reduction in cost, while placement strategies reduce latency by ~24%, compared to baselines in the best cases.
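To illustrate the idea behind function fusion, the following is a minimal sketch (hypothetical helper names, not XFaaS's actual algorithm): linear chains of functions in a workflow DAG are merged into a single deployable unit, so intermediate results pass in-memory instead of through cloud queues, avoiding message indirection and extra cold starts. The input DAG is assumed to be listed in topological order.

```python
# Hypothetical sketch of function fusion for a FaaS workflow DAG.
# dag: {task: [successor tasks]}, listed in topological order.

def fuse_linear_chains(dag):
    """Greedily merge purely linear chains (single successor with a
    single predecessor) into fused groups; branches stay separate."""
    preds = {t: 0 for t in dag}
    for succs in dag.values():
        for s in succs:
            preds[s] += 1
    fused, seen = [], set()
    for task in dag:
        if task in seen:
            continue
        chain, cur = [task], task
        seen.add(task)
        # Extend while exactly one successor exists and it has exactly
        # one predecessor, i.e. fusing cannot break any other path.
        while (len(dag[cur]) == 1 and preds[dag[cur][0]] == 1
               and dag[cur][0] not in seen):
            cur = dag[cur][0]
            chain.append(cur)
            seen.add(cur)
        fused.append(chain)
    return fused

# Example: A -> B -> C, then C fans out to D and E.
workflow = {"A": ["B"], "B": ["C"], "C": ["D", "E"], "D": [], "E": []}
groups = fuse_linear_chains(workflow)
```

Here the chain A→B→C would be fused into one function, while D and E remain separate since C fans out; a real implementation would also weigh the providers' performance and billing models before fusing.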
Publications
- [eScience 2022] Aakash Khochare, Yogesh Simmhan, Sameep Mehta, Arvind Agarwal, Toward Scientific Workflows in a Serverless World, IEEE International Conference on e-Science (eScience), Poster, 2022
- [CCGRID 2023] Aakash Khochare, Tuhin Khare, Varad Kulkarni and Yogesh Simmhan, XFaaS: Cross-platform Orchestration of FaaS Workflows on Hybrid Clouds, IEEE/ACM CCGRID 2023
Team
Platforms and Scheduling of Quantum+Cloud Computing
Quantum computers are increasingly seen as the next step in the evolution of computing hardware. Quantum devices are being exposed through the same familiar cloud platforms used for classical computing, enabling hybrid applications that combine quantum and classical components to execute and interact seamlessly. Like classical compute resources with different capacities, the features of quantum computers also vary: in the number of qubits, quantum volume, and Circuit Layer Operations Per Second (CLOPS). They also differ in their noise profiles, cost, and the queuing delays to access them. In a recent study, we profile two workload-splitting techniques on IBM’s Quantum Cloud: (1) circuit parallelization, which splits a large circuit into smaller ones, and (2) data parallelization, which splits a large batch of circuits executed on one device into smaller batches run on different IBM devices. In the current iteration of this track, we evaluate their impact on circuit execution times, pre- and post-processing overheads, and the quality of the result, relative to a baseline without any splitting. The results are obtained from real hardware measurements using the open-source Qiskit SDK, complemented by simulations.
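The data-parallelization step above can be sketched as follows (a simplified, hypothetical helper, not the study's exact implementation; the device names are placeholders): a large batch of circuits is partitioned into smaller sub-batches that are distributed round-robin across the available quantum devices, after which the per-device results would be merged in post-processing.

```python
# Hypothetical sketch of data parallelization across quantum devices:
# split one large batch of circuits into smaller sub-batches and
# distribute them round-robin over the available backends.

def split_batches(circuits, devices, max_batch):
    """Return {device: [sub-batches]} with at most max_batch circuits
    per sub-batch, assigned round-robin across devices."""
    assignment = {d: [] for d in devices}
    for i in range(0, len(circuits), max_batch):
        device = devices[(i // max_batch) % len(devices)]
        assignment[device].append(circuits[i:i + max_batch])
    return assignment

# Example: 10 circuits, two devices, sub-batches of up to 3 circuits.
circuits = [f"qc{i}" for i in range(10)]
plan = split_batches(circuits, ["device_a", "device_b"], max_batch=3)
```

In practice, each sub-batch would be transpiled for and submitted to its assigned backend via the Qiskit SDK, and the choice of `max_batch` trades off queuing delay against per-job overhead.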
Publications
- [CCGRID 2023 Early Student Career Showcase] Rajiv Sangle, Tuhin Khare, Padmanabha V. Seshadri and Yogesh Simmhan, Comparing the Orchestration of Quantum Applications on Hybrid Clouds
Team
DAG scheduling using GNN
Scheduling task graphs on distributed, heterogeneous clusters is an NP-hard problem. Most existing solutions either assume a homogeneous environment or propose heuristic algorithms with relatively high computation times. Recent algorithms in this domain are leaning towards neural-network-based approaches. Since task graphs can be represented as Directed Acyclic Graphs (DAGs), Graph Neural Network (GNN) architectures are well suited to capture the inter-dependencies between tasks. In this work, we are training a Graph Neural Network to learn the mapping between tasks and executors so that, given a new task graph, it can generate an optimal schedule.
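As a loose illustration of the idea, the sketch below (a toy stand-in, not a trained GNN and not this project's model) mimics GNN-style message passing over a task DAG: each task's feature is repeatedly aggregated with its predecessors' features, and the resulting scores drive a greedy task-to-executor mapping. A learned model would replace the hand-written aggregation and scoring with trained weights.

```python
# Toy illustration of GNN-style message passing over a task DAG,
# followed by a greedy task-to-executor mapping. All names and the
# aggregation rule are illustrative, not a trained model.

def propagate(dag, feats, rounds=2):
    """dag: {task: [predecessor tasks]}; feats: {task: float}.
    Each round, add the mean of predecessor values to each task."""
    h = dict(feats)
    for _ in range(rounds):
        h = {t: h[t] + sum(h[p] for p in dag[t]) / max(len(dag[t]), 1)
             for t in dag}
    return h

def greedy_schedule(scores, executors):
    """Assign tasks (highest score first) to the least-loaded executor."""
    load = {e: 0.0 for e in executors}
    plan = {}
    for task in sorted(scores, key=scores.get, reverse=True):
        e = min(load, key=load.get)
        plan[task] = e
        load[e] += scores[task]
    return plan

# Example: diamond DAG A -> {B, C} -> D with simple scalar features.
dag = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
feats = {"A": 1.0, "B": 2.0, "C": 3.0, "D": 1.0}
plan = greedy_schedule(propagate(dag, feats), ["ex1", "ex2"])
```

The point of the sketch is the structure: scores that depend on a task's ancestors (so downstream tasks "see" upstream cost), then a placement policy on top; the learned version would optimize both stages end-to-end.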