This repository hosts the experiments for the TLess project, a system design for confidential serverless workflows.
We implement TLess on top of two confidential FaaS runtimes, each representative of a different point in the design space for confidential serverless:
- Faasm + SGX: a port of Faasm that runs WASM sandboxes inside SGX enclaves.
- CC-Knative: a port of the Knative runtime that runs Knative services as containerized functions inside confidential VMs (AMD SEV).
All code snippets in this repository assume that you have activated the virtual environment:

```bash
source ./bin/workon.sh
```
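If the virtual environment does not exist yet, a minimal first-time setup could look like the sketch below. This assumes a standard Python venv layout under `./venv`; the repository's own tooling (e.g. `./bin/workon.sh`) may handle creation differently.

```bash
# Hypothetical first-time setup; ./bin/workon.sh may already take care of this.
python3 -m venv venv          # create a standard Python virtual environment
source venv/bin/activate      # activate it in the current shell
pip install --upgrade pip     # make sure pip is up to date
```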
Install Rust and rust-analyzer; the latter can be installed with `rustup component add rust-analyzer`.
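If Rust is not installed yet, the standard rustup installer (not specific to this repository) can be used. A sketch:

```bash
# Install the Rust toolchain via the official rustup installer script
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Add the rust-analyzer component to the active toolchain
rustup component add rust-analyzer
```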
In addition, install the following system packages:

```bash
# TODO: install this in the background
sudo apt install -y \
    libfontconfig1-dev \
    libssl-dev \
    pkg-config
```
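As an optional sanity check (not required by the repository), you can verify that the native libraries above are discoverable through pkg-config:

```bash
# Print the versions pkg-config resolves for the installed libraries
pkg-config --modversion openssl fontconfig
```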
TLess can currently be deployed on top of two serverless runtimes, Faasm and Knative.
For instructions on how to deploy each of them, see:
This repository implements four different workflows:
- FINRA - Based on the AWS FINRA case study.
- ML Training - Ported from Orion and RMMap.
- ML Inference - Ported from RMMap.
- Word Count - Ported from the MapReduce example in the FunctionBench paper.
Workflow\Baseline | Faasm | SGX-Faasm | TLess-Faasm | Knative | CC-Knative | TLess-Knative |
---|---|---|---|---|---|---|
FINRA | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ |
ML Training | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ |
ML Inference | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ |
Word Count | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ |
We run the following experiments:
- End-to-end latency: measures the end-to-end execution latency for each workflow.
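As an illustration of what the end-to-end latency measurement captures, the sketch below times a single workflow invocation from the client side. The `./invoke_workflow.sh` script and its argument are hypothetical placeholders, not part of this repository.

```bash
# Hypothetical example: time one end-to-end execution of a workflow.
# ./invoke_workflow.sh stands in for whatever entry point triggers the workflow.
start_ns=$(date +%s%N)
./invoke_workflow.sh finra          # placeholder invocation of the FINRA workflow
end_ns=$(date +%s%N)
echo "end-to-end latency: $(( (end_ns - start_ns) / 1000000 )) ms"
```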