
Cppless: Single-Source and High-Performance Serverless Programming in C++

This repository contains the replication artifact for the paper "Cppless: Single-Source and High-Performance Serverless Programming in C++", accepted at ACM TACO. All data was obtained in July 2024 (local and Lambda runs) and March 2025 (binary sizes of packages). We executed all benchmarks from an m5.4xlarge virtual machine in the eu-central-1 region of AWS.

Invoking AWS Benchmarks

(1) To run the benchmarks, you need an active AWS account.

(2) You need to create a Lambda role that specifies the function's permissions.

(3) Define the following environment variables:

AWS_REGION=your-aws-region
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_FUNCTION_ROLE_ARN=lambda-role-arn
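
In a Bash session, these can be set with export; a minimal sketch, where all values (including the example role ARN) are placeholders:

export AWS_REGION=eu-central-1
export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-access-key
export AWS_FUNCTION_ROLE_ARN=arn:aws:iam::123456789012:role/your-lambda-role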

(4) After that, you can build the benchmarks in the cppless repository; follow the build instructions provided there.

(5) To query cloud logs, a direct query on CloudWatch is often insufficient: we found that it frequently misses function invocations. Instead, we recommend using the AWS CLI or the web interface to export a function's logs to an S3 bucket, specifying a prefix inside the bucket under which all function logs are stored. Then, you can use the provided scripts in tools/log_query/ to extract the data from the bucket:

python3 tools/log_query/query_logs_s3.py <bucket-name> <prefix-inside-bucket> <place-to-store-output>
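
For reference, the export itself can be started with the AWS CLI; a hedged sketch, where the log group name, time range (milliseconds since epoch), bucket, and prefix are placeholders:

aws logs create-export-task \
  --log-group-name /aws/lambda/your-function-name \
  --from 1719792000000 --to 1722470400000 \
  --destination your-bucket-name \
  --destination-prefix your-prefix

Note that the destination bucket must have a policy that allows CloudWatch Logs to write to it, and the export task runs asynchronously, so wait for it to finish before querying.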

Microbenchmarks (Section 5.1)

Serialization

Run the benchmark with:

benchmarks/micro/serialization/run_serialization.py <cppless-build> <cppless-artifact>/data

The provided data and the Jupyter notebook analysis/microbenchmarks/serialization.ipynb reproduce Figure 11.

AWS Lambda Client

The provided data and the Jupyter notebook analysis/microbenchmarks/invocations.ipynb reproduce Figure 12.

One VM

Edit benchmarks/micro/invocations/run.sh and change the function name based on your deployment; the name is needed to update the function's memory size. Also change the cppless build directory.
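
For reference, a manual memory-size update with the AWS CLI looks roughly as follows (function name and size are placeholders):

aws lambda update-function-configuration \
  --function-name your-function-name \
  --memory-size 2048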

Then execute the benchmark for all configurations:

benchmarks/micro/invocations/run.sh

Multiple VMs (MPI)

To execute the MPI benchmark, you first need to build it:

LDFLAGS="-lmpi -lmpi_cxx" MPI_ROOT=/usr/lib/x86_64-linux-gnu/openmpi/ cmake \
  -DCPPLESS_TOOLCHAIN=native -DBUILD_EXAMPLES=OFF -DBUILD_BENCHMARKS=ON \
  -DCMAKE_BUILD_TYPE=Release -G "Unix Makefiles" \
  -DMPI_CXX_LIB_NAMES="mpi;mpi_cxx" \
  -DMPI_mpi_LIBRARY=/usr/lib/x86_64-linux-gnu/libmpi.so \
  -DMPI_mpi_cxx_LIBRARY=/usr/lib/x86_64-linux-gnu/libmpi_cxx.so \
  ../cppless

Then, update the hostfile in benchmarks/micro/invocations with the IP addresses of your VMs, and insert your AWS account details into benchmarks/micro/invocations/run_mpi.sh. Finally, run the benchmark; for example, to invoke 128 functions from each of 8 processes with 21 repetitions:

benchmarks/micro/invocations/run_mpi.sh <cppless-build>/benchmarks/custom/invocations/benchmark_custom_invocations_mpi 128 21 out.csv

The script run_mpi.py encodes all repetitions and parameter combinations. Output files are stored separately on each VM.
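
For reference, assuming Open MPI, the hostfile lists one VM per line; a minimal sketch with placeholder IP addresses (adjust the slot counts to match the number of processes per VM):

10.0.0.11 slots=1
10.0.0.12 slots=1
10.0.0.13 slots=1
10.0.0.14 slots=1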

Main Benchmarks

Monte Carlo Pi (Section 5.2)

This benchmark consists of four serverless components and two local ones. Each has an associated Bash script in benchmarks/pi/.

The provided data and Jupyter notebooks reproduce the following figures:

  • analysis/pi/weak_scaling.ipynb reproduces Figure 13.
  • analysis/pi/strong_scaling.ipynb reproduces Figures 14 and 15.
  • analysis/pi/memory_scalability.ipynb reproduces Figure 16.
  • analysis/pi/cold_startups.ipynb reproduces Figure 17.

Local & Lambda Baseline

For local and Lambda baselines, we provide additional build scripts. You will need the AWS SDK for C++; we used commit f067d450a8689f3ae05fbcd96039cdd9f2d0276c.
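
A hedged sketch of pinning the SDK to that commit (the SDK uses Git submodules, which need to be initialized after checkout):

git clone https://github.com/aws/aws-sdk-cpp.git
cd aws-sdk-cpp
git checkout f067d450a8689f3ae05fbcd96039cdd9f2d0276c
git submodule update --init --recursive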

Cold Invocations

In cold_invocations, you will find scripts to deploy Python functions and execute both variants: Python and C++.

OpenMP BoTS Benchmark (Section 5.3)

The provided data and Jupyter notebook analysis/bots/nqueens.ipynb reproduce Figures 18 and 19.

Use the provided Python scripts to execute serverless and local benchmarks.

Ray Tracing (Section 5.4)

The provided data and Jupyter notebook analysis/ray/scaling.ipynb reproduce Figures 20 and 21.

Instructions for reproducing the results are provided with the benchmark. We used the data produced by the new benchmark version with a custom tile size.

Code Size (Section 5.5)

A detailed analysis of the data in Figure 22 can be found in analysis/binary_sizes.md.

Prototypes

ARM Target (Section 4.4)

The experimental cross-compilation to the ARM target is provided in the main cppless implementation. To test it, create the sysroot using the instructions provided in the cppless repository, and then build Cppless:

cmake -DCPPLESS_TOOLCHAIN=aarch64 -DBUILD_BENCHMARKS=ON -DBUILD_EXAMPLES=ON -G "Unix Makefiles" <cppless-source>

Rust Metaprogramming (Section 7)

We implemented a basic prototype of a Rust application that conducts AST transformations at compile time and compiles an application into different entry points, without compiler modifications.

This simple library is in prototypes/rust-offloading.

Google Cloud Functions (Section 7.1)

The directory prototypes/gcloud-wasm contains a prototype demonstrating that C++ functions can be compiled to WebAssembly with Clang and later executed as Node.js Google Cloud Functions.
