The project aims to build a repository of systems that implement effect handlers, benchmarks implemented in those systems, and scripts to build the systems, run the benchmarks, and produce the results. A system may be either a programming language with native support for effect handlers, or a library that embeds effect handlers in another programming language.
Ensure that Docker is installed on your system. Then `make bench_ocaml` runs the Multicore OCaml benchmarks and writes the results to `benchmarks/ocaml/results.csv`.
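For a fuller run, the per-system Makefile targets described under the repository structure below can be chained. A sketch, assuming the Multicore OCaml system uses the name `ocaml` in its target names (as `bench_ocaml` suggests):

```sh
# Sketch of an end-to-end run; target names follow the
# system_<system_name>/test_<system_name>/bench_<system_name> pattern.
$ make system_ocaml                   # build the Docker image for Multicore OCaml
$ make test_ocaml                     # sanity-check the benchmark programs
$ make bench_ocaml                    # run the benchmarks
$ head benchmarks/ocaml/results.csv   # inspect the collected results
```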
System | Availability |
---|---|
Eff | |
Effekt | |
Handlers in Action | |
Koka | |
libhandler | |
libmpeff | |
libseff | |
Links | |
Multicore OCaml | |
Benchmark | Eff | Effekt | Handlers in Action | Koka | Multicore OCaml | libseff | libmpeff |
---|---|---|---|---|---|---|---|
Countdown | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
Fibonacci Recursive | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
Product Early | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
Iterator | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
Nqueens | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ➖ | ❌ |
Generator | ✔️ | ❌ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
Tree explore | ✔️ | ✔️ | ❌ | ✔️ | ✔️ | ➖ | ❌ |
Triples | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ➖ | ❌ |
Parsing Dollars | ✔️ | ✔️ | ❌ | ✔️ | ✔️ | ✔️ | ✔️ |
Resume Nontail | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
Handler Sieve | ✔️ | ❌ | ❌ | ✔️ | ✔️ | ✔️ | ✔️ |
Legend:
- ✔️ : Benchmark is implemented
- ❌ : Benchmark is not implemented
- ➖ : Benchmark is unsuitable for this system, so there is no sense in implementing it (e.g. benchmarking the speed of file transfer in a language that does not support networking)
- `systems/<system_name>/Dockerfile` is the Dockerfile used to build the system.
- `benchmarks/<system_name>/<benchmark_name>/` contains the source of the benchmark `<benchmark_name>` for the system `<system_name>`.
- `descriptions/<benchmark_name>/` contains the description of the benchmark, its inputs and outputs, and any special considerations.
- `Makefile` is used to build the systems and benchmarks, and to run the benchmarks. For each system, the Makefile has the following rules:
  - `system_<system_name>`: builds the `<system_name>` Docker image.
  - `bench_<system_name>`: runs the benchmarks using the Docker image for `<system_name>`.
  - `test_<system_name>`: tests the benchmark programs using the Docker image for `<system_name>`.
- `LABELS.md` contains a list of available benchmark labels. Each benchmark can be assigned multiple labels.
The role of the benchmarking chairs is to curate the repository, monitor the quality of benchmarks, and solicit new benchmarks and fixes to existing benchmarks. Each benchmarking chair serves two consecutive terms of six months each.
The current co-chairs are:
- Cong Ma (2024/11/11 - 2025/05/11 - 2025/11/11)
- Jesse Sigal (2023/09/27 - 2024/03/27 - 2024/09/27)
Past co-chairs:
- Philipp Schuster (2022/09/21 - 2023/03/21 - 2024/11/11)
- Filip Koprivec (2022/01/21 - 2022/07/22 - 2023/03/21)
- Daniel Hillerström (Inaugural chair, 2021/07/23 - 2022/01/22 - 2022/09/20)
If you wish to implement `<goat_benchmark>` for system `<awesome_system>` (see the example after this list):

- Add the benchmark sources under `benchmarks/<awesome_system>/<goat_benchmark>`. The benchmark takes its input as a command-line argument and prints its outputs.
- Update `benchmarks/<awesome_system>/Makefile` to build, test, and benchmark the program. Use the parameters for testing and benchmarking provided in `descriptions/<goat_benchmark>/README.md`.
- Update this `README.md` file to tick the new benchmark in the benchmark availability table.
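As a concrete illustration, a hypothetical sequence of steps for adding a `countdown` implementation to the Links system; the directory names follow the convention above, but the source file name is an assumption:

```sh
# Hypothetical example: "links" and "countdown.links" are placeholders;
# mirror the layout of the existing benchmarks in the repository.
$ mkdir -p benchmarks/links/countdown
$ $EDITOR benchmarks/links/countdown/countdown.links   # reads its input from argv, prints its outputs
$ $EDITOR benchmarks/links/Makefile                    # add build/test/bench entries for countdown
$ make test_links                                      # check that the new benchmark builds and runs
```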
If you wish to add a new benchmark `<goat_benchmark>` (see the example after this list):

- Add a benchmark description under `descriptions/<goat_benchmark>/README.md`. Use the template provided in `descriptions/template/README.md`.
- Provide a reference implementation for at least one system.
- Update this `README.md` and add a new row to the benchmark availability table.
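For example, a new benchmark description can be bootstrapped from the template; `goat_benchmark` is a placeholder name:

```sh
# Hypothetical: "goat_benchmark" stands in for the real benchmark name.
$ mkdir -p descriptions/goat_benchmark
$ cp descriptions/template/README.md descriptions/goat_benchmark/README.md
$ $EDITOR descriptions/goat_benchmark/README.md   # fill in the description, inputs, outputs, and parameters
```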
If you wish to contribute a system `<awesome_system>` (see the sketch after this list):

- Add a new Dockerfile at `systems/<awesome_system>/Dockerfile`.
- Add a new workflow under `.github/workflows/system_<awesome_system>.yml`. It should build the system and run tests.
- Update this `README.md` and add a new column to the benchmark availability table. Create a status badge and add it as well.
- Update the `Makefile` with commands that build, test, and benchmark the system.
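A sketch of the files such a contribution touches; `awesome_system` is a placeholder for the real system name:

```sh
# Hypothetical scaffolding for a new system.
$ mkdir -p systems/awesome_system
$ $EDITOR systems/awesome_system/Dockerfile             # build the system on top of the Ubuntu 22.04 base image
$ $EDITOR .github/workflows/system_awesome_system.yml   # CI: build the image and run the tests
$ $EDITOR Makefile                                      # add system_/test_/bench_ targets for awesome_system
$ make system_awesome_system                            # check that the Docker image builds locally
```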
Ideally, you will also add benchmarks to go with the new system.
Having a Dockerfile aids reproducibility and ensures that we can build the system from scratch natively on a machine if needed. The benchmarking chair will push the image to Docker Hub so that systems are easily available for wider use.
We use Ubuntu 22.04 as the base image for building the systems and hyperfine to run the benchmarks.
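To make the measurement setup concrete, a minimal sketch of how hyperfine can time a containerised benchmark and export CSV results; the image name, program, and input size are placeholders, and the actual invocation in the Makefiles may differ:

```sh
# Hedged sketch, not the repository's actual invocation.
$ hyperfine --warmup 3 --export-csv results.csv \
    'docker run --rm effect-handlers/ocaml ./countdown 100000000'
```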
We curate software artifacts from papers related to effect handlers. If you wish to contribute your artifacts, then please place your artifact as-is under a suitable directory in `artifacts/`.
There is no review process for artifacts (other than that they must be related to work on effect handlers). Whilst we do not enforce any standards on artifacts, we do recommend that artifacts conform with the artifacts evaluation packaging guidelines used by various programming language conferences.