This repository contains a local TLS 1.3 laboratory for studying post-quantum migration strategies in certificate hierarchies and key exchange.
The project focuses on practical comparisons between different combinations of:
- ML-DSA and SLH-DSA certificate hierarchies
- depth-2 and depth-3 PKI topologies
- classical, hybrid, and pure-PQC TLS groups
The repository includes the code and automation required to:
- generate certificate chains from declarative profiles
- launch OpenSSL `s_server` instances with the OQS provider
- run benchmark campaigns with a custom C client
- collect per-handshake metrics, PCAP traces, and `perf` data
The data analysis notebooks, final paper, and short report are intentionally not included here. This repository is the experimental and reproducible execution layer of the work.
The lab is designed to study questions such as:
- What is the practical impact of ML-DSA vs SLH-DSA in TLS server authentication?
- How does the position of each algorithm in the certificate hierarchy affect the handshake?
- What changes when moving from depth-2 to depth-3 chains?
- How do classical, hybrid, and pure-PQC key exchange groups compare?
- Which migration strategies appear operationally plausible, and which look too expensive?
```
bench/          TLS benchmarking client in C
oqs-provider/   Local OpenSSL + oqs-provider build
pki_factory/    PKI generation scripts and generated chain material
profiles/       Declarative JSON profiles for chains, scenarios,
                and campaign lists
runner/         Scripts to launch one scenario or a full campaign list
results/        Per-scenario outputs: bench CSV, perf CSV, server logs,
                PCAP traces, meta JSON
analysis/       Optional helper scripts for summarizing runs
```
Certificate chains are generated from JSON chain profiles stored in `profiles/chains/`.
Supported topologies:
- depth-2: root -> leaf
- depth-3: root -> intermediate -> leaf
Generated output is written to `pki_factory/output/<profile_id>/`.
Each generated profile includes:
- certificates and keys
- CSRs
- `chain.json`, `meta.json`, and `verify.txt`
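As an illustration of the declarative input, a depth-3 chain profile might look like the sketch below. The field names here are hypothetical; the authoritative schema is whatever the files in `profiles/chains/` actually define.

```json
{
  "profile_id": "slh_root__ml_int__ml_leaf",
  "depth": 3,
  "root":         { "alg": "SLH-DSA" },
  "intermediate": { "alg": "ML-DSA" },
  "leaf":         { "alg": "ML-DSA" }
}
```

The profile ID doubles as the output directory name under `pki_factory/output/`.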
The benchmark client is implemented in `bench/tls_bench_client.c`.
It opens a fresh TCP connection for each run and records per-handshake metrics such as:
- elapsed time
- success/failure
- TLS BIO byte counters
- observed peer chain length
- observed peer chain size in DER bytes
Scenarios are defined in:
profiles/scenarios/
Each scenario specifies:
- TLS group
- chain profile
- host / port / SNI
- number of runs
- whether to capture PCAP
- whether to collect `perf` data
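The fields above could be expressed in a scenario file along the following lines. All field names and values in this sketch are hypothetical; the files in `profiles/scenarios/` define the real schema.

```json
{
  "scenario_id": "x25519mlkem768__slh_root__ml_int__ml_leaf",
  "group": "X25519MLKEM768",
  "chain_profile": "slh_root__ml_int__ml_leaf",
  "host": "127.0.0.1",
  "port": 4433,
  "sni": "localhost",
  "runs": 30,
  "capture_pcap": true,
  "capture_perf": false
}
```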
The main execution script is:

```sh
./runner/run_scenario.sh <scenario.json>
```

Campaign lists are defined in:
profiles/lists/
and can be executed with:
```sh
./runner/run_scenario_list.sh <listfile>
```

Typical requirements are:
- Linux
- GCC
- `jq`, `perf`, and `tcpdump`
- a local OpenSSL build with `oqs-provider`
The repository assumes that the local OpenSSL binary is available at `oqs-provider/.local/bin/openssl`.
Compile the benchmark client with:
```sh
gcc -O2 -Wall -Ioqs-provider/.local/include bench/tls_bench_client.c \
  -o bench/tls_bench_client \
  -Loqs-provider/.local/lib64 \
  -Wl,-rpath,"$PWD/oqs-provider/.local/lib64" \
  -lssl -lcrypto
```

Example:

```sh
./pki_factory/scripts/gen_chain.sh profiles/chains/slh_root__ml_int__ml_leaf.json
```

This generates all certificate material for that profile under:
pki_factory/output/slh_root__ml_int__ml_leaf/
Example:

```sh
./runner/run_scenario.sh profiles/scenarios/x25519mlkem768__slh_root__ml_int__ml_leaf.json
```

You can override the number of runs at execution time:

```sh
RUNS_OVERRIDE=50 ./runner/run_scenario.sh \
  profiles/scenarios/x25519mlkem768__slh_root__ml_int__ml_leaf.json
```

You can also force server-side perf collection:

```sh
RUNS_OVERRIDE=50 CAPTURE_PERF_SERVER_OVERRIDE=true \
  ./runner/run_scenario.sh \
  profiles/scenarios/x25519mlkem768__slh_root__ml_int__ml_leaf.json
```

Example:

```sh
RUNS_OVERRIDE=50 ./runner/run_scenario_list.sh profiles/lists/campaign_A.list
```

Available campaign lists include:
- `campaign_A.list`
- `campaign_B_core.list`
- `campaign_B_full.list`
- `campaign_C.list`
- `campaign_D.list`
Each scenario execution creates a timestamped directory under `results/` containing files such as:

- `bench_<scenario>.csv`
- `perf_client_<scenario>.csv`
- `perf_server_<scenario>.csv`
- `server_<scenario>.log`
- `pcap_<scenario>.pcap`
- `meta_<scenario>.json`
These outputs are intended to support later statistical analysis and reporting.
The typical workflow is:

- Generate PKI profiles
- Build the benchmark client
- Run selected scenarios or campaign lists
- Collect CSV, PCAP, and `perf` outputs
- Analyze results outside this repository
MIT License.