pq-showdown is a comprehensive benchmarking framework for comparing different priority queue implementations in the context of Dijkstra's shortest path algorithm on grid graphs. The project evaluates the performance of four priority queue variants:
- Binary Heap: A classic array-based binary min-heap
- Pairing Heap: A self-adjusting heap with amortized efficient decrease-key operations
- Fibonacci Heap: Theoretically optimal heap with O(1) amortized decrease-key
- Hollow Heap: A recent variant combining simplicity with strong theoretical guarantees
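All four heaps are exercised through the same core operations: insert, extract-min, and (in true decrease-key mode) decrease-key. As a rough illustration only, since the project's actual implementations live in src/pqshowdown/pq/ and are not shown here, an array-based binary min-heap that supports decrease-key via a position index looks roughly like this:

```python
# Illustrative indexed binary min-heap; NOT the pq-showdown implementation.
# The position map (item -> array index) is what makes decrease-key possible.
class IndexedBinaryHeap:
    def __init__(self):
        self._heap = []   # list of (key, item) pairs, heap-ordered by key
        self._pos = {}    # item -> index of its pair in self._heap

    def insert(self, item, key):
        self._heap.append((key, item))
        self._pos[item] = len(self._heap) - 1
        self._sift_up(len(self._heap) - 1)

    def extract_min(self):
        key, item = self._heap[0]
        del self._pos[item]
        last = self._heap.pop()
        if self._heap:                      # move the last element to the root
            self._heap[0] = last
            self._pos[last[1]] = 0
            self._sift_down(0)
        return item, key

    def decrease_key(self, item, new_key):
        i = self._pos[item]
        assert new_key <= self._heap[i][0], "key may only decrease"
        self._heap[i] = (new_key, item)
        self._sift_up(i)

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self._heap[i][0] >= self._heap[parent][0]:
                break
            self._swap(i, parent)
            i = parent

    def _sift_down(self, i):
        n = len(self._heap)
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and self._heap[child][0] < self._heap[smallest][0]:
                    smallest = child
            if smallest == i:
                break
            self._swap(i, smallest)
            i = smallest

    def _swap(self, i, j):
        self._heap[i], self._heap[j] = self._heap[j], self._heap[i]
        self._pos[self._heap[i][1]] = i
        self._pos[self._heap[j][1]] = j
```

The pairing, Fibonacci, and hollow heaps replace the flat array and sift operations with pointer-based node structures, which is what allows their cheaper amortized decrease-key bounds.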
The benchmarking suite tests these data structures across various graph scenarios with different characteristics:
- Grid layouts: Varying sizes (512×512 to 1024×1024)
- Edge patterns: 4-neighborhood vs 8-neighborhood (diagonal), torus wrapping
- Edge weight distributions: Low variance, medium variance, high variance, dynamic ranges
- Graph complexity: Obstacles, heavy penalties
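To give a feel for what the weight regimes mean in practice, the snippet below draws integer edge weights for three of them. It is illustrative only: the real generators live in src/pqshowdown/graph/, the narrow and wide ranges shown here are placeholders, and only the heavy-tail numbers mirror the documented --heavy-p / --heavy-hi defaults.

```python
import numpy as np

# Placeholder sketch of the weight regimes; NOT the project's generators.
rng = np.random.default_rng(37)
m = 1_000_000                                   # placeholder edge count

low_var = rng.integers(8, 13, size=m)           # narrow range (placeholder bounds)
mid_var = rng.integers(1, 101, size=m)          # wider range (placeholder bounds)

heavy = rng.integers(1, 11, size=m)             # base weights
tail = rng.random(m) < 0.20                     # ~20% heavy edges (--heavy-p default)
heavy[tail] = rng.integers(50, 201, size=tail.sum())  # --heavy-hi default 50 200

print(low_var.std(), mid_var.std(), heavy.std())
```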
Each priority queue is tested in two modes:
- True decrease-key: Uses explicit decrease-key operations when finding shorter paths
- Lazy deletion: Inserts duplicate entries and filters stale ones during extraction
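The two modes differ only in how an improved tentative distance is propagated to the queue. The sketch below is illustrative, not the code in src/pqshowdown/algo/: it shows the lazy-deletion strategy using Python's heapq, which has no decrease-key; in true decrease-key mode the duplicate push would instead be a decrease_key call on an addressable heap.

```python
import heapq

def dijkstra_lazy(adj, source):
    """Dijkstra with lazy deletion: push duplicates, skip stale entries on pop."""
    dist = {source: 0}
    heap = [(0, source)]                    # (tentative distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                        # stale entry (a "stale skip" in the metrics)
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))   # duplicate push instead of decrease-key
    return dist

# Tiny usage example on a 4-node graph.
adj = {0: [(1, 2), (2, 5)], 1: [(2, 1), (3, 7)], 2: [(3, 2)], 3: []}
print(dijkstra_lazy(adj, 0))                # {0: 0, 1: 2, 2: 3, 3: 5}
```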
The framework captures detailed metrics including runtime, memory usage, operation counts (insert, extract-min, decrease-key), and produces visualization-ready data for analysis.
pq-showdown/
├── src/pqshowdown/ # Main package source code
│ ├── algo/ # Algorithm implementations (Dijkstra)
│ ├── bench/ # Benchmarking infrastructure
│ ├── graph/ # Graph generation and CSR representation
│ ├── pq/ # Priority queue implementations
│ └── viz/ # Visualization and plotting utilities
├── scripts/ # Entry point scripts
│ └── bench.py # Main benchmarking script
├── tests/ # Unit tests
├── results/ # Benchmark results (JSONL files)
├── docs/figures/ # Generated plots and aggregated data
├── pyproject.toml # Poetry project configuration
├── run_pqshowdown.bat # Windows batch script for full benchmark suite
└── README.md # This file
- Python: Version 3.12 to 3.14
- Poetry: Python dependency management tool (see Poetry's official installation guide)
- Operating System: Windows, macOS, or Linux
- Clone the repository (if not already done):
git clone https://github.com/iamshahd/pq-showdown.git
cd pq-showdown
- Install Poetry (if not already installed). On Windows PowerShell:
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
Add Poetry to your PATH if needed, then verify the installation:
poetry --version
- Install project dependencies:
poetry install
This creates a virtual environment and installs all required packages:
- numpy (^2.3.4) - Numerical computing
- pandas (^2.3.3) - Data manipulation
- matplotlib (^3.10.7) - Plotting and visualization
- psutil (^7.1.1) - System and process utilities for memory profiling
Development dependencies:
- pytest - Testing framework
- black - Code formatter
- pre-commit - Git hooks for code quality
- Verify the installation by running the tests:
poetry run pytest
Run a minimal test to verify everything works:
poetry run showdown --pq binary --mode true --scenario grid-midvar --rows 60 --cols 60 --sources 3 --seed 1 --out results/smoke.jsonl
To run the complete benchmark suite across all priority queues, modes, and scenarios:
On Windows:
.\run_pqshowdown.bat
On Linux/macOS: create and run a shell script with similar logic, or run individual benchmarks manually.
Run a specific configuration:
poetry run showdown --pq <PQ_TYPE> --mode <MODE> --scenario <SCENARIO> --rows <ROWS> --cols <COLS> --sources <N> --seed <SEED> --out <OUTPUT_FILE>
Parameters:
- --pq: Priority queue type (binary, pairing, fibonacci, hollow)
- --mode: Decrease-key mode (true for explicit decrease-key, lazy for lazy deletion)
- --scenario: Graph scenario (grid-lowvar, grid-midvar, grid-heavy, grid-dynamic, grid-obstacles)
- --rows, --cols: Grid dimensions
- --sources: Number of random source nodes to run Dijkstra from
- --seed: Random seed for reproducibility
- --out: Output JSONL file path
Additional options:
- --diag8: Enable 8-neighborhood (diagonal edges)
- --torus: Enable torus wrapping (edges wrap around grid boundaries)
- --obstacle-prob: Obstacle probability for the grid-obstacles scenario (default: 0.15)
- --heavy-p: Heavy tail probability for the grid-heavy scenario (default: 0.20)
- --heavy-hi: Heavy tail weight range as two integers (default: 50 200)
- --dyn-delta: Delta value for the grid-dynamic scenario (default: 25)
- --dyn-frac: Fraction of cells in the dynamic patch (default: 0.20)
Example Scenarios:
Low variance, 4-neighborhood:
poetry run showdown --pq fibonacci --mode true --scenario grid-lowvar --rows 512 --cols 512 --sources 10 --seed 37 --out results\test.jsonl
Medium variance with 8-neighborhood and torus:
poetry run showdown --pq pairing --mode true --scenario grid-midvar --rows 512 --cols 512 --diag8 --torus --sources 10 --seed 37 --out results\midvar_8N.jsonl
Heavy tail distribution:
poetry run showdown --pq binary --mode lazy --scenario grid-heavy --rows 512 --cols 512 --sources 10 --seed 37 --heavy-p 0.25 --heavy-hi 80 300 --out results\heavy.jsonl
Grid with obstacles:
poetry run showdown --pq hollow --mode lazy --scenario grid-obstacles --rows 1024 --cols 1024 --sources 12 --seed 37 --obstacle-prob 0.15 --out results\obstacles.jsonl
Dynamic scenario (pre/post congestion):
poetry run showdown --pq fibonacci --mode true --scenario grid-dynamic --rows 512 --cols 512 --sources 10 --seed 37 --dyn-delta 30 --dyn-frac 0.25 --out results\dynamic.jsonl
After running benchmarks, generate visualizations:
poetry run viz <RESULTS_DIR> --out <OUTPUT_DIR>
Parameters:
- <RESULTS_DIR>: Folder containing *.jsonl result files (e.g., results\Wed-10-22_2025)
- --out: Output directory for figures (optional; default: <RESULTS_DIR>\figures)
- --scenario: Limit plots to a single scenario id (optional)
Example:
poetry run viz results\Wed-10-22_2025
This creates:
- Per-scenario comparison plots (time bars, scatter plots, operation counts)
- Pre/post comparison plots for dynamic scenarios
- Scaling analysis plots (time vs graph size)
- Aggregated CSV files with summary statistics (aggregate_per_scenario.csv, aggregate_scaling.csv)
Run tests:
poetry run pytest
Or with verbose output:
poetry run pytest -v
Format code:
poetry run black .
Activate the virtual environment:
poetry shell
Type checking and linting: The project includes pre-commit hooks. Install them with:
poetry run pre-commit install
Benchmark results are saved as JSONL (JSON Lines) files, with each line containing:
- Configuration parameters: PQ type, mode, scenario, graph properties (rows, cols, diag8, torus, seed)
- Graph metrics: Number of nodes (n), number of edges (m), number of sources
- Performance metrics: Runtime (time_ms), memory usage (mem_bytes)
- Operation counts: Inserts (pushes), extracts (pops), decrease-keys (dec_keys), relaxations, stale skips (for lazy mode)
- Phase information: "static" for regular scenarios, "pre"/"post" for dynamic scenarios
- Validation metric: Sum of all shortest path distances (dist_sum)
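Because the output is plain JSONL, results are easy to inspect directly, for example with pandas. In the minimal sketch below the metric column names follow the list above, while the configuration column names ("pq", "mode") are assumed to mirror the CLI flags; adjust them if the actual keys differ.

```python
import pandas as pd

# Load one benchmark run (each JSONL line becomes one row).
df = pd.read_json("results/smoke.jsonl", lines=True)

# Mean runtime and operation counts per PQ type and mode.
# NOTE: "pq" and "mode" are assumed column names matching the CLI flags.
summary = df.groupby(["pq", "mode"])[["time_ms", "pushes", "pops", "dec_keys"]].mean()
print(summary)
```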
The visualization tool (viz command) aggregates these results and generates:
- Bar charts comparing time by PQ type and mode
- Scatter plots of time vs decrease-key ratio
- Line plots for dynamic pre/post comparisons
- Scaling plots showing performance vs graph size
- CSV files with aggregated statistics