pdr-benchmark

Future-proof storage and analysis tools for the Röllig et al. (2007) PDR benchmark

License: MIT · Python 3.8+

A comprehensive Python package for working with Photon-Dominated Region (PDR) benchmark data from Röllig et al. (2007). This package provides standardized tools for loading, comparing, and visualizing PDR model outputs across different codes.


🎯 Key Features

  • Complete benchmark dataset - All data from 8 benchmark models (F1-F4, V1-V4) and 8 PDR codes
  • Standardized interface - PDRModelOutput class works with any PDR code
  • Ensemble statistics - Compute median and spread across multiple codes
  • Publication-quality plots - Ready-to-use visualization routines
  • Extensible - Easy to add your own PDR code via custom loaders
  • Well-documented - Comprehensive examples and API reference

📦 Installation

# From PyPI (when published)
pip install pdr-benchmark

# From source
git clone https://github.com/YOUR_USERNAME/pdr-benchmark.git
cd pdr-benchmark
pip install -e .

🚀 Quick Start

Load benchmark data

from pdr_benchmark import load_benchmark_data, load_benchmark_ensemble

# Load specific code for a model
kosma_f1 = load_benchmark_data(model='F1', code='KOSMA')

# Load all codes for ensemble comparison
ensemble_f1 = load_benchmark_ensemble(model='F1')

Compare your code against benchmark

from pdr_benchmark import compare_to_benchmark, plot_comparison

# Load your PDR code output (example with KOSMA-tau)
from pdr_benchmark.loaders.kosma_tau import load_kosma_tau_output
my_output = load_kosma_tau_output('path/to/output')

# Compare against benchmark ensemble
metrics = compare_to_benchmark(my_output, ensemble_f1, 'densities.H2')
print(f"Mean deviation: {metrics.mean_deviation:.1%}")
print(f"Within ensemble spread: {metrics.within_spread:.1%}")

# Plot comparison
fig = plot_comparison(my_output, ensemble_f1, 'densities.H2', model='F1')
fig.savefig('h2_comparison.png')

Compute ensemble statistics

from pdr_benchmark import compute_ensemble_statistics
import numpy as np

# Define common grid
av_grid = np.logspace(-4, 1.5, 200)

# Compute ensemble median and spread
stats = compute_ensemble_statistics(
    ensemble_f1,
    'densities.H2',
    av_grid
)

# Access results
median = stats.median
p16 = stats.p16  # 16th percentile
p84 = stats.p84  # 84th percentile
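
Conceptually, the ensemble statistics reduce to interpolating each code's profile onto the common A_V grid and taking percentiles across codes, in log space since PDR densities span many decades. A minimal NumPy sketch with made-up data (shapes and names here are illustrative, not the package's internals):

```python
import numpy as np

# Illustrative only: 8 codes, each already interpolated onto a
# common 200-point A_V grid.
rng = np.random.default_rng(0)
profiles = 10.0 ** rng.normal(2.0, 0.3, size=(8, 200))  # [code, grid point]

# Take percentiles in log space, since densities span many decades.
log_p = np.log10(profiles)
median = 10.0 ** np.median(log_p, axis=0)
p16 = 10.0 ** np.percentile(log_p, 16, axis=0)
p84 = 10.0 ** np.percentile(log_p, 84, axis=0)
```

By construction p16 ≤ median ≤ p84 at every grid point, which is what the shaded "ensemble spread" bands in the plots represent.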

📊 Benchmark Models

The benchmark consists of 8 models testing different physical conditions:

Fixed Temperature Models (F1-F4)

| Model | Density [cm⁻³] | χ (ISRF) | T_gas [K] | T_dust [K] |
|-------|----------------|----------|-----------|------------|
| F1    | 3.16 × 10²     | 10       | 50        | 20         |
| F2    | 3.16 × 10²     | 10⁵      | 50        | 20         |
| F3    | 3.16 × 10⁵     | 10       | 50        | 20         |
| F4    | 3.16 × 10⁵     | 10⁵      | 50        | 20         |

Variable Temperature Models (V1-V4)

| Model | Density [cm⁻³] | χ (ISRF) | Temperature |
|-------|----------------|----------|-------------|
| V1    | 3.16 × 10²     | 10       | Calculated  |
| V2    | 3.16 × 10²     | 10⁵      | Calculated  |
| V3    | 3.16 × 10⁵     | 10       | Calculated  |
| V4    | 3.16 × 10⁵     | 10⁵      | Calculated  |

All models use standardized chemistry, dust properties, and radiation fields (see docs/benchmark_models.md).
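
For batch work, the eight model identifiers can be generated programmatically rather than typed out; a small sketch (the commented batch-load loop assumes the `load_benchmark_ensemble` API shown above and requires the package to be installed):

```python
# Model identifiers follow the F/V naming of Röllig et al. (2007):
# F = fixed temperature, V = variable temperature, 1-4 = parameter set.
models = [f"{family}{i}" for family in ("F", "V") for i in range(1, 5)]
# models == ['F1', 'F2', 'F3', 'F4', 'V1', 'V2', 'V3', 'V4']

# Hypothetical batch load of every benchmark ensemble:
# from pdr_benchmark import load_benchmark_ensemble
# ensembles = {m: load_benchmark_ensemble(model=m) for m in models}
```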


🔬 Available Data

Participating Codes (8)

  • Bensch (Bensch et al.)
  • CLOUDY (Ferland et al.)
  • HTBKW (Hollenbach, Tielens, Burton, Kaufman, Wolfire)
  • KOSMA-τ (Röllig et al.)
  • Leiden (Leiden PDR code)
  • Meudon (Le Petit et al.)
  • Sternberg (Sternberg & Dalgarno)
  • UCL_PDR (Bell et al.)

Quantities Available

  • Densities - H, H₂, C⁺, C, CO, CH, OH, O, etc.
  • Column densities - Integrated abundances
  • Temperatures - Gas and dust (V models only)
  • Photorates - H₂ dissociation, CO dissociation, C ionization
  • Heating rates - Photoelectric, cosmic ray, chemical, etc.
  • Cooling rates - [CII], [OI], CO rotation, H₂ vibration, etc.
  • Surface brightness - [CII] 158μm, [OI] 63/146μm, [CI] 370/610μm
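
Quantity names throughout the API use a dotted path: the attribute group on PDRModelOutput, then the key within it (e.g. 'densities.H2', 'heating_rates.photoelectric'). A toy resolver in that spirit — the Output class and get_quantity helper below are illustrative stand-ins, not the package's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Output:
    """Minimal stand-in for PDRModelOutput (illustration only)."""
    densities: dict = field(default_factory=dict)
    heating_rates: dict = field(default_factory=dict)

def get_quantity(output, path):
    """Resolve a dotted name like 'densities.H2' to its profile."""
    attr, _, key = path.partition(".")
    group = getattr(output, attr)       # e.g. the densities dict
    return group[key] if key else group # e.g. the H2 entry

out = Output(densities={"H2": [1e2, 1e3], "CO": [1e-4, 1e-2]})
h2 = get_quantity(out, "densities.H2")
```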

🎨 Visualization Examples

Density comparison across codes

from pdr_benchmark import plot_quantity_comparison

# Compare H2 density across all codes for model F1
fig = plot_quantity_comparison(
    my_output,
    ensemble_f1,
    'densities.H2',
    model_name='F1',
    highlight_code_name='KOSMA-tau'
)

Multi-panel comparison

from pdr_benchmark import plot_multi_quantity_comparison

# Compare multiple species at once
quantities = ['densities.H2', 'densities.CO',
              'densities.C+', 'densities.C']
fig = plot_multi_quantity_comparison(
    my_output,
    ensemble_f1,
    quantities,
    model_name='F1',
    layout=(2, 2)
)

🛠️ Adding Your Own PDR Code

To compare your PDR code against the benchmark, write a loader function:

from pdr_benchmark import PDRModelOutput
import numpy as np

def load_my_code_output(output_file):
    """Load your code's output into standard format."""

    # Read your code's output (format-specific)
    data = read_my_format(output_file)

    # Map to standard PDRModelOutput
    return PDRModelOutput(
        depth=data['depth'],
        av=data['av'],
        n_points=len(data['depth']),
        densities={
            'H2': data['n_H2'],
            'CO': data['n_CO'],
            # ... map all your species
        },
        gas_temperature=data['T_gas'],
        heating_rates={
            'photoelectric': data['PE_heating'],
            # ... map your heating processes
        },
        model_name="MyCode",
        code_version="1.0"
    )

See docs/how_to_add_code.md for detailed instructions.


📚 Documentation


📁 Data Files

Processed Data (JSON)

Complete extracted data for all codes and models:

  • benchmark_data/json/F1/ through benchmark_data/json/V4/
  • 83 JSON files with standardized structure
  • Includes all quantities (densities, temperatures, heating/cooling, etc.)
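
The JSON files can also be read without the package. The helper below is a hypothetical sketch: it assumes one JSON file per code inside each model directory, which should be checked against benchmark_data/README.md:

```python
import json
from pathlib import Path

def load_model_json(model, base="benchmark_data/json"):
    """Read every per-code JSON file for one benchmark model.

    Assumed layout (verify against benchmark_data/README.md):
    one JSON file per code under <base>/<model>/, keyed here
    by the file's stem.
    """
    results = {}
    for path in sorted(Path(base, model).glob("*.json")):
        results[path.stem] = json.loads(path.read_text())
    return results
```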

Raw Data

Original data files from benchmark website:

Mathematica Package

Original benchmark data package:

  • benchmark_data/mathematica/PDRBenchmarkTools.m (13 MB)
  • Complete dataset used in Röllig et al. (2007)

See benchmark_data/README.md for details.


🧪 Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=pdr_benchmark --cov-report=html

🤝 Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Submit a pull request

See CONTRIBUTING.md for guidelines.


📖 Citation

If you use this package in your research, please cite:

Benchmark paper:

@ARTICLE{2007A&A...467..187R,
       author = {{R{\"o}llig}, M. and {Abel}, N.~P. and {Bell}, T. and {Bensch}, F. and {Black}, J. and {Ferland}, G.~J. and {Jonkheid}, B. and {Kamp}, I. and {Kaufman}, M.~J. and {Le Bourlot}, J. and {Le Petit}, F. and {Meijerink}, R. and {Morata}, O. and {Ossenkopf}, V. and {Roueff}, E. and {Shaw}, G. and {Spaans}, M. and {Sternberg}, A. and {Stutzki}, J. and {Tielens}, A.~G.~G.~M. and {van Dishoeck}, E.~F. and {van der Werf}, P.~P. and {Wyrowski}, F.},
        title = "{A photon dominated region code comparison study}",
      journal = {Astronomy \& Astrophysics},
     keywords = {astrochemistry, methods: numerical, ISM: clouds, infrared: ISM, radio lines: ISM, Astrophysics},
         year = 2007,
        month = may,
       volume = {467},
       number = {1},
        pages = {187-206},
          doi = {10.1051/0004-6361:20065918},
archivePrefix = {arXiv},
       eprint = {astro-ph/0702230},
 primaryClass = {astro-ph},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2007A&A...467..187R},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

This package:

@software{pdr_benchmark,
  author = {[Your Name]},
  title = {pdr-benchmark: Tools for PDR Benchmark Data},
  year = {2026},
  url = {https://github.com/YOUR_USERNAME/pdr-benchmark}
}

📄 License

MIT License - see LICENSE for details.


🔗 Links


🙏 Acknowledgments

This package builds on the community effort of the PDR benchmark comparison project:

  • M. Röllig et al. (2007) for organizing the benchmark
  • All participating code teams for providing their data
  • The PDR community for continued support

Questions or issues? Open an issue on GitHub or contact [your contact info]
