IPL-UV/amlec_challenge

Atmospheric Radiative Transfer Emulation Challenge (AMLEC)

Reference paper: "Evaluating Machine Learning Emulators for Atmospheric Radiative Transfer: The AMLEC Challenge."
Authors: Jorge Vicent, Jasdeep Singh, Axel Rochel, Julio Contreras, Panagiotis Liatsis, Hasan Al Marzouqi, and Gustau Camps-Valls.

Overview & abstract

The Atmospheric Radiative Transfer Emulation Challenge (AMLEC), organized within the EU ELIAS project, provided a benchmark for evaluating machine learning approaches to emulating atmospheric radiative transfer models.

Participants were tasked with predicting spectral data across two scenarios involving different input variables and spectral configurations:

  1. Scenario A: Atmospheric correction of hyperspectral satellite data.
  2. Scenario B: CO2 concentration retrieval.

Several training datasets were provided, covering realistic input ranges with 500 to 10,000 samples. Testing covered both interpolation and extrapolation to out-of-range conditions. Eight models were submitted, spanning neural networks and Gaussian processes in various configurations. Gaussian process approaches achieved the lowest errors, indicating their suitability for this small-data regime and highlighting the difficulty of training complex neural networks with scarce data.

This repository serves as the permanent archive for the challenge data, the submission evaluation code, and the final benchmark results.


Benchmark results

| Model              | MRE A1 (%) | MRE A2 (%) | MRE B1 (%) | MRE B2 (%) | Score | Runtime |
|--------------------|-----------:|-----------:|-----------:|-----------:|------:|--------:|
| Jasdeep_Emulator_3 | 0.090      | 3.117      | 0.566      | 6.108      | 1.525 | 89.359  |
| Hugo2              | 0.144      | 2.868      | 0.610      | 5.033      | 2.300 | 5.382   |
| rpnn1              | 0.133      | 5.883      | 0.583      | 5.561      | 2.525 | 19.082  |
| rpgprv2            | 0.176      | 3.835      | 0.640      | 7.050      | 4.000 | 35.650  |
| Jasdeep_Emulator_2 | 0.886      | 3.895      | 0.768      | 6.176      | 5.625 | 2.078   |
| Krtek              | 0.545      | 7.693      | 0.823      | 7.877      | 6.500 | 0.764   |
| rpcvae             | 0.185      | 11.996     | 0.918      | 15.313     | 6.700 | 0.546   |
| Jobaman1           | 0.296      | 10.093     | 23.258     | 7.675      | 6.150 |         |
| baseline           | 0.998      | 12.604     | 1.084      | 7.072      | 8.150 | 0.241   |

Introduction

Atmospheric Radiative Transfer Models (RTMs) are crucial in Earth and climate sciences, with applications such as synthetic scene generation, satellite data processing, and numerical weather forecasting. However, their increasing complexity results in a computational burden that limits their direct use in operational settings.

RTM emulation is challenging due to the high-dimensional nature of both the input (~10 dimensions) and output (several thousand dimensions) spaces, and the complex interactions of electromagnetic radiation with the atmosphere. This challenge contributes to reducing computational burdens in climate and atmospheric research, enabling faster satellite data processing and improved accuracy in atmospheric correction.

Challenge tasks and data

Proposed experiments

  1. Atmospheric correction (A): Focuses on reproducing key atmospheric transfer functions (path radiance, direct/diffuse solar irradiance, transmittance) for hyperspectral data (400-2500 nm).
  2. CO2 column retrieval (B): Focuses on predicting top-of-atmosphere radiance, particularly within the spectral range sensitive to CO2 absorption (2000-2100 nm).

Each scenario-track combination is identified by Sn, where S={A,B} and n={1,2} (1=Interpolation, 2=Extrapolation).
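For illustration, the four scenario-track identifiers can be enumerated as below; the descriptive labels are just for readability and are not part of any official naming scheme:

```python
# Scenario letters and track numbers, as defined in the challenge.
scenarios = {"A": "atmospheric correction", "B": "CO2 column retrieval"}
tracks = {1: "interpolation", 2: "extrapolation"}

# Yields the four sub-tracks used in the benchmark table and score.
subtracks = [f"{s}{n}" for s in scenarios for n in tracks]
print(subtracks)  # ['A1', 'A2', 'B1', 'B2']
```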

Data format

Training data is stored in HDF5 format, with outputs in the LUTdata dataset and inputs in the LUTHeader dataset. Testing input datasets are provided as .csv files.

Example loading in Python:

```python
import h5py
import pandas as pd

# Load training data
with h5py.File('train2000.h5', 'r') as h5:
    Ytrain = h5['LUTdata'][:]    # outputs (spectral data)
    Xtrain = h5['LUTHeader'][:]  # inputs (atmospheric/geometric variables)
    wvl = h5['wvl'][:]           # wavelength grid

# Load testing inputs
Xtest = pd.read_csv('refInterp.csv').to_numpy()
```

Data is available in the repository files.
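To illustrate how a submission might consume arrays with these shapes, here is a minimal per-wavelength linear least-squares emulator. It is only a sketch: it uses synthetic stand-ins (the sample counts, ~10 input dimensions, and the toy linear "RTM" are assumptions), not the actual challenge files, and is far from a competitive entry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins with the shapes described above:
# ~10 input dimensions, a few thousand output wavelengths.
n_train, n_in, n_wvl = 500, 10, 2000
Xtrain = rng.uniform(size=(n_train, n_in))
Ytrain = Xtrain @ rng.normal(size=(n_in, n_wvl))  # toy linear "RTM"

# Fit one linear model per output wavelength in a single lstsq call.
X1 = np.hstack([Xtrain, np.ones((n_train, 1))])   # append a bias column
W, *_ = np.linalg.lstsq(X1, Ytrain, rcond=None)   # W: (n_in + 1, n_wvl)

# Predict spectra for new inputs.
Xtest = rng.uniform(size=(5, n_in))
Ypred = np.hstack([Xtest, np.ones((5, 1))]) @ W
print(Ypred.shape)  # (5, 2000)
```

Real submissions replaced this linear map with neural networks or Gaussian processes, but the input/output contract (inputs in, full spectra out) is the same.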

Evaluation methodology

Prediction accuracy

  • Scenario A: Mean Relative Error (MRE) of retrieved surface reflectance.
  • Scenario B: MRE of predicted TOA radiance.
  • $MRE_\lambda$ excludes deep water vapor absorption bands.

Final Score

The final ranking is a weighted average of the ranks in the four sub-tracks:

$$Score = 0.325 \cdot Rank_{A1} + 0.175 \cdot Rank_{A2} + 0.325 \cdot Rank_{B1} + 0.175 \cdot Rank_{B2}$$
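The score follows directly from the formula above; the rank values in this sketch are hypothetical, chosen only to show the computation:

```python
# Weights from the score formula: interpolation tracks (A1, B1) count
# almost twice as much as extrapolation tracks (A2, B2).
weights = {"A1": 0.325, "A2": 0.175, "B1": 0.325, "B2": 0.175}

# Hypothetical per-track ranks for one model (lower is better).
ranks = {"A1": 1, "A2": 3, "B1": 2, "B2": 4}

score = sum(weights[t] * ranks[t] for t in weights)
print(score)  # 2.2
```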


Reproducibility

This repository contains the source code used to evaluate the challenge submissions.

Running the benchmark

  1. Clone this repository.
  2. Install dependencies: `pip install -r requirements.txt`.
  3. Ensure the reference data is available (see `config.py`).
  4. Run the evaluation script: `python main.py`.

Note: This requires appropriate Hugging Face credentials if you intend to sync results with the hub.


Citation

If you utilize this code, repository, or the provided datasets in your research, please cite the following publication:

Vicent, J., et al. "Evaluating Machine Learning Emulators for Atmospheric Radiative Transfer: The AMLEC Challenge." [Journal/Proceedings Name], 2025. (Submitted)
