Improve README.md
timcallow authored Nov 8, 2023
1 parent 5f7eb38 commit 9536fa8
Showing 1 changed file with 5 additions and 4 deletions: `tests/dev_tests/README.md`
@@ -13,15 +13,16 @@ The development tests themselves are not directly included. Instead, the reposit
- `test.py`: The template for creating individual test scripts
- `submit.slurm`: A sample SLURM submission script for use on HPC systems
- `run_benchmark_tests.py`: A script that demonstrates how to run the entire testing workflow using the provided tools
+- `comp_benchmark_tests.py`: A script that compares the results from two CSV files generated by `run_benchmark_tests.py`

## Environment assumption

The testing workflow currently assumes that atoMEC is operated within a Conda virtual environment.

-## Evaluation and benchmarking protocol
-
-Benchmarking should be conducted against the results from the most recent iteration of the development branch. This means that *two* testing workflows should be set-up, one for the branch being submitted as a PR, and one for atoMEC's development branch. Performance improvements could be justified using various statistical metrics.
-
## Execution Instructions

The full testing workflow can be run on a SLURM-based HPC system with the `run_benchmark_tests.py` script. The script must first be run in "setup_and_run" mode, which sets up the calculations and submits them to the SLURM system (these steps can also be run separately if preferred). It should then be run in "evaluate" mode to collect and summarize the results.
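
Purely as an illustration of this two-step sequence (the actual command-line interface of `run_benchmark_tests.py` is not shown on this page, so the mode arguments below are assumptions), a minimal driver might look like:

```python
# Hypothetical driver for the two-phase workflow; the real CLI of
# run_benchmark_tests.py may accept its mode argument differently.
import subprocess

# Phase 1: set up the calculations and submit them to the SLURM system.
subprocess.run(["python", "run_benchmark_tests.py", "setup_and_run"], check=True)

# Phase 2 (only after the SLURM jobs have finished):
# collect and summarize the results.
subprocess.run(["python", "run_benchmark_tests.py", "evaluate"], check=True)
```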

+## Evaluation and benchmarking protocol
+
+Benchmarking should be conducted against the results from the most recent iteration of the development branch. This means that *two* testing workflows should be set up: one for the branch being submitted as a PR and one for atoMEC's development branch. After generating the results, performance can be compared by running the `comp_benchmark_tests.py` script. The most important benchmark is considered to be the "Average time % difference", an average of the row-by-row percentage differences between the times taken.
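
As a rough sketch of that metric (not the actual implementation in `comp_benchmark_tests.py`), and assuming the two CSV files list the same test cases in the same order with a hypothetical `time` column, the average time % difference could be computed like this:

```python
# Illustrative only: assumes matching row order and a hypothetical "time"
# column; comp_benchmark_tests.py may use different column names and handling.
import csv

def average_time_pct_difference(reference_csv, candidate_csv, time_col="time"):
    """Mean of the row-by-row percentage difference in timings."""
    with open(reference_csv, newline="") as f_ref, open(candidate_csv, newline="") as f_new:
        ref_rows = list(csv.DictReader(f_ref))
        new_rows = list(csv.DictReader(f_new))
    pct_diffs = [
        100.0 * (float(new[time_col]) - float(ref[time_col])) / float(ref[time_col])
        for ref, new in zip(ref_rows, new_rows)
    ]
    return sum(pct_diffs) / len(pct_diffs)
```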
