Merge pull request #215 from timcallow/dev_tests
Introduce development tests to benchmark performance
timcallow authored Nov 8, 2023
2 parents 4c6e084 + 9536fa8 commit d7e78fc
Showing 7 changed files with 671 additions and 0 deletions.
28 changes: 28 additions & 0 deletions tests/dev_tests/README.md
@@ -0,0 +1,28 @@
# Development test suite

## Overview

This directory contains the necessary code and data to enable the generation and execution of development tests for the atoMEC project. These tests are designed to evaluate the _performance_ of the code, with a focus on the `CalcEnergy` function, its related components, and behavior under extreme edge cases. They are distinct from the CI tests, which are designed to check the _correctness_ of the code across the full codebase. They are not mandatory, but they are recommended for developers making significant changes to performance-critical parts of the code, especially when modifications affect the execution time observed in the CI tests.

## Development testing tools

The development tests themselves are not directly included. Instead, the repository provides the necessary tools to generate and run these tests:

- `benchmarking.py`: The core module containing functions to set up the benchmarking environment
- `pressure_benchmarks.csv`: The dataset containing parameters for generating test cases
- `test.py`: The template for creating individual test scripts (a minimal illustration follows this list)
- `submit.slurm`: A sample SLURM submission script for use on HPC systems
- `run_benchmark_tests.py`: A script that demonstrates how to run the entire testing workflow using the provided tools
- `comp_benchmark_tests.py`: A script that compares the results from two CSV files generated by `run_benchmark_tests.py`
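
To make the role of the `test.py` template concrete, the sketch below times a single `CalcEnergy` call. It is illustrative only: the atom and model parameters are placeholders, and the call pattern follows atoMEC's documented interface rather than the actual contents of `test.py`.

```python
# Illustrative sketch only -- the real test.py template may differ, and the
# atom, model, and CalcEnergy parameters below are placeholder assumptions.
import time

from atoMEC import Atom, models

atom = Atom("Al", 0.05, radius=3.0)  # species, temperature and radius in atomic units (assumed)
model = models.ISModel(atom)         # default model settings, purely for illustration

start = time.perf_counter()
output = model.CalcEnergy(4, 4)      # nmax, lmax: this is the call being benchmarked
elapsed = time.perf_counter() - start

print(f"CalcEnergy wall time: {elapsed:.2f} s")
```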

## Environment assumption

The testing workflow currently assumes that atoMEC is installed and run within a Conda virtual environment.

## Execution instructions

The full testing workflow can be run on a SLURM-based HPC system with the `run_benchmark_tests.py` script. The script must first be run in "setup_and_run" mode, which sets up the calculations and submits them to the SLURM queue (these steps can also be run separately if preferred). It should then be run in "evaluate" mode to collect and summarize the results.
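
As a rough illustration of the two stages, and assuming the mode is passed as a positional command-line argument (the actual interface of `run_benchmark_tests.py` may differ), the workflow could be driven as follows:

```python
# Hypothetical driver -- the exact CLI of run_benchmark_tests.py is not shown
# in this README, so the positional "mode" argument is an assumption.
import subprocess

# 1. Set up the calculations and submit them to the SLURM queue.
subprocess.run(["python", "run_benchmark_tests.py", "setup_and_run"], check=True)

# 2. Once the SLURM jobs have finished, collect and summarize the results.
subprocess.run(["python", "run_benchmark_tests.py", "evaluate"], check=True)
```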

## Evaluation and benchmarking protocol

Benchmarking should be conducted against the results from the most recent iteration of the development branch. This means that *two* testing workflows should be set up: one for the branch being submitted as a PR, and one for atoMEC's development branch. After generating the results, performance can be compared by running the `comp_benchmark_tests.py` script. The most important benchmark is the "Average time % difference": the average of the row-by-row percentage differences in execution time between the two sets of results.
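
For intuition, the headline metric amounts to the following calculation. This is a minimal sketch assuming both CSV files share the same row order and a timing column; the file and column names are assumptions, not taken from `comp_benchmark_tests.py`.

```python
# Minimal sketch of the "Average time % difference" metric. File and column
# names are assumptions; comp_benchmark_tests.py may organize its data differently.
import pandas as pd

dev = pd.read_csv("results_develop.csv")   # baseline: atoMEC development branch
pr = pd.read_csv("results_pr_branch.csv")  # branch being submitted as a PR

pct_diff = 100.0 * (pr["time"] - dev["time"]) / dev["time"]
print(f"Average time % difference: {pct_diff.mean():.2f}%")
```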