[DO NOT MERGE] - Release v2.0.0 #2465

Open · wants to merge 40 commits into base: main

Commits (40)
660be42
Dataclasses and post-processing refactor (#2098)
djdameln Sep 2, 2024
2e6e18a
Merge main and resolve conflicts (#2287)
samet-akcay Sep 2, 2024
ec5a877
Rename Item to DatasetItem (#2289)
samet-akcay Sep 2, 2024
1500db6
πŸ“š Add docstrings to dataclasses (#2292)
samet-akcay Sep 6, 2024
627be88
Refactor and restructure anomalib.data (#2302)
samet-akcay Sep 11, 2024
8543e24
Restructure unit tests and fix ruff issues (#2306)
samet-akcay Sep 18, 2024
99b4e9d
Add dataclass validators (#2307)
samet-akcay Oct 2, 2024
06daad9
πŸš€ Customisable Image Visualizer (#2334)
samet-akcay Oct 9, 2024
95115f9
Merge main the feature branch. (#2376)
samet-akcay Oct 16, 2024
6e1d870
Metrics redesign (#2326)
djdameln Nov 7, 2024
dddf707
πŸš€ Add `PreProcessor` to `AnomalyModule` (#2358)
samet-akcay Nov 8, 2024
1471974
Update v2 with the recent changes on main (#2421)
djdameln Nov 15, 2024
c16f51e
Rename `AnomalyModule` to `AnomalibModule` (#2423)
samet-akcay Nov 22, 2024
2f3d616
πŸ”¨ Replace `imgaug` with Native PyTorch Transforms (#2436)
samet-akcay Nov 27, 2024
00b01b1
[V2]: Remove task type (#2450)
djdameln Dec 5, 2024
c73e411
πŸ—‘οΈ Remove RKDE (#2455)
ashwinvaidya17 Dec 6, 2024
8bd06a9
Multi-GPU fixes (#2435)
ashwinvaidya17 Dec 10, 2024
c235ca1
πŸ”¨ v2 - Refactor: Add missing auxiliary attributes to `AnomalibModule`…
samet-akcay Dec 11, 2024
7116fec
πŸ“šUpdate `CHANGELOG.md` file for v2.0.0 release (#2463)
samet-akcay Dec 11, 2024
244f50b
πŸš€ Create a new CI Pipeline (#2461)
samet-akcay Dec 11, 2024
324e181
Resolve conflicts
samet-akcay Dec 11, 2024
31ad999
Exclude tiled ensemble for now
samet-akcay Dec 11, 2024
4635158
Install the required pytest plugins
samet-akcay Dec 11, 2024
dda0421
Disable parallel execution for now
samet-akcay Dec 11, 2024
d4c8f82
Fix the status of the unit tests
samet-akcay Dec 12, 2024
b8455b4
Automatically set the device to run the unit/integration tests
samet-akcay Dec 12, 2024
40d27e9
Enhance the device management for unit/integration tests
samet-akcay Dec 12, 2024
a960a18
Add cpu and gpu markers to tests to be able to explicitly choose them
samet-akcay Dec 12, 2024
6e98863
Handle duration, and fix failed tests status
samet-akcay Dec 12, 2024
2d44435
Add gpu marker to the tests that require gpu device
samet-akcay Dec 12, 2024
1a7976f
Enhance the error message on tests
samet-akcay Dec 12, 2024
4e0ec91
pass pytest native marker to trigger cpu and gpu tests
samet-akcay Dec 12, 2024
1b62c34
Remove gpu marker
samet-akcay Dec 12, 2024
80007fe
Create a new caching pipeline
samet-akcay Dec 12, 2024
4b9f6fb
Add enable-cache as an input argument
samet-akcay Dec 12, 2024
f4646cd
Add enable-cache as an input argument
samet-akcay Dec 12, 2024
f245ace
pass cache none for self-hosted runner
samet-akcay Dec 12, 2024
461f6d7
fix the warning messages
samet-akcay Dec 12, 2024
9887c94
Map semgrep severity level keys
samet-akcay Dec 12, 2024
7251739
Add coverage args
samet-akcay Dec 12, 2024

Files changed

408 changes: 0 additions & 408 deletions .ci/ipas_default.config

This file was deleted.

11 changes: 0 additions & 11 deletions .ci/trivy.yaml

This file was deleted.

2 changes: 0 additions & 2 deletions .github/CODEOWNERS
@@ -20,7 +20,6 @@
/notebooks/200_models @samet-akcay
/notebooks/300_benchmarking @ashwinvaidya17
/notebooks/400_openvino @samet-akcay
/notebooks/500_use_cases @paularamo
/notebooks/README.md @samet-akcay

# Requirements
@@ -41,7 +40,6 @@
/src/anomalib/models/padim @samet-akcay
/src/anomalib/models/patchcore @djdameln
/src/anomalib/models/reverse_distillation @ashwinvaidya17
/src/anomalib/models/rkde @djdameln
/src/anomalib/models/stfpm @samet-akcay

/src/anomalib/post_processing @ashwinvaidya17 @djdameln
116 changes: 116 additions & 0 deletions .github/actions/code-quality/pre-commit/action.yaml
@@ -0,0 +1,116 @@
# Pre-commit Quality Action
#
# This composite action executes pre-commit hooks for code quality checks
# with configurable Python and Node.js environments.
#
# Key Features:
# - Pre-commit hook execution
# - Environment configuration
# - Cache management
# - Multi-language support
# - Dependency handling
#
# Process Stages:
# 1. Environment Setup:
# - Python installation
# - Node.js installation
# - Cache configuration
#
# 2. Dependency Management:
# - Pre-commit installation
# - Hook installation
# - Cache restoration
#
# 3. Quality Checks:
# - Hook execution
# - Error reporting
# - Result caching
#
# Required Inputs:
# - python-version: Python version to use
# - node-version: Node.js version to use (defaults to "20")
#
# Example Usage:
# steps:
# - uses: ./.github/actions/code-quality/pre-commit
# with:
# python-version: "3.11"
#
# Note: Requires configured pre-commit hooks in repository

name: "Pre-commit Quality Checks"
description: "Runs pre-commit hooks for code quality checks"

inputs:
  python-version:
    description: "Python version to use"
    required: false
    default: "3.10"
  node-version:
    description: "Node.js version to use"
    required: false
    default: "20"
  skip:
    description: "Comma-separated list of hooks to skip"
    required: false
    default: ""
  cache:
    description: "Whether to use caching"
    required: false
    default: "true"

outputs:
  cache-hit:
    description: "Whether the cache was hit"
    value: ${{ steps.pre-commit-cache.outputs.cache-hit }}

runs:
  using: composite
  steps:
    # Set up Python environment with caching
    - name: Set up Python
      uses: actions/setup-python@v5
      with:
        python-version: ${{ inputs.python-version }}
        cache: pip # Enable pip caching
        cache-dependency-path: .pre-commit-config.yaml

    # Set up Node.js for JavaScript-related hooks
    - name: Set up Node.js
      uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.node-version }}

    # Install pre-commit with latest pip
    - name: Install pre-commit
      shell: bash
      run: |
        python -m pip install --upgrade pip
        pip install pre-commit

    # Cache pre-commit hooks to speed up subsequent runs
    - name: Cache pre-commit hooks
      if: inputs.cache == 'true'
      id: pre-commit-cache
      uses: actions/cache@v3
      with:
        path: ~/.cache/pre-commit
        # Cache key includes Python and Node versions to ensure correct environment
        key: pre-commit-${{ runner.os }}-py${{ inputs.python-version }}-node${{ inputs.node-version }}-${{ hashFiles('.pre-commit-config.yaml') }}
        restore-keys: |
          pre-commit-${{ runner.os }}-py${{ inputs.python-version }}-node${{ inputs.node-version }}-
          pre-commit-${{ runner.os }}-py${{ inputs.python-version }}-

    # Execute pre-commit checks with optional hook skipping
    - name: Run pre-commit checks
      shell: bash
      env:
        SKIP: ${{ inputs.skip }}
      run: |
        if [ -n "$SKIP" ]; then
          # pre-commit reads the SKIP environment variable and skips the listed hook IDs
          echo "Skipping hooks: $SKIP"
        fi
        pre-commit run --all-files
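
For reference, a caller workflow could invoke the composite action above roughly as follows. This is a minimal sketch, not part of the diff: the workflow file name, trigger, and the skip value are illustrative assumptions; only the action path and its inputs come from the file above.

# .github/workflows/code-quality.yaml -- illustrative sketch, not part of this PR
name: Code Quality

on:
  pull_request:

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run pre-commit checks
        uses: ./.github/actions/code-quality/pre-commit
        with:
          python-version: "3.10"
          cache: "true"
          skip: "mypy" # hypothetical hook ID, shown only to illustrate the skip input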
217 changes: 217 additions & 0 deletions .github/actions/pytest/action.yaml
@@ -0,0 +1,217 @@
# Test Runner Action
#
# This composite action executes Python tests with pytest, providing
# comprehensive test execution and reporting capabilities.
#
# Key Features:
# - Multiple test type support
# - Parallel execution
# - Coverage reporting
# - Performance tracking
# - Result analysis
#
# Process Stages:
# 1. Environment Setup:
# - Python configuration
# - Virtual environment creation
# - Dependency installation
#
# 2. Test Execution:
# - Test scope determination
# - Parallel processing
# - Coverage tracking
# - Performance monitoring
#
# 3. Results Processing:
# - Coverage analysis
# - Performance reporting
# - Results aggregation
#
# Required Inputs:
# - python-version: Python version for tests
# - test-type: Type of tests to run
# - codecov-token: Token for coverage upload
# - max-test-time: Maximum test duration
# - device: Device to run tests on (cpu/gpu)
# - enable-cache: Enable pip caching
#
# Outputs:
# - coverage-percentage: Total coverage
# - tests-passed: Test success status
# - test-duration: Execution time
#
# Example Usage:
# steps:
# - uses: ./.github/actions/pytest
# with:
# python-version: "3.11"
# test-type: "unit"
# codecov-token: ${{ secrets.CODECOV_TOKEN }}
#
# Note: Requires proper pytest configuration in pyproject.toml

name: "Python Tests Runner"
description: "Runs Python unit and integration tests with pytest and uploads coverage to Codecov"

inputs:
  python-version:
    description: "Python version to use"
    required: false
    default: "3.10"
  test-type:
    description: "Type of tests to run (unit/integration/all)"
    required: false
    default: "all"
  codecov-token:
    description: "Codecov upload token"
    required: true
  max-test-time:
    description: "Maximum time in seconds for the test suite to run"
    required: false
    default: "3600"
  device:
    description: "Device to run tests on (cpu/gpu)"
    required: false
    default: "gpu"
  enable-cache:
    description: "Enable pip caching"
    required: false
    default: "true"

outputs:
  coverage-percentage:
    description: "Total coverage percentage"
    value: ${{ steps.coverage.outputs.percentage }}
  tests-passed:
    description: "Whether all tests passed"
    value: ${{ steps.test-execution.outputs.success }}
  test-duration:
    description: "Total test duration in seconds"
    value: ${{ steps.test-execution.outputs.duration }}

runs:
  using: composite
  steps:
    # Set up Python with pip caching
    - name: Set up Python environment
      uses: actions/setup-python@v5
      with:
        python-version: ${{ inputs.python-version }}
        cache: ${{ inputs.enable-cache == 'true' && 'pip' || '' }}
        cache-dependency-path: ${{ inputs.enable-cache == 'true' && 'pyproject.toml' || '' }}

    # Create and configure virtual environment
    - name: Configure virtual environment
      id: setup-venv
      shell: bash
      run: |
        # Create isolated test environment
        python -m venv .venv
        source .venv/bin/activate
        # Install dependencies with dev extras
        python -m pip install --upgrade pip
        pip install ".[dev]"
        pip install codecov

    # Determine which tests to run based on input
    - name: Determine test scope
      id: test-scope
      shell: bash
      run: |
        case "${{ inputs.test-type }}" in
          "unit")
            echo "path=tests/unit" >> $GITHUB_OUTPUT
            ;;
          "integration")
            echo "path=tests/integration" >> $GITHUB_OUTPUT
            ;;
          *)
            # Run both test types if not specified
            echo "path=tests/unit tests/integration" >> $GITHUB_OUTPUT
            ;;
        esac

    - name: Execute test suite
      id: test-execution
      shell: bash
      continue-on-error: true
      run: |
        source .venv/bin/activate
        start_time=$(date +%s)

        # Set device-specific pytest arguments
        if [ "${{ inputs.device }}" = "cpu" ]; then
          marker="-m cpu" # Only run CPU-marked tests
        else
          marker="" # Run all tests (both CPU- and GPU-marked tests)
        fi

        # Run pytest; tee the output to pytest_output.log so the summary and
        # artifact-upload steps below can read it. Disable errexit around the
        # pipeline so a test failure is handled here instead of aborting the step.
        set +e
        PYTHONPATH=src pytest ${{ steps.test-scope.outputs.path }} \
          --numprocesses=0 \
          --durations=10 \
          --durations-min=1.0 \
          --timeout=${{ inputs.max-test-time }} \
          --verbosity=1 \
          --cov=src \
          --cov-report=xml \
          --cov-report=term-missing \
          ${marker} 2>&1 | tee pytest_output.log

        test_exit_code=${PIPESTATUS[0]}
        set -e

        # Calculate and store duration
        end_time=$(date +%s)
        duration=$((end_time - start_time))
        echo "duration=$duration" >> $GITHUB_OUTPUT
        echo "success=$([[ $test_exit_code == 0 ]] && echo true || echo false)" >> $GITHUB_OUTPUT

        # Print the short test summary if the suite failed
        if [ $test_exit_code -ne 0 ]; then
          echo "::error::Tests failed. See summary below:"
          echo "----------------------------------------"
          # Extract the summary section from pytest output
          sed -n '/=* short test summary info =*/,$p' pytest_output.log || true
          echo "----------------------------------------"
          echo "Full test output saved to artifacts"
        fi

        exit $test_exit_code

    - name: Upload test results
      if: always() && steps.test-execution.outcome == 'failure'
      uses: actions/upload-artifact@v3
      with:
        name: pytest-results-${{ inputs.test-type }}
        path: pytest_output.log
        retention-days: 7

    - name: Check test results
      if: always() && steps.test-execution.outcome == 'failure'
      shell: bash
      run: exit 1

    - name: Check test duration
      if: always()
      shell: bash
      run: |
        duration="${{ steps.test-execution.outputs.duration }}"
        if [ -n "$duration" ]; then
          echo "Test Duration: $duration seconds"

          if [ "$duration" -gt "${{ inputs.max-test-time }}" ]; then
            echo "::warning::Test suite exceeded recommended duration of ${{ inputs.max-test-time }} seconds"
          fi
        else
          echo "Test Duration: Not available"
        fi

    - name: Upload coverage to Codecov
      if: success()
      shell: bash
      run: |
        source .venv/bin/activate
        codecov --token "${{ inputs.codecov-token }}" \
          --file coverage.xml \
          --flags "${{ inputs.test-type }}_py${{ inputs.python-version }}" \
          --name "${{ inputs.test-type }} tests (Python ${{ inputs.python-version }})"
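
For reference, a caller workflow could wire up the pytest action above roughly as follows; the device: "cpu" input relies on the cpu/gpu pytest markers added in the commits listed earlier. This is a minimal sketch with an assumed workflow name, trigger, and runner label; the CODECOV_TOKEN secret name follows the example in the action's own header comment.

# .github/workflows/tests.yaml -- illustrative sketch, not part of this PR
name: Tests

on:
  pull_request:

jobs:
  unit-tests:
    runs-on: ubuntu-latest # a self-hosted GPU runner would pair with device: "gpu"
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests on CPU
        uses: ./.github/actions/pytest
        with:
          python-version: "3.10"
          test-type: "unit"
          device: "cpu" # run only tests carrying the cpu marker
          codecov-token: ${{ secrets.CODECOV_TOKEN }}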