test(yolo-tracker): Define CPU vs GPU test strategy and markers #50

@rogermt

Description

Problem

After the DRY refactor and the removal of team_classification, we need a clear CPU/GPU test separation:

  1. CPU-only tests should NOT require models

    • Fast tests (<1s)
    • No YOLO model loading
    • Mock dependencies
    • Run by default in CI
  2. GPU tests require actual models

    • Slow tests (10-30s)
    • Require RUN_MODEL_TESTS=1
    • Require downloaded model files
    • Run only on GPU environments (Kaggle/Colab)
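A CPU-only test in the first category can exercise tracker logic against a mocked model, so no YOLO weights are ever loaded. A minimal sketch (the `count_detections` helper and its API are hypothetical, not from the repo):

```python
from unittest.mock import MagicMock

# Hypothetical helper standing in for tracker code that calls a model;
# the real function name and signature are assumptions.
def count_detections(model, frame="frame.jpg"):
    results = model.predict(frame)
    return len(results)

def test_count_detections_without_loading_yolo():
    # CPU-only: the model is a MagicMock, so no weights are loaded
    # and the test runs in milliseconds.
    mock_model = MagicMock()
    mock_model.predict.return_value = ["person", "ball"]
    assert count_detections(mock_model) == 2
```

Tests shaped like this need no marker or skip condition and can run by default in CI.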

Current Issues

  • Test markers inconsistent across files
  • Some tests incorrectly marked as GPU-only
  • MODEL_PATH exports break when models don't exist (CPU environment)
  • test_models_directory_structure.py fails on CPU (expects actual models)

Solution

Test Configuration Pattern

All GPU tests must follow this pattern:

```python
import os
import pytest
from tests.constants import RUN_MODEL_TESTS

@pytest.mark.skipif(
    not RUN_MODEL_TESTS,
    reason="Set RUN_MODEL_TESTS=1 to run (requires GPU/models)"
)
def test_requires_gpu():
    pass
```
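The `RUN_MODEL_TESTS` flag imported above could be defined once in `tests/constants.py` as a plain boolean read from the environment, so every test file shares one gate. A sketch (the exact definition in the repo may differ):

```python
import os

# Sketch of tests/constants.py: one boolean gate for all GPU tests.
# Any value other than "" or "0" (e.g. RUN_MODEL_TESTS=1) enables
# the model-backed suite; unset means CPU-only.
RUN_MODEL_TESTS = os.environ.get("RUN_MODEL_TESTS", "0") not in ("", "0")
```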

Files to Fix

  • test_models_directory_structure.py - MODEL_PATH imports fail on CPU
  • test_video_model_paths.py - Same issue
  • test_plugin.py - analyze() needs GPU models
  • test_plugin_all_detections.py - Same
  • Inference modules - Don't export MODEL_PATH (causes import errors on CPU)
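One way to stop module-level `MODEL_PATH` exports from breaking CPU imports is to resolve the path lazily, inside a function that is only called by GPU tests. A sketch under stated assumptions (the function name, default file name, and `models/` layout are all hypothetical):

```python
from pathlib import Path

# Sketch of a lazy resolver: nothing touches the filesystem at import
# time, so CPU-only environments can import inference modules safely.
# Only code guarded by RUN_MODEL_TESTS should ever call this.
def get_model_path(name: str = "yolov8n.pt") -> Path:
    path = Path("models") / name
    if not path.exists():
        raise FileNotFoundError(
            f"Model file {path} not found; run the download step first"
        )
    return path
```

With this shape, the failure moves from import time (which breaks test collection on CPU) to call time (which is skipped by the `skipif` marker).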

Definitions

CPU Tests: Run always

  • Unit tests with mocks
  • Config tests
  • Utils tests

GPU Tests: Run with RUN_MODEL_TESTS=1

  • Inference tests needing actual YOLO models
  • Plugin.analyze() tests
  • Model file size validation
