Integrate Test Suite Validation Before Release Builds in CI/CD Pipeline #22

@stphung

Description

Overview

The current CI/CD pipeline builds and releases all 4 platforms (Windows, Linux, Android, Web) without first running the comprehensive test suite. This risks shipping broken builds and skips the quality gates expected of a professional pipeline. We need to run test validation before any release artifacts are created.

Current State Analysis

Working CI/CD Pipeline

  • 4-Platform Builds: Windows, Linux, Android, Web (~4-5 minutes total)
  • Automatic Versioning: Patch increments + release creation
  • 100% Build Success Rate: All platforms build successfully
  • GitHub Pages Deployment: Automatic web deployment

Missing Quality Gates

  • No Test Validation: Builds proceed without running test suite
  • Risk of Broken Releases: Failed tests could indicate broken functionality
  • Missing Quality Assurance: No verification that code changes work correctly
  • Professional Standards Gap: Industry standard requires tests before releases

Current Test Infrastructure

From CLAUDE.md, the project has:

  • SCons Test System: scons test - Execute complete test suite
  • gdUnit4 Framework: Comprehensive testing with 135 test cases, 0 failures
  • 100% Success Rate: Clean test suite with functional behavior focus
  • Professional Architecture: Tests for core gameplay, business logic, system integration

Requirements

Core Integration

Pre-Build Test Validation

  • Add test job that runs before all export jobs
  • Use scons test to execute the complete test suite
  • Fail entire pipeline if any tests fail
  • Cache test results for faster subsequent jobs
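To make the caching bullet concrete: the cache can be keyed on a hash of the test sources, so an unchanged suite reuses prior results. A minimal sketch, assuming a `test/` directory of GDScript sources; the helper name and glob patterns are illustrative, not part of the existing build:

```python
import hashlib
from pathlib import Path

def test_cache_key(roots=("test",), patterns=("*.gd", "*.tscn")):
    """Derive a stable cache key from the contents of the test sources."""
    digest = hashlib.sha256()
    for root in roots:
        for pattern in patterns:
            # Sort so the key is independent of filesystem iteration order.
            for path in sorted(Path(root).rglob(pattern)):
                digest.update(path.as_posix().encode())
                digest.update(path.read_bytes())
    return digest.hexdigest()[:16]
```

The resulting key can feed a CI cache step: a cache hit means no test-relevant file changed since the last green run.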

Quality Gate Implementation

  • All export jobs depend on successful test completion
  • No releases created if tests fail
  • Clear failure messaging when tests block builds
  • Test results available as CI artifacts

Professional Workflow

  • Test → Build → Release pipeline sequence
  • Parallel export jobs after test validation
  • Comprehensive test reporting in CI
  • Integration with existing auto-versioning system

Advanced Features

Test Performance Optimization

  • Cache test dependencies and build artifacts
  • Parallel test execution where possible
  • Optimized SCons test configuration for CI
  • Fast-fail on critical test failures
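The fast-fail idea above can be sketched as ordering suites by priority and stopping at the first failure; the function and suite names below are hypothetical, standing in for however the SCons test runner groups its suites:

```python
def run_fail_fast(suites):
    """Run (name, runner) pairs in priority order; stop at the first failure.

    Each runner returns True on success. Returns (passed, failed_name) so CI
    can report which suite blocked the build.
    """
    for name, runner in suites:
        if not runner():
            return False, name  # fast-fail: later suites never execute
    return True, None
```

Putting the fastest, most critical suites first means a broken core is reported in seconds instead of after the full run.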

Comprehensive Reporting

  • Test results uploaded as CI artifacts
  • HTML test reports for detailed analysis
  • Test coverage metrics (if available)
  • Performance benchmarks for critical systems

Conditional Testing

  • Skip tests for documentation-only changes
  • Different test levels for different change types
  • Emergency override for critical hotfixes
  • Pull request test validation
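The docs-only skip could be decided by a small helper fed the changed-file list (e.g. from `git diff --name-only`); the suffix and directory lists below are assumptions to adjust for this repository:

```python
DOC_SUFFIXES = (".md", ".rst", ".txt")
DOC_DIRS = ("docs/",)

def is_docs_only(changed_files):
    """True if every changed path is documentation, so tests can be skipped."""
    def is_doc(path):
        return path.endswith(DOC_SUFFIXES) or path.startswith(DOC_DIRS)
    # An empty change set is treated as "run tests" to stay on the safe side.
    return bool(changed_files) and all(is_doc(f) for f in changed_files)
```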

Technical Implementation

New CI/CD Workflow Structure

Current Flow:

Push → [Build All Platforms in Parallel] → Auto-Version → Release

Proposed Flow:

Push → [Run Tests] → [Build All Platforms in Parallel] → Auto-Version → Release
       ↓ (if tests fail)
       ❌ Stop Pipeline
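The gating in the proposed flow can be modeled as a job-dependency graph and checked mechanically; the job names mirror the workflow below, while the checker itself is just an illustration of the invariant "no export runs unless test-suite succeeded":

```python
PIPELINE = {
    "test-suite": set(),
    "export-linux": {"test-suite"},
    "export-windows": {"test-suite"},
    "export-android": {"test-suite"},
    "export-web": {"test-suite"},
    "release": {"export-linux", "export-windows", "export-android", "export-web"},
}

def gated_by_tests(job, pipeline=PIPELINE):
    """True if `job` cannot run unless test-suite has succeeded."""
    deps = pipeline[job]
    # A job is gated if it depends on test-suite directly or transitively.
    return "test-suite" in deps or any(gated_by_tests(d, pipeline) for d in deps)
```

Note `release` is gated transitively: it never names `test-suite`, but every export it needs does.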

GitHub Actions Integration

New Test Job Configuration:

test-suite:
  name: Test Suite Validation
  runs-on: ubuntu-22.04
  container:
    image: barichello/godot-ci:4.4.1
  steps:
    - name: Checkout
      uses: actions/checkout@v4
    
    - name: Install SCons
      run: pip install scons
    
    - name: Setup Godot
      run: |
        mkdir -v -p ~/.local/share/godot/export_templates/
        mv /root/.local/share/godot/export_templates/${GODOT_VERSION}.stable ~/.local/share/godot/export_templates/${GODOT_VERSION}.stable
    
    - name: Install Dependencies
      run: |
        godot --path . --headless -s plug.gd install || true
    
    - name: Import Project Assets
      run: |
        godot --path . --headless --import --quit-after 1 || true
    
    - name: Run Complete Test Suite
      run: |
        scons test
    
    - name: Upload Test Results
      uses: actions/upload-artifact@v4
      if: always()
      with:
        name: test-results
        path: reports/

Updated Export Jobs:

export-linux:
  name: Linux Export
  needs: test-suite  # 👈 Dependency on test completion
  runs-on: ubuntu-22.04
  # ... existing configuration
  
export-windows:
  name: Windows Export
  needs: test-suite  # 👈 Dependency on test completion
  runs-on: ubuntu-22.04
  # ... existing configuration
  
export-android:
  name: Android Export
  needs: test-suite  # 👈 Dependency on test completion
  runs-on: ubuntu-22.04
  # ... existing configuration
  
export-web:
  name: Web Export
  needs: test-suite  # 👈 Dependency on test completion
  runs-on: ubuntu-22.04
  # ... existing configuration

SCons Integration

Optimized Test Configuration for CI:

# site_scons/ci_testing.py
import subprocess
import sys

def run_ci_tests(env):
    """Run the test suite headlessly and propagate the exit code to CI."""
    # Invokes the gdUnit4 command-line runner; adjust the suite path
    # ("test/") to wherever this project keeps its test cases.
    result = subprocess.run([
        "godot", "--path", ".", "--headless",
        "-s", "res://addons/gdUnit4/bin/GdUnitCmdTool.gd",
        "-a", "test/",
    ], check=False)
    if result.returncode != 0:
        sys.exit(result.returncode)  # fail the pipeline on any test failure
    return result.returncode

# SConstruct additions
if env.get('CI_MODE'):
    env.AddMethod(run_ci_tests, 'RunCITests')

Test Command Enhancement:

# Enhanced test execution with CI optimizations
scons test-ci  # Optimized for CI environment
scons test     # Full local testing (existing)

Implementation Strategy

Phase 1: Basic Test Integration (HIGH PRIORITY)

  • Add test job to GitHub Actions workflow
  • Configure all export jobs to depend on test success
  • Basic test failure handling and reporting
  • Verify SCons test system works in CI environment

Phase 2: Enhanced Reporting (MEDIUM PRIORITY)

  • Upload test results as artifacts
  • HTML test reports for detailed analysis
  • Integration with GitHub Actions status checks
  • Test performance optimization

Phase 3: Advanced Features (LOW PRIORITY)

  • Conditional testing based on change types
  • Test coverage reporting
  • Performance benchmarks
  • Emergency override mechanisms

Testing the Implementation

Validation Steps

  1. Create intentional test failure to verify pipeline stops
  2. Fix test and verify builds proceed normally
  3. Test with different commit types (docs, code, etc.)
  4. Verify auto-versioning still works after test integration
  5. Check release creation only happens after successful tests

Success Criteria

  • Zero False Releases: No releases created with failing tests
  • Fast Feedback: Test failures reported within 2-3 minutes
  • Maintained Performance: Total pipeline time increases by <2 minutes
  • Clear Reporting: Test status visible in GitHub Actions UI
  • Reliable Integration: 100% compatibility with existing auto-versioning

Risk Assessment & Mitigation

Potential Risks

  • Increased Pipeline Time: +1-3 minutes for test execution
  • False Failures: Flaky tests blocking valid releases
  • CI Resource Usage: Additional compute time for test jobs

Mitigation Strategies

  • Optimize Test Suite: Use fastest possible test configuration
  • Parallel Execution: Tests run while preparing build environment
  • Emergency Override: Manual trigger for critical hotfixes
  • Test Stability: Focus on reliable, non-flaky tests
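For the flaky-test risk above, a bounded retry wrapper is one common mitigation; a sketch, where `run` stands for any callable that executes a suite and returns True on success:

```python
def run_with_retries(run, attempts=2):
    """Re-run a flaky suite a bounded number of times before declaring failure.

    Retries mask transient flakes without letting a genuinely broken build
    through: after `attempts` consecutive failures the pipeline still stops.
    """
    for _ in range(attempts):
        if run():
            return True
    return False
```

Retries should stay a stopgap; the real fix named in the bullet list is making the tests reliably deterministic.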

Benefits

Quality Assurance

  • Professional Standards: Industry-standard test-before-release workflow
  • Bug Prevention: Catch regressions before they reach users
  • Confidence: Guaranteed working releases
  • Documentation: Test results provide build quality evidence

Development Workflow

  • Fast Feedback: Quick notification of test failures
  • Automated Quality: No manual test running required
  • Integration: Seamless with existing development practices
  • Reliability: Consistent quality across all releases

Priority: High (professional quality gates)
Complexity: Medium (CI/CD workflow modification)
Impact: Prevents broken releases, ensures quality standards

Dependencies: Working SCons test system, gdUnit4 framework, existing CI/CD pipeline

Success Metric: Zero releases with failing tests, maintained development velocity
