
Conversation

@bxf12315 (Contributor) commented Dec 24, 2025

Summary by Sourcery

Introduce an admin API to prune SBOMs based on ingestion date with optional dry-run support and concurrent batched deletion, expose it via OpenAPI, and cover it with tests.

New Features:

  • Add an admin /api/v2/admin/sbom/prune endpoint to prune SBOMs older than a specified ingestion age with configurable batching and concurrency and optional dry-run mode.
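The endpoint's options arrive as query parameters and are modeled by a `PruneQuery` struct with serde aliases for the kebab-case names. As a std-only sketch of that idea (the parser itself is illustrative; only the field names come from the PR):

```rust
// Hypothetical sketch: parse kebab-case query parameters into a prune
// configuration mirroring the PruneQuery struct described in this PR.
// The hand-rolled parser stands in for actix-web's Query extractor.
#[derive(Debug, PartialEq)]
struct PruneQuery {
    ingested: u32,         // prune SBOMs ingested more than this many days ago
    dry_run: bool,         // report what would be deleted without deleting
    batch_size: u64,       // SBOMs processed per batch
    max_concurrent: usize, // concurrent delete operations
}

fn parse_prune_query(qs: &str) -> PruneQuery {
    // Illustrative defaults; the PR's actual defaults may differ.
    let mut q = PruneQuery { ingested: 0, dry_run: false, batch_size: 100, max_concurrent: 4 };
    for pair in qs.split('&') {
        let mut it = pair.splitn(2, '=');
        let (k, v) = (it.next().unwrap_or(""), it.next().unwrap_or(""));
        // Accept both snake_case and kebab-case, like the serde aliases in the PR.
        match k.replace('-', "_").as_str() {
            "ingested" => q.ingested = v.parse().unwrap_or(q.ingested),
            "dry_run" => q.dry_run = v == "true",
            "batch_size" => q.batch_size = v.parse().unwrap_or(q.batch_size),
            "max_concurrent" => q.max_concurrent = v.parse().unwrap_or(q.max_concurrent),
            _ => {}
        }
    }
    q
}

fn main() {
    let q = parse_prune_query("ingested=90&dry-run=true&batch-size=10&max-concurrent=5");
    assert_eq!(q.ingested, 90);
    assert!(q.dry_run);
    assert_eq!(q.batch_size, 10);
    assert_eq!(q.max_concurrent, 5);
    println!("{:?}", q);
}
```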

Enhancements:

  • Wire the new admin module into the fundamental service configuration and make the SbomService clonable to support the new endpoint.

Tests:

  • Add integration tests for the SBOM prune admin endpoint covering dry-run, actual deletion, and no-match scenarios.

@sourcery-ai bot (Contributor) commented Dec 24, 2025

Reviewer's Guide

Adds an admin SBOM pruning API, including OpenAPI schema, implementation, wiring into the fundamental service configuration, and integration tests for dry-run and deletion behaviors.

Sequence diagram for admin SBOM prune endpoint behavior

sequenceDiagram
    actor Admin
    participant HttpClient
    participant ActixApp
    participant prune_sboms
    participant Database
    participant SbomService
    participant try_delete_sbom

    Admin->>HttpClient: POST /v2/admin/sbom/prune
    HttpClient->>ActixApp: HTTP request with PruneQuery
    ActixApp->>prune_sboms: call handler with PruneQuery and RequireDeleteSbom

    prune_sboms->>Database: begin_read()
    Database-->>prune_sboms: read_transaction
    prune_sboms->>Database: query sbom joined source_document
    Database-->>prune_sboms: Vec PrunedSbom

    alt dry_run == true
        prune_sboms->>prune_sboms: build PrunedSbomLog
        prune_sboms-->>ActixApp: 200 OK JSON PrunedSbomLog
    else dry_run == false
        loop for each PrunedSbom (concurrently, max_concurrent)
            prune_sboms->>try_delete_sbom: try_delete_sbom(sbom, db, service)
            try_delete_sbom->>Database: begin()
            Database-->>try_delete_sbom: tx
            try_delete_sbom->>SbomService: delete_sbom(sbom_id, tx)
            alt delete succeeds
                SbomService-->>try_delete_sbom: Ok
                try_delete_sbom->>Database: tx.commit()
                Database-->>try_delete_sbom: Ok
                try_delete_sbom-->>prune_sboms: Ok PrunedSbom
            else delete fails
                SbomService-->>try_delete_sbom: Err
                try_delete_sbom->>try_delete_sbom: set error on PrunedSbom
                try_delete_sbom-->>prune_sboms: Err PrunedSbom
            end
        end
        prune_sboms->>prune_sboms: separate successful and failed results
        prune_sboms->>prune_sboms: build PrunedSbomLog
        prune_sboms-->>ActixApp: 200 OK JSON PrunedSbomLog
    end

    ActixApp-->>HttpClient: HTTP response
    HttpClient-->>Admin: show prune result

Class diagram for admin SBOM prune types and services

classDiagram
    class PrunedSbom {
        +Uuid sbom_id
        +Option~String~ document_id
        +Option~OffsetDateTime~ published
        +Vec~String~ authors
        +Vec~String~ suppliers
        +Vec~String~ data_licenses
        +OffsetDateTime ingested
        +Option~String~ error
    }

    class PrunedSbomLog {
        +u64 total
        +u64 successful_total
        +u64 failed_total
        +Vec~PrunedSbom~ successful_pruned
        +Vec~PrunedSbom~ failed_pruned
    }

    class PruneQuery {
        +u32 ingested
        +bool dry_run
        +u64 batch_size
        +usize max_concurrent
    }

    class SbomService {
        -Database db
        +new(db: Database) SbomService
        +delete_sbom(sbom_id: Uuid, tx: Transaction) Result
        +clone() SbomService
    }

    class AdminEndpoints {
        +try_delete_sbom(sbom: PrunedSbom, db: WebDataDatabase, service: WebDataSbomService) Result~PrunedSbom,PrunedSbom~
        +prune_sboms(service: WebDataSbomService, db: WebDataDatabase, query: PruneQuery, require_delete: RequireDeleteSbom) Result~Responder~
        +configure(config: ServiceConfig, db: Database) void
    }

    class Database {
        +begin() Transaction
        +begin_read() ReadTransaction
    }

    class RequireDeleteSbom

    PrunedSbomLog "*" --> "1" PrunedSbom : contains
    AdminEndpoints --> PrunedSbom : builds
    AdminEndpoints --> PrunedSbomLog : returns
    AdminEndpoints --> PruneQuery : uses
    AdminEndpoints --> SbomService : uses
    AdminEndpoints --> Database : uses
    AdminEndpoints --> RequireDeleteSbom : authorizes
    SbomService --> Database : holds
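The aggregation step, splitting per-SBOM results into the two lists of `PrunedSbomLog`, can be sketched with a std-only partition (field names follow the class diagram above; only the fields relevant to the split are modeled):

```rust
// Illustrative aggregation into a PrunedSbomLog-like summary, assuming a
// delete attempt either leaves `error` empty (success) or populated (failure).
#[derive(Debug)]
struct PrunedSbom {
    sbom_id: u64,
    error: Option<String>,
}

#[derive(Debug)]
struct PrunedSbomLog {
    total: u64,
    successful_total: u64,
    failed_total: u64,
    successful_pruned: Vec<PrunedSbom>,
    failed_pruned: Vec<PrunedSbom>,
}

fn build_log(results: Vec<PrunedSbom>) -> PrunedSbomLog {
    // Partition by presence of an error, as the handler does after deletion.
    let (ok, failed): (Vec<_>, Vec<_>) =
        results.into_iter().partition(|s| s.error.is_none());
    PrunedSbomLog {
        total: (ok.len() + failed.len()) as u64,
        successful_total: ok.len() as u64,
        failed_total: failed.len() as u64,
        successful_pruned: ok,
        failed_pruned: failed,
    }
}

fn main() {
    let log = build_log(vec![
        PrunedSbom { sbom_id: 1, error: None },
        PrunedSbom { sbom_id: 2, error: Some("delete failed".into()) },
    ]);
    assert_eq!(log.total, 2);
    assert_eq!(log.successful_total, 1);
    assert_eq!(log.failed_total, 1);
}
```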

File-Level Changes

Change Details Files
Expose a new admin SBOM prune HTTP API in OpenAPI and Rust, including request parameters and response models for prune results.
  • Add /api/v2/admin/sbom/prune POST endpoint definition with ingested age, dry_run, batch_size, and max_concurrent query parameters to openapi.yaml.
  • Define PrunedSbom and PrunedSbomLog schemas in OpenAPI components to model per-SBOM metadata and prune summary statistics.
  • Implement PrunedSbom, PrunedSbomLog, and PruneQuery structs in Rust with serde/utoipa annotations, including aliasing for kebab_case query parameters and auth requirements.
  • Add prune_sboms handler to query SBOMs older than a computed cutoff, optionally delete them, and return a structured log.
  • Implement try_delete_sbom helper that wraps SbomService::delete_sbom in a transaction and records errors per SBOM.
openapi.yaml
modules/fundamental/src/admin/endpoints/mod.rs
Wire the new admin module and SBOM prune endpoint into the fundamental service and enable concurrent deletion support.
  • Introduce a new admin module with an endpoints submodule and a configure() function that registers the prune_sboms service and its dependencies.
  • Expose the admin module from the fundamental crate root and call admin::endpoints::configure from the main endpoints configuration.
  • Make SbomService clonable so it can be shared across concurrent tasks and admin endpoints.
  • Add futures as a dependency and use futures-util stream utilities to process deletions concurrently with buffer_unordered.
modules/fundamental/src/admin/mod.rs
modules/fundamental/src/endpoints.rs
modules/fundamental/src/lib.rs
modules/fundamental/src/sbom/service/mod.rs
modules/fundamental/Cargo.toml
Add integration tests to validate SBOM prune behavior for dry-run, actual deletion, and no-match cases.
  • Create helper to ingest test SBOMs and adjust their ingested timestamp using sea_orm and time to simulate old or recent data.
  • Add verify_sbom_exists helper that checks the existing GET /api/v2/sbom/{id} endpoint for OK/NOT_FOUND after prune operations.
  • Implement tests for dry-run pruning (ensuring counts and that SBOMs remain), actual deletion (ensuring SBOMs are removed), and no-match scenarios (ensuring zero totals and no deletions).
  • Include a commented-out test scaffold for validating error field behavior on successful/failed prunes.
modules/fundamental/src/admin/endpoints/test.rs
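The `buffer_unordered` bullet above caps in-flight deletions at `max_concurrent`. A std-only sketch of the same bounded-concurrency idea, using threads in place of futures (the workload here is a stand-in, not the PR's `try_delete_sbom`):

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of bounded concurrency: at most `max_concurrent` "deletions" run at
// once, approximating what buffer_unordered(max_concurrent) provides for the
// futures-based code in the PR. Even ids "succeed", odd ids "fail", standing
// in for the Ok/Err results of try_delete_sbom.
fn delete_concurrently(ids: Vec<u64>, max_concurrent: usize) -> (Vec<u64>, Vec<u64>) {
    let (tx, rx) = mpsc::channel();
    for chunk in ids.chunks(max_concurrent) {
        let mut handles = Vec::new();
        for &id in chunk {
            let tx = tx.clone();
            handles.push(thread::spawn(move || {
                tx.send((id, id % 2 == 0)).unwrap();
            }));
        }
        for h in handles {
            h.join().unwrap();
        }
    }
    drop(tx);
    let (mut ok, mut failed) = (Vec::new(), Vec::new());
    for (id, success) in rx {
        if success { ok.push(id) } else { failed.push(id) }
    }
    ok.sort();
    failed.sort();
    (ok, failed)
}

fn main() {
    let (ok, failed) = delete_concurrently((0..6).collect(), 2);
    assert_eq!(ok, vec![0, 2, 4]);
    assert_eq!(failed, vec![1, 3, 5]);
}
```

Note the chunked version drains each batch fully before starting the next, whereas `buffer_unordered` refills a slot as soon as any future completes; the concurrency bound is the same.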

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


@bxf12315 (Contributor, Author) commented:
@sourcery-ai summary

@bxf12315 (Contributor, Author) commented:
@sourcery-ai guide

@bxf12315 (Contributor, Author) commented:
@sourcery-ai review

@sourcery-ai bot left a comment

Hey - I've found 7 issues and left some high-level feedback:

  • The OpenAPI spec for /api/v2/admin/sbom/prune declares dry_run, batch_size, and max_concurrent as query parameters and a response of Vec<PrunedSbomLog>, but the handler expects dry-run/batch-size/max-concurrent (via serde aliases) and returns a single PrunedSbomLog object; align the parameter names and response shape between openapi.yaml and the Rust handler.
  • batch_size and max_concurrent are allowed to be 0 in the OpenAPI schema and are not validated in the handler, but buffer_unordered(0) will panic and limit(0) makes little sense; add server-side validation to enforce sensible minimums (e.g. >= 1) and update the schema accordingly or handle the 0 case explicitly.
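One defensive fix for the second point, assuming the handler keeps plain integer fields, is to clamp both values server-side before they reach `buffer_unordered` or the query limit; a minimal sketch:

```rust
// Sketch: clamp prune parameters to sensible minimums before use, so that
// buffer_unordered never sees 0 and a LIMIT 0 query is impossible. The
// function names are illustrative; the PR could equally use validation
// that rejects 0 with a 400 response.
fn effective_batch_size(requested: u64) -> u64 {
    requested.max(1)
}

fn effective_max_concurrent(requested: usize) -> usize {
    requested.max(1)
}

fn main() {
    assert_eq!(effective_batch_size(0), 1);     // 0 is bumped to 1
    assert_eq!(effective_batch_size(100), 100); // valid values pass through
    assert_eq!(effective_max_concurrent(0), 1);
    assert_eq!(effective_max_concurrent(5), 5);
}
```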
## Individual Comments

### Comment 1
<location> `modules/fundamental/src/admin/endpoints/mod.rs:86-87` </location>
<code_context>
+    tag = "admin",
+    operation_id = "pruneSboms",
+    params(PruneQuery),
+    responses(
+        (status = 200, description = "List of pruned SBOMs", body = Vec<PrunedSbomLog>),
+        (status = 500, description = "Internal server error"),
+    ),
</code_context>

<issue_to_address>
**issue (bug_risk):** The utoipa response type declares `Vec<PrunedSbomLog>` but the handler returns a single `PrunedSbomLog`.

The handler returns a single `PrunedSbomLog` via `HttpResponse::Ok().json(log)`, but the utoipa spec documents a `Vec<PrunedSbomLog>`, so the OpenAPI schema will incorrectly show an array. This can break generated clients and mislead consumers. Please update the annotation to `body = PrunedSbomLog` (and tweak the description if appropriate) so the documented response matches the implementation.
</issue_to_address>

### Comment 2
<location> `openapi.yaml:66-72` </location>
<code_context>
+          type: integer
+          minimum: 0
+      responses:
+        '200':
+          description: List of pruned SBOMs
+          content:
+            application/json:
+              schema:
+                type: array
+                items:
+                  $ref: '#/components/schemas/PrunedSbomLog'
+        '500':
</code_context>

<issue_to_address>
**issue (bug_risk):** The OpenAPI 200 response schema is an array of `PrunedSbomLog`, but the implementation returns a single log object.

Given this is documented as a list, generated clients will expect an array of `PrunedSbomLog`. Since the handler returns a single object, the schema should instead directly reference `PrunedSbomLog` (dropping `type: array` and `items`) so the OpenAPI spec matches the actual response shape.
</issue_to_address>

### Comment 3
<location> `modules/fundamental/src/admin/endpoints/mod.rs:52-54` </location>
<code_context>
+    /// Number of SBOMs to process in a single batch
+    #[serde(alias = "batch-size")]
+    pub batch_size: u64,
+    /// Maximum number of concurrent operations
+    #[serde(alias = "max-concurrent")]
+    pub max_concurrent: usize,
+}
+
</code_context>

<issue_to_address>
**issue:** `max_concurrent` allows zero, which can lead to surprising behavior or hangs when used with `buffer_unordered`.

The query struct and OpenAPI spec currently allow `max_concurrent = 0`, but this is passed directly to `buffer_unordered(max_concurrent)`, where zero means “never poll” and can stall processing.

Please ensure `max_concurrent >= 1` (e.g., validation, clamping, or a non-zero type) and update the OpenAPI schema (`minimum: 1`) so clients cannot request an unusable concurrency level.
</issue_to_address>

### Comment 4
<location> `modules/fundamental/src/admin/endpoints/test.rs:43-45` </location>
<code_context>
+
+    // Manually update the ingested date to be older than specified days for testing
+    let old_date = OffsetDateTime::now_utc() - time::Duration::days(days_old);
+    source_document::Entity::update_many()
+        .col_expr(source_document::Column::Ingested, old_date.into())
+        .exec(&ctx.db)
+        .await?;
+
</code_context>

<issue_to_address>
**suggestion (testing):** Limit `update_many()` to the SBOMs created in the test to avoid cross-test coupling

`ingest_test_sboms_with_old_date` currently updates `Ingested` for *all* `source_document` rows, which ties this helper to global DB state and can cause flaky tests as more fixtures are added. Please add a `filter` (e.g., by `document_id` or other known IDs) so it only updates the SBOMs created by this helper.

Suggested implementation:

```rust
    // Manually update the ingested date to be older than specified days for testing
    let old_date = OffsetDateTime::now_utc() - time::Duration::days(days_old);

    // Only update the SBOMs created in this helper to avoid cross-test coupling
    source_document::Entity::update_many()
        .filter(
            source_document::Column::DocumentId.is_in(vec![
                result_spdx.document_id,
                result_cyclonedx.document_id,
            ]),
        )
        .col_expr(source_document::Column::Ingested, old_date.into())
        .exec(&ctx.db)
        .await?;

```

Depending on the actual schema/types, you may need to adjust:
1. The column used in the filter:
   - If the `source_document` table uses a different column name (e.g. `Id`, `SourceDocumentId`, or `SbomId`), replace `source_document::Column::DocumentId` with the correct column.
2. The fields on `result_spdx` / `result_cyclonedx`:
   - If the ingest helper returns a different field name (e.g. `source_document_id`, `id`, or `sbom_id`), replace `.document_id` with the appropriate field.
3. Ensure the necessary SeaORM traits are in scope:
   - If not already imported, add `use sea_orm::{ColumnTrait, QueryFilter};` near the top of the file so that `.filter` and `.is_in` compile.
</issue_to_address>

### Comment 5
<location> `modules/fundamental/src/admin/endpoints/test.rs:123-132` </location>
<code_context>
+async fn test_prune_sboms_actual_deletion(ctx: &TrustifyContext) -> Result<(), anyhow::Error> {
</code_context>

<issue_to_address>
**suggestion (testing):** Add a scenario where delete fails to exercise `failed_pruned` and `error` handling

The current integration tests only cover successful deletion or no-op cases, so the failure path of `try_delete_sbom` and the aggregation into `PrunedSbomLog` are untested. Please add a test that forces a deletion failure (e.g., via a stubbed/mocked `SbomService` or an undeletable `PrunedSbom`) and assert that `failed_total` is incremented, the item is included in `failed_pruned`, and its `error` field is populated.

Suggested implementation:

```rust
#[test_context(TrustifyContext)]
#[test(actix_web::test)]
async fn test_prune_sboms_actual_deletion(ctx: &TrustifyContext) -> Result<(), anyhow::Error> {
    // Ingest test SBOMs with old date (100 days ago)
    let (result_spdx, result_cyclonedx) = ingest_test_sboms_with_old_date(ctx, 100).await?;

    let app = caller(ctx).await?;

    // Create test request with dry-run=false
    let req = TestRequest::post()
        .uri("/api/v2/admin/sbom/prune?ingested=90&dry-run=false&batch-size=10&max-concurrent=5")
        .to_request();

    // Execute prune and deserialize response
    let log: PrunedSbomLog = test::call_and_read_body_json(&app, req).await;

    // Verify that both SBOMs were actually deleted
    assert_eq!(log.deleted_total, 2);
    assert!(log.failed_pruned.is_empty());
    assert_eq!(log.failed_total, 0);

    // Verify SBOMs no longer exist
    verify_sbom_exists(&app, &result_spdx, StatusCode::NOT_FOUND).await?;
    verify_sbom_exists(&app, &result_cyclonedx, StatusCode::NOT_FOUND).await?;

    Ok(())
}

#[test_context(TrustifyContext)]
#[test(actix_web::test)]
async fn test_prune_sboms_deletion_failure(ctx: &TrustifyContext) -> Result<(), anyhow::Error> {
    // Ingest a single old SBOM that we will force to fail on deletion
    let (result_spdx, _) = ingest_test_sboms_with_old_date(ctx, 100).await?;

    // Configure the SBOM service (or underlying storage) to fail deletion
    //
    // NOTE: this assumes a test-only hook on the SbomService that makes the next
    // delete operation for the given id return an error. See <additional_changes>.
    ctx.services()
        .sbom_service()
        .fail_next_delete(result_spdx.id.clone())
        .await?;

    let app = caller(ctx).await?;

    // Trigger prune with parameters that should include our SBOM
    let req = TestRequest::post()
        .uri("/api/v2/admin/sbom/prune?ingested=90&dry-run=false&batch-size=10&max-concurrent=5")
        .to_request();

    let log: PrunedSbomLog = test::call_and_read_body_json(&app, req).await;

    // We expect one failed deletion and no successful deletions for this scenario
    assert_eq!(log.deleted_total, 0);
    assert_eq!(log.failed_total, 1);
    assert_eq!(log.failed_pruned.len(), 1);

    let failed = &log.failed_pruned[0];

    // The failed item should reference the SBOM we tried to delete
    assert_eq!(failed.sbom_id, result_spdx.id);
    // And an error message should be recorded
    assert!(failed.error.is_some());
    assert!(!failed.error.as_ref().unwrap().is_empty());

    // Because deletion failed, the SBOM should still be present
    verify_sbom_exists(&app, &result_spdx, StatusCode::OK).await?;

    Ok(())
}

```


<additional_changes>
1. Ensure `PrunedSbomLog` is imported into this test module if it is not already:
   - `use crate::admin::endpoints::PrunedSbomLog;` or the correct path for the prune response type.
2. The helper `verify_sbom_exists` must already support checking for `StatusCode::NOT_FOUND` and `StatusCode::OK` as used above.
3. Implement the test hook `fail_next_delete` (or equivalent) on the SBOM deletion path:
   - Add an async method like `fail_next_delete(&self, id: SbomId)` on your `SbomService` (or a dedicated test-only implementation) that records an id for which `delete_sbom` will return an error.
   - In `delete_sbom` / `try_delete_sbom`, check this flag/id and return an `Err(anyhow!(...))` instead of performing deletion when matched.
   - Ensure tests share the same `SbomService` instance used by `caller(ctx)` so the hook set in the test is visible when the HTTP handler invokes the service.
4. Adjust field names if `PrunedSbomLog` uses different ones (e.g. `failed` instead of `failed_pruned`, `total_failed` instead of `failed_total`, or `id` instead of `sbom_id`).
5. If `ctx.services().sbom_service()` is not the correct accessor chain, replace it with how the `SbomService` is retrieved from `TrustifyContext` in the rest of the codebase.
</additional_changes>
</issue_to_address>

### Comment 6
<location> `modules/fundamental/src/admin/endpoints/test.rs:51-60` </location>
<code_context>
+    Ok(())
+}
+
+#[test_context(TrustifyContext)]
+#[test(actix_web::test)]
+async fn test_prune_sboms_no_matches(ctx: &TrustifyContext) -> Result<(), anyhow::Error> {
+    // Ingest test SBOMs with old date (0 days ago)
+    let (result_spdx, result_cyclonedx) = ingest_test_sboms_with_old_date(ctx, 0).await?;
</code_context>

<issue_to_address>
**suggestion (testing):** Consider adding a boundary test where the ingestion time is exactly on the cutoff

Because the pruning query uses `source_document::Column::Ingested.lt(cutoff_date)`, SBOMs ingested exactly `ingested` days ago should *not* be pruned. Current tests only cover `days_old < ingested` (should prune) and `days_old > ingested` (no prune). Please add a case where `days_old == ingested` (e.g., ingest with `days_old = 90` and call with `ingested = 90`) and assert those SBOMs are not pruned to lock in the boundary behavior and avoid off‑by‑one regressions.
</issue_to_address>
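The boundary semantics this comment wants locked in can be stated as a tiny predicate (illustrative, with ages modeled as whole days; it mirrors the strict `lt` comparison against the cutoff):

```rust
// Sketch of the cutoff semantics: `Ingested.lt(cutoff_date)` is a strict
// comparison, so an SBOM ingested exactly `ingested` days ago sits exactly
// on the cutoff timestamp and is kept, not pruned.
fn should_prune(days_old: u64, ingested_cutoff: u64) -> bool {
    days_old > ingested_cutoff // strictly older than the cutoff
}

fn main() {
    assert!(should_prune(100, 90));  // older than cutoff: pruned
    assert!(!should_prune(90, 90));  // exactly on cutoff: kept (strict lt)
    assert!(!should_prune(0, 90));   // recent: kept
}
```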

### Comment 7
<location> `modules/fundamental/src/admin/endpoints/test.rs:265-274` </location>
<code_context>
+// #[test_context(TrustifyContext)]
</code_context>

<issue_to_address>
**suggestion (testing):** Either enable and adapt the commented-out error field test or remove it

`test_prune_sboms_error_field_validation` is entirely commented out and, as written, expects both successful and failed items, which is unlikely with the current endpoint behavior (it probably only returns successful deletions). Please either:
- re-enable and adapt it to a setup that reliably produces both success and failure entries (e.g., via a failing `SbomService`), or
- remove the commented-out block to avoid dead tests and confusion about actual coverage.

Suggested implementation:

```rust

```

There are likely more commented lines belonging to `test_prune_sboms_error_field_validation` further down in the file (e.g., the rest of the function body and closing braces). Those should also be removed to fully delete the dead test. Search for the function name `test_prune_sboms_error_field_validation` and remove the entire commented-out block from its (commented) signature to the end of its body.
</issue_to_address>



@codecov bot commented Dec 24, 2025

Codecov Report

❌ Patch coverage is 88.28125% with 15 lines in your changes missing coverage. Please review.
✅ Project coverage is 68.37%. Comparing base (648d488) to head (7a54039).

Files with missing lines Patch % Lines
modules/fundamental/src/admin/endpoints/mod.rs 86.04% 7 Missing and 5 partials ⚠️
modules/fundamental/src/admin/endpoints/test.rs 92.68% 0 Missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2190      +/-   ##
==========================================
+ Coverage   68.24%   68.37%   +0.13%     
==========================================
  Files         376      378       +2     
  Lines       21208    21336     +128     
  Branches    21208    21336     +128     
==========================================
+ Hits        14473    14589     +116     
- Misses       5868     5869       +1     
- Partials      867      878      +11     

☔ View full report in Codecov by Sentry.
