Prune SBOMs based on ingestion date #2190
Conversation
Reviewer's Guide

Adds an admin SBOM pruning API, including the OpenAPI schema, the implementation, wiring into the fundamental service configuration, and integration tests for dry-run and deletion behaviors.

Sequence diagram for admin SBOM prune endpoint behavior

```mermaid
sequenceDiagram
actor Admin
participant HttpClient
participant ActixApp
participant prune_sboms
participant Database
participant SbomService
participant try_delete_sbom
Admin->>HttpClient: POST /v2/admin/sbom/prune
HttpClient->>ActixApp: HTTP request with PruneQuery
ActixApp->>prune_sboms: call handler with PruneQuery and RequireDeleteSbom
prune_sboms->>Database: begin_read()
Database-->>prune_sboms: read_transaction
prune_sboms->>Database: query sbom joined source_document
Database-->>prune_sboms: Vec PrunedSbom
alt dry_run == true
prune_sboms->>prune_sboms: build PrunedSbomLog
prune_sboms-->>ActixApp: 200 OK JSON PrunedSbomLog
else dry_run == false
loop for each PrunedSbom (concurrently, max_concurrent)
prune_sboms->>try_delete_sbom: try_delete_sbom(sbom, db, service)
try_delete_sbom->>Database: begin()
Database-->>try_delete_sbom: tx
try_delete_sbom->>SbomService: delete_sbom(sbom_id, tx)
alt delete succeeds
SbomService-->>try_delete_sbom: Ok
try_delete_sbom->>Database: tx.commit()
Database-->>try_delete_sbom: Ok
try_delete_sbom-->>prune_sboms: Ok PrunedSbom
else delete fails
SbomService-->>try_delete_sbom: Err
try_delete_sbom->>try_delete_sbom: set error on PrunedSbom
try_delete_sbom-->>prune_sboms: Err PrunedSbom
end
end
prune_sboms->>prune_sboms: separate successful and failed results
prune_sboms->>prune_sboms: build PrunedSbomLog
prune_sboms-->>ActixApp: 200 OK JSON PrunedSbomLog
end
ActixApp-->>HttpClient: HTTP response
HttpClient-->>Admin: show prune result
```

Class diagram for admin SBOM prune types and services

```mermaid
classDiagram
class PrunedSbom {
+Uuid sbom_id
+Option~String~ document_id
+Option~OffsetDateTime~ published
+Vec~String~ authors
+Vec~String~ suppliers
+Vec~String~ data_licenses
+OffsetDateTime ingested
+Option~String~ error
}
class PrunedSbomLog {
+u64 total
+u64 successful_total
+u64 failed_total
+Vec~PrunedSbom~ successful_pruned
+Vec~PrunedSbom~ failed_pruned
}
class PruneQuery {
+u32 ingested
+bool dry_run
+u64 batch_size
+usize max_concurrent
}
class SbomService {
-Database db
+new(db: Database) SbomService
+delete_sbom(sbom_id: Uuid, tx: Transaction) Result
+clone() SbomService
}
class AdminEndpoints {
+try_delete_sbom(sbom: PrunedSbom, db: WebData~Database~, service: WebData~SbomService~) Result~PrunedSbom,PrunedSbom~
+prune_sboms(service: WebData~SbomService~, db: WebData~Database~, query: PruneQuery, require_delete: RequireDeleteSbom) Result~impl Responder~
+configure(config: ServiceConfig, db: Database) void
}
class Database {
+begin() Transaction
+begin_read() ReadTransaction
}
class RequireDeleteSbom
PrunedSbomLog "1" --> "*" PrunedSbom : contains
AdminEndpoints --> PrunedSbom : builds
AdminEndpoints --> PrunedSbomLog : returns
AdminEndpoints --> PruneQuery : uses
AdminEndpoints --> SbomService : uses
AdminEndpoints --> Database : uses
AdminEndpoints --> RequireDeleteSbom : authorizes
SbomService --> Database : holds
```
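Putting the two diagrams together, the core of the flow is: collect candidates, short-circuit on dry-run, otherwise delete concurrently and partition the results. The following is a self-contained sketch of that shape with stub types standing in for the real `Database`/`SbomService`; the failing-on-even-ids delete is purely illustrative, and field usage follows the class diagram above rather than the actual codebase.

```rust
use futures::stream::{self, StreamExt};

#[derive(Debug, Clone)]
struct PrunedSbom {
    sbom_id: u64,
    error: Option<String>,
}

#[derive(Debug, Default)]
struct PrunedSbomLog {
    total: u64,
    successful_total: u64,
    failed_total: u64,
    successful_pruned: Vec<PrunedSbom>,
    failed_pruned: Vec<PrunedSbom>,
}

// Stand-in for the real delete path (SbomService::delete_sbom inside a
// transaction); fails on even ids purely for illustration.
async fn try_delete_sbom(mut sbom: PrunedSbom) -> Result<PrunedSbom, PrunedSbom> {
    if sbom.sbom_id % 2 == 0 {
        sbom.error = Some("delete failed".into());
        Err(sbom)
    } else {
        Ok(sbom)
    }
}

async fn prune(
    candidates: Vec<PrunedSbom>,
    dry_run: bool,
    max_concurrent: usize,
) -> PrunedSbomLog {
    let total = candidates.len() as u64;
    if dry_run {
        // Dry run: report the candidates without deleting anything. How the
        // real handler populates the log here is not shown in the diagrams.
        return PrunedSbomLog {
            total,
            successful_pruned: candidates,
            ..Default::default()
        };
    }
    let results: Vec<Result<PrunedSbom, PrunedSbom>> = stream::iter(candidates)
        .map(try_delete_sbom)
        .buffer_unordered(max_concurrent.max(1)) // guard against 0, see review below
        .collect()
        .await;
    // Separate successful and failed results, as in the sequence diagram.
    let (ok, err): (Vec<_>, Vec<_>) = results.into_iter().partition(|r| r.is_ok());
    let successful_pruned: Vec<PrunedSbom> = ok.into_iter().map(Result::unwrap).collect();
    let failed_pruned: Vec<PrunedSbom> = err.into_iter().map(Result::unwrap_err).collect();
    PrunedSbomLog {
        total,
        successful_total: successful_pruned.len() as u64,
        failed_total: failed_pruned.len() as u64,
        successful_pruned,
        failed_pruned,
    }
}

#[tokio::main]
async fn main() {
    let candidates = (1..=4u64)
        .map(|i| PrunedSbom { sbom_id: i, error: None })
        .collect();
    println!("{:?}", prune(candidates, false, 2).await);
}
```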
File-Level Changes
Force-pushed: d440b4f → ad14515
@sourcery-ai summary

@sourcery-ai guide

@sourcery-ai review
Hey - I've found 7 issues, and left some high level feedback:
- The OpenAPI spec for `/api/v2/admin/sbom/prune` declares `dry_run`, `batch_size`, and `max_concurrent` as query parameters and a response of `Vec<PrunedSbomLog>`, but the handler expects `dry-run`/`batch-size`/`max-concurrent` (via serde aliases) and returns a single `PrunedSbomLog` object; align the parameter names and response shape between `openapi.yaml` and the Rust handler.
- `batch_size` and `max_concurrent` are allowed to be `0` in the OpenAPI schema and are not validated in the handler, but `buffer_unordered(0)` will panic and `limit(0)` makes little sense; add server-side validation to enforce sensible minimums (e.g. `>= 1`) and update the schema accordingly or handle the `0` case explicitly.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The OpenAPI spec for `/api/v2/admin/sbom/prune` declares `dry_run`, `batch_size`, and `max_concurrent` as query parameters and a response of `Vec<PrunedSbomLog>`, but the handler expects `dry-run`/`batch-size`/`max-concurrent` (via serde aliases) and returns a single `PrunedSbomLog` object; align the parameter names and response shape between `openapi.yaml` and the Rust handler.
- `batch_size` and `max_concurrent` are allowed to be `0` in the OpenAPI schema and are not validated in the handler, but `buffer_unordered(0)` will panic and `limit(0)` makes little sense; add server-side validation to enforce sensible minimums (e.g. `>= 1`) and update the schema accordingly or handle the `0` case explicitly.
## Individual Comments
### Comment 1
<location> `modules/fundamental/src/admin/endpoints/mod.rs:86-87` </location>
<code_context>
+ tag = "admin",
+ operation_id = "pruneSboms",
+ params(PruneQuery),
+ responses(
+ (status = 200, description = "List of pruned SBOMs", body = Vec<PrunedSbomLog>),
+ (status = 500, description = "Internal server error"),
+ ),
</code_context>
<issue_to_address>
**issue (bug_risk):** The utoipa response type declares `Vec<PrunedSbomLog>` but the handler returns a single `PrunedSbomLog`.
The handler returns a single `PrunedSbomLog` via `HttpResponse::Ok().json(log)`, but the utoipa spec documents a `Vec<PrunedSbomLog>`, so the OpenAPI schema will incorrectly show an array. This can break generated clients and mislead consumers. Please update the annotation to `body = PrunedSbomLog` (and tweak the description if appropriate) so the documented response matches the implementation.
</issue_to_address>
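For reference, the corrected annotation would only change the response body type; the following is a hedged sketch that keeps the shape of the code context above (names taken from the review, not verified against the repository):

```rust
#[utoipa::path(
    post,
    tag = "admin",
    operation_id = "pruneSboms",
    params(PruneQuery),
    responses(
        // Single object, matching `HttpResponse::Ok().json(log)` in the handler.
        (status = 200, description = "Log of pruned SBOMs", body = PrunedSbomLog),
        (status = 500, description = "Internal server error"),
    ),
)]
```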
### Comment 2
<location> `openapi.yaml:66-72` </location>
<code_context>
+ type: integer
+ minimum: 0
+ responses:
+ '200':
+ description: List of pruned SBOMs
+ content:
+ application/json:
+ schema:
+ type: array
+ items:
+ $ref: '#/components/schemas/PrunedSbomLog'
+ '500':
</code_context>
<issue_to_address>
**issue (bug_risk):** The OpenAPI 200 response schema is an array of `PrunedSbomLog`, but the implementation returns a single log object.
Given this is documented as a list, generated clients will expect an array of `PrunedSbomLog`. Since the handler returns a single object, the schema should instead directly reference `PrunedSbomLog` (dropping `type: array` and `items`) so the OpenAPI spec matches the actual response shape.
</issue_to_address>
### Comment 3
<location> `modules/fundamental/src/admin/endpoints/mod.rs:52-54` </location>
<code_context>
+ /// Number of SBOMs to process in a single batch
+ #[serde(alias = "batch-size")]
+ pub batch_size: u64,
+ /// Maximum number of concurrent operations
+ #[serde(alias = "max-concurrent")]
+ pub max_concurrent: usize,
+}
+
</code_context>
<issue_to_address>
**issue:** `max_concurrent` allows zero, which can lead to surprising behavior or hangs when used with `buffer_unordered`.
The query struct and OpenAPI spec currently allow `max_concurrent = 0`, but this is passed directly to `buffer_unordered(max_concurrent)`, where zero means “never poll” and can stall processing.
Please ensure `max_concurrent >= 1` (e.g., validation, clamping, or a non-zero type) and update the OpenAPI schema (`minimum: 1`) so clients cannot request an unusable concurrency level.
</issue_to_address>
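One way to enforce this server-side is to clamp before building the stream. A minimal, runnable sketch follows; the stall/panic behavior of `buffer_unordered(0)` is as described in the review, and `effective_concurrency` is a hypothetical helper, not an existing function:

```rust
use futures::stream::{self, StreamExt};

// Hypothetical guard: never hand 0 to `buffer_unordered`.
fn effective_concurrency(requested: usize) -> usize {
    requested.max(1)
}

#[tokio::main]
async fn main() {
    let max_concurrent = effective_concurrency(0); // e.g. client sent max-concurrent=0
    let doubled: Vec<u32> = stream::iter(1..=4u32)
        .map(|n| async move { n * 2 })
        .buffer_unordered(max_concurrent)
        .collect()
        .await;
    println!("{doubled:?}");
}
```

Rejecting such requests with a 400 instead of clamping is equally valid; either way, raising the OpenAPI `minimum` to 1 makes the constraint visible to clients.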
### Comment 4
<location> `modules/fundamental/src/admin/endpoints/test.rs:43-45` </location>
<code_context>
+
+ // Manually update the ingested date to be older than specified days for testing
+ let old_date = OffsetDateTime::now_utc() - time::Duration::days(days_old);
+ source_document::Entity::update_many()
+ .col_expr(source_document::Column::Ingested, old_date.into())
+ .exec(&ctx.db)
+ .await?;
+
</code_context>
<issue_to_address>
**suggestion (testing):** Limit `update_many()` to the SBOMs created in the test to avoid cross-test coupling
`ingest_test_sboms_with_old_date` currently updates `Ingested` for *all* `source_document` rows, which ties this helper to global DB state and can cause flaky tests as more fixtures are added. Please add a `filter` (e.g., by `document_id` or other known IDs) so it only updates the SBOMs created by this helper.
Suggested implementation:
```rust
// Manually update the ingested date to be older than specified days for testing
let old_date = OffsetDateTime::now_utc() - time::Duration::days(days_old);
// Only update the SBOMs created in this helper to avoid cross-test coupling
source_document::Entity::update_many()
.filter(
source_document::Column::DocumentId.is_in(vec![
result_spdx.document_id,
result_cyclonedx.document_id,
]),
)
.col_expr(source_document::Column::Ingested, old_date.into())
.exec(&ctx.db)
.await?;
```
Depending on the actual schema/types, you may need to adjust:
1. The column used in the filter:
- If the `source_document` table uses a different column name (e.g. `Id`, `SourceDocumentId`, or `SbomId`), replace `source_document::Column::DocumentId` with the correct column.
2. The fields on `result_spdx` / `result_cyclonedx`:
- If the ingest helper returns a different field name (e.g. `source_document_id`, `id`, or `sbom_id`), replace `.document_id` with the appropriate field.
3. Ensure the necessary SeaORM traits are in scope:
- If not already imported, add `use sea_orm::{ColumnTrait, QueryFilter};` near the top of the file so that `.filter` and `.is_in` compile.
</issue_to_address>
### Comment 5
<location> `modules/fundamental/src/admin/endpoints/test.rs:123-132` </location>
<code_context>
+async fn test_prune_sboms_actual_deletion(ctx: &TrustifyContext) -> Result<(), anyhow::Error> {
</code_context>
<issue_to_address>
**suggestion (testing):** Add a scenario where delete fails to exercise `failed_pruned` and `error` handling
The current integration tests only cover successful deletion or no-op cases, so the failure path of `try_delete_sbom` and the aggregation into `PrunedSbomLog` are untested. Please add a test that forces a deletion failure (e.g., via a stubbed/mocked `SbomService` or an undeletable `PrunedSbom`) and assert that `failed_total` is incremented, the item is included in `failed_pruned`, and its `error` field is populated.
Suggested implementation:
```rust
#[test_context(TrustifyContext)]
#[test(actix_web::test)]
async fn test_prune_sboms_actual_deletion(ctx: &TrustifyContext) -> Result<(), anyhow::Error> {
// Ingest test SBOMs with old date (100 days ago)
let (result_spdx, result_cyclonedx) = ingest_test_sboms_with_old_date(ctx, 100).await?;
let app = caller(ctx).await?;
// Create test request with dry-run=false
let req = TestRequest::post()
.uri("/api/v2/admin/sbom/prune?ingested=90&dry-run=false&batch-size=10&max-concurrent=5")
.to_request();
// Execute prune and deserialize response
let log: PrunedSbomLog = test::call_and_read_body_json(&app, req).await;
// Verify that both SBOMs were actually deleted
assert_eq!(log.deleted_total, 2);
assert!(log.failed_pruned.is_empty());
assert_eq!(log.failed_total, 0);
// Verify SBOMs no longer exist
verify_sbom_exists(&app, &result_spdx, StatusCode::NOT_FOUND).await?;
verify_sbom_exists(&app, &result_cyclonedx, StatusCode::NOT_FOUND).await?;
Ok(())
}
#[test_context(TrustifyContext)]
#[test(actix_web::test)]
async fn test_prune_sboms_deletion_failure(ctx: &TrustifyContext) -> Result<(), anyhow::Error> {
// Ingest a single old SBOM that we will force to fail on deletion
let (result_spdx, _) = ingest_test_sboms_with_old_date(ctx, 100).await?;
// Configure the SBOM service (or underlying storage) to fail deletion
//
// NOTE: this assumes a test-only hook on the SbomService that makes the next
// delete operation for the given id return an error. See <additional_changes>.
ctx.services()
.sbom_service()
.fail_next_delete(result_spdx.id.clone())
.await?;
let app = caller(ctx).await?;
// Trigger prune with parameters that should include our SBOM
let req = TestRequest::post()
.uri("/api/v2/admin/sbom/prune?ingested=90&dry-run=false&batch-size=10&max-concurrent=5")
.to_request();
let log: PrunedSbomLog = test::call_and_read_body_json(&app, req).await;
// We expect one failed deletion and no successful deletions for this scenario
assert_eq!(log.deleted_total, 0);
assert_eq!(log.failed_total, 1);
assert_eq!(log.failed_pruned.len(), 1);
let failed = &log.failed_pruned[0];
// The failed item should reference the SBOM we tried to delete
assert_eq!(failed.sbom_id, result_spdx.id);
// And an error message should be recorded
assert!(failed.error.is_some());
assert!(!failed.error.as_ref().unwrap().is_empty());
// Because deletion failed, the SBOM should still be present
verify_sbom_exists(&app, &result_spdx, StatusCode::OK).await?;
Ok(())
}
```
<additional_changes>
1. Ensure `PrunedSbomLog` is imported into this test module if it is not already:
- `use crate::admin::endpoints::PrunedSbomLog;` or the correct path for the prune response type.
2. The helper `verify_sbom_exists` must already support checking for `StatusCode::NOT_FOUND` and `StatusCode::OK` as used above.
3. Implement the test hook `fail_next_delete` (or equivalent) on the SBOM deletion path:
- Add an async method like `fail_next_delete(&self, id: SbomId)` on your `SbomService` (or a dedicated test-only implementation) that records an id for which `delete_sbom` will return an error.
- In `delete_sbom` / `try_delete_sbom`, check this flag/id and return an `Err(anyhow!(...))` instead of performing deletion when matched.
- Ensure tests share the same `SbomService` instance used by `caller(ctx)` so the hook set in the test is visible when the HTTP handler invokes the service.
4. Adjust field names if `PrunedSbomLog` uses different ones (e.g. `failed` instead of `failed_pruned`, `total_failed` instead of `failed_total`, or `id` instead of `sbom_id`).
5. If `ctx.services().sbom_service()` is not the correct accessor chain, replace it with how the `SbomService` is retrieved from `TrustifyContext` in the rest of the codebase.
</additional_changes>
</issue_to_address>
### Comment 6
<location> `modules/fundamental/src/admin/endpoints/test.rs:51-60` </location>
<code_context>
+ Ok(())
+}
+
+#[test_context(TrustifyContext)]
+#[test(actix_web::test)]
+async fn test_prune_sboms_no_matches(ctx: &TrustifyContext) -> Result<(), anyhow::Error> {
+ // Ingest test SBOMs with old date (0 days ago)
+ let (result_spdx, result_cyclonedx) = ingest_test_sboms_with_old_date(ctx, 0).await?;
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a boundary test where the ingestion time is exactly on the cutoff
Because the pruning query uses `source_document::Column::Ingested.lt(cutoff_date)`, SBOMs ingested exactly `ingested` days ago should *not* be pruned. Current tests only cover `days_old < ingested` (should prune) and `days_old > ingested` (no prune). Please add a case where `days_old == ingested` (e.g., ingest with `days_old = 90` and call with `ingested = 90`) and assert those SBOMs are not pruned to lock in the boundary behavior and avoid off‑by‑one regressions.
</issue_to_address>
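A hedged sketch of that boundary case, reusing the helper and type names that appear elsewhere in this review (`ingest_test_sboms_with_old_date`, `caller`, `PrunedSbomLog`); note that wall-clock time passing between ingestion and the prune request can nudge the timestamp past a strict `lt` cutoff, so the real test may need a pinned clock or a small margin:

```rust
#[test_context(TrustifyContext)]
#[test(actix_web::test)]
async fn test_prune_sboms_exact_cutoff(ctx: &TrustifyContext) -> Result<(), anyhow::Error> {
    // Ingest SBOMs exactly 90 days old, then prune with ingested=90.
    let (_result_spdx, _result_cyclonedx) = ingest_test_sboms_with_old_date(ctx, 90).await?;
    let app = caller(ctx).await?;

    let req = TestRequest::post()
        .uri("/api/v2/admin/sbom/prune?ingested=90&dry-run=false&batch-size=10&max-concurrent=5")
        .to_request();
    let log: PrunedSbomLog = test::call_and_read_body_json(&app, req).await;

    // `Ingested.lt(cutoff)` is strict, so documents ingested exactly on the
    // cutoff must not be pruned.
    assert_eq!(log.total, 0);
    Ok(())
}
```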
### Comment 7
<location> `modules/fundamental/src/admin/endpoints/test.rs:265-274` </location>
<code_context>
+// #[test_context(TrustifyContext)]
</code_context>
<issue_to_address>
**suggestion (testing):** Either enable and adapt the commented-out error field test or remove it
`test_prune_sboms_error_field_validation` is entirely commented out and, as written, expects both successful and failed items, which is unlikely with the current endpoint behavior (it probably only returns successful deletions). Please either:
- re-enable and adapt it to a setup that reliably produces both success and failure entries (e.g., via a failing `SbomService`), or
- remove the commented-out block to avoid dead tests and confusion about actual coverage.
Suggested implementation: delete the entire commented-out block (the suggested replacement is empty).
There are likely more commented lines belonging to `test_prune_sboms_error_field_validation` further down in the file (e.g., the rest of the function body and closing braces). Those should also be removed to fully delete the dead test. Search for the function name `test_prune_sboms_error_field_validation` and remove the entire commented-out block from its (commented) signature to the end of its body.
</issue_to_address>
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

```
@@ Coverage Diff @@
##             main    #2190      +/-   ##
==========================================
+ Coverage   68.24%   68.37%   +0.13%
==========================================
  Files         376      378       +2
  Lines       21208    21336     +128
  Branches    21208    21336     +128
==========================================
+ Hits        14473    14589     +116
- Misses       5868     5869       +1
- Partials      867      878      +11
```

☔ View full report in Codecov by Sentry.
Force-pushed: ad14515 → 7a54039
Summary by Sourcery
Introduce an admin API to prune SBOMs based on ingestion date with optional dry-run support and concurrent batched deletion, expose it via OpenAPI, and cover it with tests.
New Features:
- Admin API endpoint to prune SBOMs older than a configurable ingestion age, with dry-run support and concurrent, batched deletion.

Enhancements:
- OpenAPI schema for the prune endpoint and wiring into the fundamental service configuration.

Tests:
- Integration tests covering dry-run and actual deletion behaviors.