@elizabethhealy commented Jun 23, 2025

Proposed Changes

Add cleanup for the underlying Ristretto cache by tracking and explicitly closing it to avoid goroutine leaks and ensure proper resource release.

Details:

  • Modified cache.Manager to store a reference to the underlying *ristretto.Cache.
  • Added a Close() method to Manager that calls .Close() on the underlying Ristretto cache.

Why:
Ristretto starts background goroutines and allocates internal buffers that aren't automatically cleaned up. Explicitly calling .Close() ensures proper shutdown, which is especially important in tests and long-running services.
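A minimal sketch of the shape of this change. Only the Close() method and the raw *ristretto.Cache reference come from this PR; the constructor name and configuration values below are illustrative assumptions, not the actual diff:

```go
package cache

import "github.com/dgraph-io/ristretto"

// Manager wraps the platform cache and keeps a handle to the underlying
// Ristretto instance so it can be shut down explicitly.
type Manager struct {
	raw *ristretto.Cache
}

// NewManager is a hypothetical constructor showing where the raw reference
// would be captured; the real constructor and its options differ.
func NewManager(maxCostBytes int64) (*Manager, error) {
	c, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e6,          // number of keys to track frequency for (illustrative)
		MaxCost:     maxCostBytes, // total cost budget for cached items
		BufferItems: 64,           // per-Get buffer size recommended by Ristretto docs
	})
	if err != nil {
		return nil, err
	}
	return &Manager{raw: c}, nil
}

// Close releases Ristretto's background goroutines and internal buffers.
func (m *Manager) Close() {
	if m != nil && m.raw != nil {
		m.raw.Close()
	}
}
```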

Checklist

  • I have added or updated unit tests
  • I have added or updated integration tests (if appropriate)
  • I have added or updated documentation

Testing Instructions

@gemini-code-assist bot left a comment

Summary of Changes

Hello @elizabethhealy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a critical cleanup mechanism for the application's caching layer. By adding a Close() method to the cache manager and ensuring its invocation during application shutdown, it addresses potential resource leaks associated with the underlying Ristretto cache, promoting more robust and efficient resource management.

Highlights

  • Cache Manager Enhancement: The Manager struct in the cache package now includes a direct reference (raw *ristretto.Cache) to the underlying Ristretto cache instance. This allows for explicit management of the Ristretto cache's lifecycle.
  • Resource Cleanup Method: A new Close() method has been added to the Manager struct. This method is responsible for calling the Close() method on the underlying Ristretto cache, which is crucial for releasing background goroutines and internal buffers.
  • Application Shutdown Integration: The Close() method for the cache manager is now deferred in the main Start function of the server. This ensures that the cache resources are properly cleaned up when the service shuts down, preventing potential resource leaks (sketched below).
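A simplified, hypothetical sketch of that shutdown wiring. The real Start function in service/pkg/server differs; only the placement of the deferred Close() reflects this PR:

```go
package server

import (
	"context"

	"github.com/dgraph-io/ristretto"
)

// manager mirrors the Manager shape described above; names other than Close are illustrative.
type manager struct{ raw *ristretto.Cache }

func (m *manager) Close() {
	if m != nil && m.raw != nil {
		m.raw.Close()
	}
}

// Start is a simplified stand-in for the service's Start function.
func Start(ctx context.Context) error {
	raw, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e6,     // illustrative sizing
		MaxCost:     1 << 26, // ~64 MiB cost budget, illustrative
		BufferItems: 64,
	})
	if err != nil {
		return err
	}
	cacheManager := &manager{raw: raw}
	// Deferred so the Ristretto goroutines and buffers are released when
	// the service shuts down and Start returns.
	defer cacheManager.Close()

	// ... register services, start listeners ...
	<-ctx.Done() // block until shutdown
	return nil
}
```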

Cache holds data fast,
Close the door, let threads all rest,
No leaks, peace of mind.


@elizabethhealy changed the title from "feat(service): Add Close() method to cache manager" to "feat(main): Add Close() method to cache manager" Jun 23, 2025
@elizabethhealy marked this pull request as ready for review June 23, 2025 15:38
@elizabethhealy requested a review from a team as a code owner June 23, 2025 15:38
@gemini-code-assist bot left a comment

Code Review

This pull request introduces a Close() method to the cache.Manager to ensure proper cleanup of the underlying Ristretto cache, preventing resource leaks. The changes are well-contained and address an important aspect of resource management. I've identified a high-severity issue in service/pkg/server/start.go regarding the order of defer calls, which could lead to errors during application shutdown, and a medium-severity issue in service/pkg/cache/cache.go concerning inconsistent receiver naming, which affects code maintainability.
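For context on the defer-ordering point: Go runs deferred calls last-in, first-out, so a resource is normally deferred for cleanup immediately after it is created, before anything that depends on it is started. A tiny illustrative example of that ordering (not the actual start.go code):

```go
package main

import "fmt"

func main() {
	fmt.Println("open cache")
	// Deferred immediately after creation, so it runs LAST on return,
	// after everything that still uses the cache has been stopped.
	defer fmt.Println("close cache")

	fmt.Println("start components that use the cache")
	defer fmt.Println("stop components") // runs FIRST on return (LIFO)

	fmt.Println("serving...")
	// On return the deferred calls print "stop components", then "close cache".
}
```

The receiver-naming point is stylistic: Go convention is to use one short, consistent receiver name (e.g. m) across all methods of a type.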

Benchmark results

Benchmark authorization.GetDecisions Results:

| Metric                     | Value        |
|----------------------------|--------------|
| Approved Decision Requests | 5000         |
| Denied Decision Requests   | 0            |
| Total Time                 | 521.979194ms |

Benchmark authorization.v2.GetMultiResourceDecision Results:

| Metric                     | Value        |
|----------------------------|--------------|
| Approved Decision Requests | 5000         |
| Denied Decision Requests   | 0            |
| Total Time                 | 316.621726ms |

Standard Benchmark Metrics Skipped or Failed

Bulk Benchmark Results

| Metric              | Value                  |
|---------------------|------------------------|
| Total Decrypts      | 100                    |
| Successful Decrypts | 100                    |
| Failed Decrypts     | 0                      |
| Total Time          | 356.622318ms           |
| Throughput          | 280.41 requests/second |

TDF3 Benchmark Results:

| Metric              | Value                  |
|---------------------|------------------------|
| Total Requests      | 5000                   |
| Successful Requests | 5000                   |
| Failed Requests     | 0                      |
| Concurrent Requests | 50                     |
| Total Time          | 36.688838657s          |
| Average Latency     | 365.323867ms           |
| Throughput          | 136.28 requests/second |

NANOTDF Benchmark Results:

| Metric              | Value                  |
|---------------------|------------------------|
| Total Requests      | 5000                   |
| Successful Requests | 5000                   |
| Failed Requests     | 0                      |
| Concurrent Requests | 50                     |
| Total Time          | 25.881567306s          |
| Average Latency     | 257.5756ms             |
| Throughput          | 193.19 requests/second |

Benchmark results

Benchmark authorization.GetDecisions Results:

| Metric                     | Value        |
|----------------------------|--------------|
| Approved Decision Requests | 5000         |
| Denied Decision Requests   | 0            |
| Total Time                 | 542.762106ms |

Benchmark authorization.v2.GetMultiResourceDecision Results:

| Metric                     | Value        |
|----------------------------|--------------|
| Approved Decision Requests | 5000         |
| Denied Decision Requests   | 0            |
| Total Time                 | 334.156694ms |

Standard Benchmark Metrics Skipped or Failed

Bulk Benchmark Results

| Metric              | Value                  |
|---------------------|------------------------|
| Total Decrypts      | 100                    |
| Successful Decrypts | 100                    |
| Failed Decrypts     | 0                      |
| Total Time          | 515.750683ms           |
| Throughput          | 193.89 requests/second |

TDF3 Benchmark Results:

| Metric              | Value                  |
|---------------------|------------------------|
| Total Requests      | 5000                   |
| Successful Requests | 5000                   |
| Failed Requests     | 0                      |
| Concurrent Requests | 50                     |
| Total Time          | 37.477286296s          |
| Average Latency     | 372.670769ms           |
| Throughput          | 133.41 requests/second |

NANOTDF Benchmark Results:

| Metric              | Value                  |
|---------------------|------------------------|
| Total Requests      | 5000                   |
| Successful Requests | 5000                   |
| Failed Requests     | 0                      |
| Concurrent Requests | 50                     |
| Total Time          | 26.271104982s          |
| Average Latency     | 261.447022ms           |
| Throughput          | 190.32 requests/second |

Benchmark results

Benchmark authorization.GetDecisions Results:

| Metric                     | Value        |
|----------------------------|--------------|
| Approved Decision Requests | 5000         |
| Denied Decision Requests   | 0            |
| Total Time                 | 509.971171ms |

Benchmark authorization.v2.GetMultiResourceDecision Results:

| Metric                     | Value        |
|----------------------------|--------------|
| Approved Decision Requests | 5000         |
| Denied Decision Requests   | 0            |
| Total Time                 | 330.66438ms  |

Standard Benchmark Metrics Skipped or Failed

Bulk Benchmark Results

| Metric              | Value                  |
|---------------------|------------------------|
| Total Decrypts      | 100                    |
| Successful Decrypts | 100                    |
| Failed Decrypts     | 0                      |
| Total Time          | 337.333129ms           |
| Throughput          | 296.44 requests/second |

TDF3 Benchmark Results:

| Metric              | Value                  |
|---------------------|------------------------|
| Total Requests      | 5000                   |
| Successful Requests | 5000                   |
| Failed Requests     | 0                      |
| Concurrent Requests | 50                     |
| Total Time          | 35.844994201s          |
| Average Latency     | 356.109941ms           |
| Throughput          | 139.49 requests/second |

NANOTDF Benchmark Results:

| Metric              | Value                  |
|---------------------|------------------------|
| Total Requests      | 5000                   |
| Successful Requests | 5000                   |
| Failed Requests     | 0                      |
| Concurrent Requests | 50                     |
| Total Time          | 25.452373398s          |
| Average Latency     | 253.522046ms           |
| Throughput          | 196.45 requests/second |

Benchmark results

Benchmark authorization.GetDecisions Results:

| Metric                     | Value        |
|----------------------------|--------------|
| Approved Decision Requests | 5000         |
| Denied Decision Requests   | 0            |
| Total Time                 | 545.605102ms |

Benchmark authorization.v2.GetMultiResourceDecision Results:

| Metric                     | Value        |
|----------------------------|--------------|
| Approved Decision Requests | 5000         |
| Denied Decision Requests   | 0            |
| Total Time                 | 341.323658ms |

Standard Benchmark Metrics Skipped or Failed

Bulk Benchmark Results

| Metric              | Value                  |
|---------------------|------------------------|
| Total Decrypts      | 100                    |
| Successful Decrypts | 100                    |
| Failed Decrypts     | 0                      |
| Total Time          | 350.424067ms           |
| Throughput          | 285.37 requests/second |

TDF3 Benchmark Results:

| Metric              | Value                  |
|---------------------|------------------------|
| Total Requests      | 5000                   |
| Successful Requests | 5000                   |
| Failed Requests     | 0                      |
| Concurrent Requests | 50                     |
| Total Time          | 38.698497778s          |
| Average Latency     | 385.494465ms           |
| Throughput          | 129.20 requests/second |

NANOTDF Benchmark Results:

| Metric              | Value                  |
|---------------------|------------------------|
| Total Requests      | 5000                   |
| Successful Requests | 5000                   |
| Failed Requests     | 0                      |
| Concurrent Requests | 50                     |
| Total Time          | 26.271177439s          |
| Average Latency     | 261.872553ms           |
| Throughput          | 190.32 requests/second |

@elizabethhealy added this pull request to the merge queue Jun 23, 2025
Merged via the queue into main with commit 32630d6 Jun 23, 2025
29 checks passed
@elizabethhealy deleted the add-close-to-cache branch June 23, 2025 16:44
jakedoublev pushed a commit that referenced this pull request Jun 23, 2025
### Proposed Changes

Add cleanup for underlying Ristretto cache by tracking and explicitly
closing it to avoid goroutine leaks and ensure proper resource release.

Details:
Modified cache.Manager to store a reference to the underlying
*ristretto.Cache.
Added a Close() method to Manager that calls .Close() on the underlying
Ristretto cache.

Why:
Ristretto starts background goroutines and allocates internal buffers
that aren't automatically cleaned up. Explicitly calling .Close()
ensures proper shutdown, especially important in tests and long-running
services.

### Checklist

- [ ] I have added or updated unit tests
- [ ] I have added or updated integration tests (if appropriate)
- [ ] I have added or updated documentation

### Testing Instructions
github-merge-queue bot pushed a commit that referenced this pull request Jun 24, 2025
🤖 I have created a release *beep* *boop*
---


## [0.7.0](service/v0.6.0...service/v0.7.0) (2025-06-24)


### ⚠ BREAKING CHANGES

* **policy:** disable kas grants in favor of key mappings
([#2220](#2220))

### Features

* **authz:** Add caching to keycloak ERS
([#2466](#2466))
([f5b0a06](f5b0a06))
* **authz:** auth svc registered resource GetDecision support
([#2392](#2392))
([5405674](5405674))
* **authz:** authz v2 GetBulkDecision
([#2448](#2448))
([0da3363](0da3363))
* **authz:** cache entitlement policy within authorization service
([#2457](#2457))
([c16361c](c16361c))
* **authz:** ensure logging parity between authz v2 and v1
([#2443](#2443))
([ef68586](ef68586))
* **core:** add cache manager
([#2449](#2449))
([2b062c5](2b062c5))
* **core:** consume RPC interceptor request context metadata in logging
([#2442](#2442))
([2769c48](2769c48))
* **core:** DSPX-609 - add cli-client to keycloak provisioning
([#2396](#2396))
([48e7489](48e7489))
* **core:** ERS cache setup, fix cache initialization
([#2458](#2458))
([d0c6938](d0c6938))
* inject logger and cache manager to key managers
([#2461](#2461))
([9292162](9292162))
* **kas:** expose provider config from key details.
([#2459](#2459))
([0e7d39a](0e7d39a))
* **main:** Add Close() method to cache manager
([#2465](#2465))
([32630d6](32630d6))
* **policy:** disable kas grants in favor of key mappings
([#2220](#2220))
([30f8cf5](30f8cf5))
* **policy:** Restrict deletion of pc with used key.
([#2414](#2414))
([3b40a46](3b40a46))
* **sdk:** allow Connect-Protocol-Version RPC header for cors
([#2437](#2437))
([4bf241e](4bf241e))


### Bug Fixes

* **core:** remove generics on new platform cache manager and client
([#2456](#2456))
([98c3c16](98c3c16))
* **core:** replace opentdf-public client with cli-client
([#2422](#2422))
([fb18525](fb18525))
* **deps:** bump github.com/casbin/casbin/v2 from 2.106.0 to 2.107.0 in
/service in the external group
([#2416](#2416))
([43afd48](43afd48))
* **deps:** bump github.com/opentdf/platform/protocol/go from 0.4.0 to
0.5.0 in /service
([#2470](#2470))
([3a73fc9](3a73fc9))
* **deps:** bump github.com/opentdf/platform/sdk from 0.4.7 to 0.5.0 in
/service ([#2473](#2473))
([ad37476](ad37476))
* **deps:** bump the external group across 1 directory with 2 updates
([#2450](#2450))
([9d8d1f1](9d8d1f1))
* **deps:** bump the external group across 1 directory with 2 updates
([#2472](#2472))
([d45b3c8](d45b3c8))
* only request a token when near expiration
([#2370](#2370))
([556d95e](556d95e))
* **policy:** fix casing bug and get provider config on update.
([#2403](#2403))
([a52b8f9](a52b8f9))
* **policy:** properly formatted pem in test fixtures
([#2409](#2409))
([54ffd23](54ffd23))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: opentdf-automation[bot] <149537512+opentdf-automation[bot]@users.noreply.github.com>