
Reduce excessive off-heap memory caused by occasional large key #3238


Open
wants to merge 1 commit into main from opti_largekey

Conversation

yeyinglang

@yeyinglang yeyinglang commented Apr 1, 2025

Closes #3237

  1. When the readBuffer is too small for the incoming payload, a temporary large buffer is created.
  2. Once the current command has been processed, the temporary large buffer is released and the readBuffer is restored.
  3. If an exception prevents the large buffer from being released during processing, it is released at the start of the next request.
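The three steps above can be sketched roughly as follows. This is a hypothetical, simplified illustration using plain `java.nio.ByteBuffer`; the names (`DEFAULT_CAPACITY`, `ensureCapacity`, `onCommandComplete`) are invented for the sketch and are not the actual Lettuce `CommandHandler` internals:

```java
import java.nio.ByteBuffer;

// Sketch of the buffer-swap strategy: keep a small default read buffer,
// temporarily switch to a large one for an occasional large key, and
// restore the small buffer once the command completes.
public class TemporaryBufferSketch {

    static final int DEFAULT_CAPACITY = 1024;

    private final ByteBuffer defaultBuffer = ByteBuffer.allocate(DEFAULT_CAPACITY);
    private ByteBuffer readBuffer = defaultBuffer;

    // Step 1: when the payload exceeds the current buffer, allocate a
    // temporary large buffer sized for this payload only, carrying over
    // any bytes already read.
    ByteBuffer ensureCapacity(int required) {
        if (required > readBuffer.remaining()) {
            ByteBuffer large = ByteBuffer.allocate(Math.max(required, DEFAULT_CAPACITY));
            readBuffer.flip();
            large.put(readBuffer); // copy bytes read so far
            readBuffer = large;
        }
        return readBuffer;
    }

    // Steps 2 and 3: after the command completes -- or at the start of the
    // next request, if an exception skipped this call -- drop the temporary
    // buffer and fall back to the small default one.
    void onCommandComplete() {
        if (readBuffer != defaultBuffer) {
            readBuffer = defaultBuffer;
            readBuffer.clear();
        }
    }

    int capacity() {
        return readBuffer.capacity();
    }

    public static void main(String[] args) {
        TemporaryBufferSketch handler = new TemporaryBufferSketch();
        handler.ensureCapacity(1 << 20);          // occasional 1 MiB key
        System.out.println(handler.capacity());   // prints 1048576
        handler.onCommandComplete();
        System.out.println(handler.capacity());   // prints 1024
    }
}
```

Because only the small default buffer is kept alive between requests, a single large key no longer pins a large off-heap allocation for the lifetime of the connection.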

Make sure that:

  • You have read the contribution guidelines.
  • You have created a feature request first to discuss your contribution intent. Please reference the feature request ticket number in the pull request.
  • You applied code formatting rules using the mvn formatter:format target. Don’t submit any formatting related changes.
  • You submit test cases (unit or integration tests) that back your changes.

@yeyinglang
Author

yeyinglang commented Apr 1, 2025

Benchmark Setup

JMH version: 1.21
VM version: JDK 1.8.0_442, OpenJDK 64-Bit Server VM, 25.442-b06
OS: Mac OS Sequoia 15.1
Arch: Apple M3 Pro
VM invoker: /Library/Java/JavaVirtualMachines/temurin-8.jdk/Contents/Home/jre/bin/java
Warmup: 5 iterations, 10 s each
Measurement: 5 iterations, 10 s each
Timeout: 2 s per iteration
Threads: 1 thread, will synchronize iterations
Benchmark mode: Average time, time/op

Before

Benchmark Mode Cnt Score Error Units
CommandHandlerBenchmark.measureNettyWriteAndRead avgt 5 187.513 ± 18.123 ns/op
CommandHandlerBenchmark.measureNettyWriteAndReadBatch1 avgt 5 139.423 ± 3.800 ns/op
CommandHandlerBenchmark.measureNettyWriteAndReadBatch10 avgt 5 1229.034 ± 126.325 ns/op
CommandHandlerBenchmark.measureNettyWriteAndReadBatch100 avgt 5 14325.850 ± 568.049 ns/op
CommandHandlerBenchmark.measureNettyWriteAndReadBatch1000 avgt 5 134176.816 ± 4066.024 ns/op

After

Benchmark Mode Cnt Score Error Units
CommandHandlerBenchmark.measureNettyWriteAndRead avgt 5 179.508 ± 27.306 ns/op
CommandHandlerBenchmark.measureNettyWriteAndReadBatch1 avgt 5 142.241 ± 17.666 ns/op
CommandHandlerBenchmark.measureNettyWriteAndReadBatch10 avgt 5 1359.862 ± 59.279 ns/op
CommandHandlerBenchmark.measureNettyWriteAndReadBatch100 avgt 5 13541.736 ± 721.837 ns/op
CommandHandlerBenchmark.measureNettyWriteAndReadBatch1000 avgt 5 138272.823 ± 8509.574 ns/op

@yeyinglang yeyinglang force-pushed the opti_largekey branch 3 times, most recently from b478550 to e8c6aae on April 5, 2025 13:16
@tishun tishun added this to the 6.7.0.RELEASE milestone May 28, 2025
@tishun tishun added the type: improvement label May 28, 2025
@tishun tishun removed this from the 6.7.0.RELEASE milestone May 29, 2025
Labels
status: waiting-for-triage, type: improvement
Development

Successfully merging this pull request may close these issues.

Optimize occasional large keys causing excessive off-heap memory usage
2 participants