Comparing changes

base repository: klauspost/compress
base: v1.18.1
head repository: klauspost/compress
compare: v1.18.2
  • 4 commits
  • 9 files changed
  • 3 contributors

Commits on Oct 23, 2025

  1. Update changelog

    klauspost authored Oct 23, 2025
    commit e0b47ff

Commits on Oct 24, 2025

  1. flate: reduce stateless allocations (#1106)

    After updating Go to v1.24+, a sharp increase in CPU utilization was detected. A heap profile revealed increased memory allocations in the Write and Close methods of the stateless gzip.Writer mode. This PR optimizes the problem area by using a sync.Pool and by allocating the tokens object lazily (a sketch of the pattern follows this commit entry).
    
    Benchmarks:
    
    BEFORE
    
    ```
    BenchmarkEncodeDigitsSL1e4-12              10141            115946 ns/op          86.25 MB/s      542379 B/op          3 allocs/op
    BenchmarkEncodeDigitsSL1e5-12               1602            730674 ns/op         136.86 MB/s      541377 B/op          2 allocs/op
    BenchmarkEncodeDigitsSL1e6-12                175           6851506 ns/op         145.95 MB/s      541542 B/op          2 allocs/op
    BenchmarkEncodeTwainSL1e4-12                9708            131564 ns/op          76.01 MB/s      542146 B/op          3 allocs/op
    BenchmarkEncodeTwainSL1e5-12                1663            684854 ns/op         146.02 MB/s      541463 B/op          2 allocs/op
    BenchmarkEncodeTwainSL1e6-12                 177           6435648 ns/op         155.38 MB/s      541654 B/op          2 allocs/op
    ```
    
    AFTER
    
    ```
    BenchmarkEncodeDigitsSL1e4-12              34747             33800 ns/op         295.86 MB/s           8 B/op          0 allocs/op
    BenchmarkEncodeDigitsSL1e5-12               1771            640723 ns/op         156.07 MB/s         160 B/op          0 allocs/op
    BenchmarkEncodeDigitsSL1e6-12                181           6759226 ns/op         147.95 MB/s        1573 B/op          0 allocs/op
    BenchmarkEncodeTwainSL1e4-12               35294             35304 ns/op         283.26 MB/s           8 B/op          0 allocs/op
    BenchmarkEncodeTwainSL1e5-12                1939            585755 ns/op         170.72 MB/s         146 B/op          0 allocs/op
    BenchmarkEncodeTwainSL1e6-12                 181           6505389 ns/op         153.72 MB/s        1573 B/op          0 allocs/op
    ```
    
    <!-- This is an auto-generated comment: release notes by coderabbit.ai -->
    ## Summary by CodeRabbit
    
    - **Refactor**
      - Optimized compression internals to reuse buffers via pooling, improving throughput and reducing memory use during repeated operations.
      - Enhanced performance and consistency for both dictionary and non-dictionary compression paths across large blocks.
      - No changes to public APIs or user-facing behavior; workflows remain the same.
      - Users may see faster compression and lower memory footprint under sustained/high-volume workloads.
    <!-- end of auto-generated comment: release notes by coderabbit.ai -->
    RXamzin authored Oct 24, 2025
    commit 701ca28
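
A minimal sketch of the pooling pattern described in the commit above, assuming a simplified shape for the per-call state; the names (`scratch`, `scratchPool`, `compressBlock`) are illustrative and are not the identifiers used in the actual PR.

```go
package flatesketch

import "sync"

// scratch holds the per-call working state that a stateless writer would
// otherwise allocate on every Write/Close.
type scratch struct {
	window []byte
	tokens []uint32 // token buffer, allocated lazily on first use
}

// scratchPool reuses scratch objects across calls instead of allocating
// fresh ones each time.
var scratchPool = sync.Pool{
	New: func() any { return &scratch{window: make([]byte, 32<<10)} },
}

// compressBlock borrows working state from the pool, uses it, and returns
// it when done, so steady-state calls allocate almost nothing.
func compressBlock(dst, src []byte) []byte {
	s := scratchPool.Get().(*scratch)
	defer scratchPool.Put(s)

	// Lazy allocation of the tokens object: only pay for it the first
	// time this scratch instance actually needs it.
	if s.tokens == nil {
		s.tokens = make([]uint32, 0, 32<<10)
	}
	s.tokens = s.tokens[:0]

	// ... tokenize src into s.tokens and emit the compressed block into dst ...
	_ = src
	return dst
}
```

A pool is a natural fit here because a stateless writer keeps no per-writer state between calls, so pooling is the main way to amortize buffer allocations across Write/Close invocations.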

Commits on Nov 2, 2025

  1. build(deps): bump github/codeql-action in the github-actions group (#1111)
    
    Bumps the github-actions group with 1 update: [github/codeql-action](https://github.com/github/codeql-action).
    
    
    Updates `github/codeql-action` from 3.30.5 to 4.31.2
    - [Release notes](https://github.com/github/codeql-action/releases)
    - [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
    - [Commits](github/codeql-action@3599b3b...0499de3)
    
    ---
    updated-dependencies:
    - dependency-name: github/codeql-action
      dependency-version: 4.31.2
      dependency-type: direct:production
      update-type: version-update:semver-major
      dependency-group: github-actions
    ...
    
    Signed-off-by: dependabot[bot] <support@github.com>
    Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
    dependabot[bot] authored Nov 2, 2025
    commit 503c028

Commits on Dec 1, 2025

  1. Fix invalid encoding on level 9 with single value input (#1115)

    * Fix invalid encoding on level 9 with single value input
    
    With single-value input and a full block write (>=64K), the indexing function would overflow a uint16 counter to 0.

    This made it impossible to generate a valid Huffman table for the literal size prediction.

    In turn, the entire block would be output as literals, since the cost of the value was estimated at 0 bits.

    That meant EOB could not be encoded by the bit writer, since there were no matches; previously this was being satisfied with "filling".
    
    Fixes:
    
    1. Never encode more than `maxFlateBlockTokens` (32K) for the literal estimate table (see the sketch after this commit entry).
    2. Always include EOB explicitly, in case literals somehow slip through.
    3. Add a regression test that writes a large single-value input; existing tests used copy, which does smaller writes.
    
    Fixes #1114
    
    * Retract v1.18.1
    klauspost authored Dec 1, 2025
    commit 444d5d9
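
A minimal sketch of the overflow described in the commit above and of fix (1). It assumes uint16 counters for the literal histogram; `literalHistogram` is a hypothetical helper, and the 32K value for `maxFlateBlockTokens` simply mirrors the commit message.

```go
package flatesketch

// maxFlateBlockTokens mirrors the 32K cap mentioned in the commit message.
const maxFlateBlockTokens = 1 << 15

// literalHistogram is a hypothetical helper that counts byte frequencies
// for the literal size estimate. With uint16 counters, a single repeated
// byte across a full >=64K block increments one counter 65536 times,
// wrapping it back to 0 and making a valid Huffman table impossible.
func literalHistogram(src []byte) [256]uint16 {
	// Fix (1): never count more than maxFlateBlockTokens bytes, so a
	// uint16 counter can never wrap (32K < 65536).
	if len(src) > maxFlateBlockTokens {
		src = src[:maxFlateBlockTokens]
	}
	var hist [256]uint16
	for _, b := range src {
		hist[b]++
	}
	return hist
}
```

With the cap in place the estimate table always keeps a non-zero count for the repeated value, and fix (2) ensures EOB still gets a code even if a block somehow ends up as pure literals.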