
HADOOP-19747. Switch to at.yawk.lz4:lz4-java:1.9.0 due to CVE-2025-12183#8116

Merged
steveloughran merged 1 commit into apache:trunk from pjfanning:HADOOP-19747-lz4
Dec 4, 2025

Conversation

@pjfanning
Member

Description of PR

HADOOP-19747
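For context, the change amounts to swapping the Maven coordinates of the lz4-java dependency. A sketch of what that looks like in a POM (the exact file and property layout in Hadoop's build are assumptions; Hadoop typically centralizes versions in hadoop-project/pom.xml):

```xml
<!-- Sketch only: replaces the original org.lz4 coordinates with the
     at.yawk.lz4 fork that ships the fix for CVE-2025-12183. -->
<dependency>
  <!-- Before: <groupId>org.lz4</groupId>, <version>1.8.0</version> (assumed) -->
  <groupId>at.yawk.lz4</groupId>
  <artifactId>lz4-java</artifactId>
  <version>1.9.0</version>
</dependency>
```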

How was this patch tested?

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

Contributor

@steveloughran steveloughran left a comment


+1

In the long term, it is recommended to switch to .safeDecompressor(), which is not vulnerable and provides better performance (despite the name).
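For illustration, a minimal round trip through the safe decompressor might look like the sketch below. Assumptions: it uses the lz4-java API under the original `net.jpountz.lz4` package names, and the `at.yawk.lz4` fork is assumed to keep those names as a drop-in replacement; the class name `Lz4SafeRoundTrip` is made up for the example.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4SafeDecompressor;

public class Lz4SafeRoundTrip {
    public static void main(String[] args) {
        LZ4Factory factory = LZ4Factory.fastestInstance();
        byte[] original = "hello lz4".getBytes(StandardCharsets.UTF_8);

        // Compress with the default fast compressor.
        LZ4Compressor compressor = factory.fastCompressor();
        byte[] compressed = compressor.compress(original);

        // The "safe" decompressor bounds-checks writes against the
        // destination buffer instead of trusting a caller-supplied
        // decompressed length, which is why it is the recommended API.
        LZ4SafeDecompressor decompressor = factory.safeDecompressor();
        byte[] dest = new byte[original.length];
        int written = decompressor.decompress(compressed, dest);

        System.out.println(written == original.length
                && Arrays.equals(dest, original));
    }
}
```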

@pjfanning
Member Author

@steveloughran I think Hadoop uses the safe decompressor already

@steveloughran steveloughran merged commit 8602fe7 into apache:trunk Dec 4, 2025
1 of 2 checks passed
steveloughran pushed a commit that referenced this pull request Dec 4, 2025
 (#8116)

The Hadoop decompressor org.apache.hadoop.io.compress.lz4.Lz4Decompressor
obtains its decompressor via a call to

    LZ4Factory.fastestInstance().safeDecompressor()

and so is not directly vulnerable to CVE-2025-12183.

see https://sites.google.com/sonatype.com/vulnerabilities/cve-2025-12183

Contributed by PJ Fanning
@pjfanning pjfanning deleted the HADOOP-19747-lz4 branch December 4, 2025 15:08
@pjfanning
Member Author

@steveloughran do you think it is ok to backport this to the 3.4 branch?

@pan3793
Copy link
Member

pan3793 commented Dec 16, 2025

Just FYI: lz4 is famous for its ultra-fast speed, so this upgrade is not free; my tests show it has a performance impact.

steveloughran pushed a commit that referenced this pull request Jan 5, 2026
asf-gitbox-commits pushed a commit that referenced this pull request Jan 5, 2026
4 participants