[release_v2190] release notes template #3731
Merged
AlexanderDokuchaev merged 20 commits into openvinotoolkit:release_v2190 (Nov 28, 2025)
Conversation
Collaborator
@l-bat , please help with the list of new notebooks with NNCF. I noticed the following only:
l-bat
approved these changes
Nov 11, 2025
ReleaseNotes.md (Outdated)
- General:
  - ...
- Features:
  - The histogram aggregator was introduced, improving metrics for a number of classification models with PTQ.
ReleaseNotes.md (Outdated)
- Post-training Quantization:
  - Breaking changes:
    - (OpenVINO) The `nncf.CompressWeightsMode.E2M1` `mode` option is renamed to `nncf.CompressWeightsMode.MXFP4`.
    - ...
  - Features:
    - The histogram aggregator was introduced, improving metrics for a number of classification models with PTQ.
    - (OpenVINO) Introduced several new compression modes in `nncf.CompressWeightsMode`: `MXFP8`, `FP8`, and `FP4`. These can be used as the `mode` option in `nncf.compress_weights()` to apply the corresponding MXFP8, FP8, or FP4 precisions (experimental).
  - Fixes:
    - ...
  - Improvements:
    - Maximum memory consumption during statistic collection has been reduced by releasing model output memory before the next statistic collection inference call.
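The histogram aggregator named in the notes above can be illustrated with a small sketch. This is not NNCF's implementation — the class name, bin count, and quantile logic here are all made up for illustration — but it shows the general idea: accumulate activation values from many calibration batches into fixed bins, then read a quantile back out to pick a quantization range.

```python
class HistogramAggregator:
    """Illustrative sketch (not NNCF code): accumulate values into fixed
    bins across batches, then estimate a quantile for range selection."""

    def __init__(self, lo: float, hi: float, n_bins: int = 256):
        self.lo, self.hi, self.n = lo, hi, n_bins
        self.width = (hi - lo) / n_bins
        self.counts = [0] * n_bins

    def update(self, values) -> None:
        # Clamp out-of-range values into the edge bins.
        for v in values:
            idx = min(self.n - 1, max(0, int((v - self.lo) / self.width)))
            self.counts[idx] += 1

    def quantile(self, q: float) -> float:
        # Walk the cumulative histogram and return the upper edge of the
        # first bin where the cumulative count reaches q * total.
        total = sum(self.counts)
        cum = 0
        for i, c in enumerate(self.counts):
            cum += c
            if cum >= q * total:
                return self.lo + (i + 1) * self.width
        return self.hi
```

Compared with keeping raw min/max per batch, a histogram lets the aggregator discard outliers (e.g. use the 99.99th percentile as the range edge), which is the kind of change that can improve PTQ accuracy on classification models.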
daniil-lyakhov
approved these changes
Nov 11, 2025
ljaljushkin
approved these changes
Nov 11, 2025
ReleaseNotes.md (Outdated)
- Features:
  - The histogram aggregator was introduced, improving metrics for a number of classification models with PTQ.
  - (OpenVINO) Introduced several new compression modes in `nncf.CompressWeightsMode`: `MXFP8`, `FP8`, and `FP4`. These can be used as the `mode` option in `nncf.compress_weights()` to apply the corresponding MXFP8, FP8, or FP4 precisions (experimental).
  - The weight compression bitwidth distribution table now also displays the group size value for each compression data type.
- Known issues:
  - ...
- Other:
  - Refined the handling of layers whose channel size is not divisible by the group size during weight compression. By default, an error is now raised, and the error message suggests providing a different group size value or using `GroupSizeFallbackMode.ADJUST` to automatically adjust the group size for problematic layers.
- Fixes:
  - Added an ignored pattern for the position embedding layer in the Segment Anything model.
- Improvements:
  - Maximum memory consumption during statistic collection has been reduced by releasing model output memory before the next statistic collection inference call.
  - Reduced peak memory footprint for the Bias Correction algorithm.
  - (OpenVINO) Reduced the time (by up to 3x) and memory (by up to 1.5x) it takes to compress models to the `MXFP4` data type.
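The `GroupSizeFallbackMode.ADJUST` behavior described in the notes above can be sketched as picking a compatible group size whenever a layer's channel size is not divisible by the requested one. The helper below is hypothetical — it is not NNCF's implementation and its exact fallback policy may differ — but it captures one natural strategy: take the largest divisor of the channel size that does not exceed the requested group size.

```python
def adjust_group_size(channel_size: int, group_size: int) -> int:
    """Hypothetical sketch of an ADJUST-style fallback: return the largest
    divisor of channel_size that is <= the requested group_size, so every
    group covers the channel dimension exactly."""
    for g in range(min(group_size, channel_size), 0, -1):
        if channel_size % g == 0:
            return g
    return 1  # unreachable: g == 1 always divides channel_size
```

Under this sketch a layer with 768 channels keeps a requested group size of 128 unchanged, while a layer with 300 channels would fall back to 100 instead of raising an error.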
nikita-savelyevv
approved these changes
Nov 12, 2025
anzr299
approved these changes
Nov 12, 2025
andrey-churkin
approved these changes
Nov 15, 2025
- The histogram aggregator was introduced, improving metrics for a number of classification models with PTQ.
- (OpenVINO) Introduced several new compression modes in `nncf.CompressWeightsMode`: `MXFP8`, `FP8`, and `FP4`. These can be used as the `mode` option in `nncf.compress_weights()` to apply the corresponding MXFP8, FP8, or FP4 precisions (experimental).
- The weight compression bitwidth distribution table now also displays the group size value for each compression data type.
- (ONNX) Support for the SmoothQuant algorithm has been added to the ONNX backend for INT8 quantization.
ReleaseNotes.md (Outdated)
- (OpenVINO) Introduced several new compression modes in `nncf.CompressWeightsMode`: `MXFP8`, `FP8`, and `FP4`. These can be used as the `mode` option in `nncf.compress_weights()` to apply the corresponding MXFP8, FP8, or FP4 precisions (experimental).
- The weight compression bitwidth distribution table now also displays the group size value for each compression data type.
- (ONNX) Support for the SmoothQuant algorithm has been added to the ONNX backend for INT8 quantization.
- (ONNX) A new transformation has been added to optimize models by folding `QuantizeLinear` nodes with constant inputs into precomputed, quantized initializers. This behavior is controlled by the `COMPRESS_WEIGHTS` backend parameter, which is now enabled (`True`) by default.
- (ONNX) Support has been added for applying the Fast Bias/Bias Correction algorithm to `MatMul` + `Add` subgraphs where one of the inputs to the `Add` operation is a constant. Previously, these cases were skipped because the `MatMul` operation was not recognized as having a bias, preventing the algorithm from being applied.
- Fixes:
  - Added an ignored pattern for the position embedding layer in the Segment Anything model.
  - (ONNX) Fixed incorrect input handling for the `MatMulNBits` operation that previously caused graph breaks.
  - (ONNX) Resolved an issue with INT4 weight compression in the `Gemm` operation when `transB=1`.
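The `QuantizeLinear` constant-folding transformation described above amounts to evaluating the quantization offline: when the node's input is a constant, the quantized tensor can be precomputed once and stored as an initializer, so no `QuantizeLinear` runs at inference time. The sketch below shows the per-element affine quantization ONNX's `QuantizeLinear` defines (scale, zero point, saturation to the integer range); it is an illustration of the concept, not NNCF's actual transformation code.

```python
def fold_quantize_linear(weights, scale, zero_point, qmin=-128, qmax=127):
    """Precompute q = saturate(round(w / scale) + zero_point) for a constant
    input, so the QuantizeLinear node can be replaced by an initializer.
    Illustrative sketch only; Python's round() is round-half-to-even,
    matching the rounding ONNX specifies for QuantizeLinear."""
    return [
        int(max(qmin, min(qmax, round(w / scale) + zero_point)))
        for w in weights
    ]
```

Folding this at conversion time both shrinks the graph and stores the weight in its compressed integer form, which is why the notes tie it to the `COMPRESS_WEIGHTS` backend parameter.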
andreyanufr
approved these changes
Nov 21, 2025
ReleaseNotes.md (Outdated)
- Added an ignored pattern for the position embedding layer in the Segment Anything model.
- (ONNX) Fixed incorrect input handling for the `MatMulNBits` operation that previously caused graph breaks.
- (ONNX) Resolved an issue with INT4 weight compression in the `Gemm` operation when `transB=1`.
- Fixed a typo in the `_get_smooth_quant_param_grid()` method.
Histogram Aggregator description update
MaximProshin
approved these changes
Nov 26, 2025
Collaborator
MaximProshin
left a comment
LGTM
@AlexanderDokuchaev , feel free to merge it
Merged commit 89c6a9e into openvinotoolkit:release_v2190
9 checks passed
AlexanderDokuchaev
added a commit
to AlexanderDokuchaev/nncf
that referenced
this pull request
Dec 1, 2025
### Reason for changes

Upcoming release

### Related tickets

176350

---------

Co-authored-by: Liubov Talamanova <liubov.talamanova@intel.com>
Co-authored-by: Daniil Lyakhov <daniil.lyakhov@intel.com>
Co-authored-by: Nikita Savelyev <nikita.savelyev@intel.com>
Co-authored-by: Andrey Churkin <andrey.churkin@intel.com>
Co-authored-by: Maksim Proshin <maksim.proshin@intel.com>
Co-authored-by: Lyalyushkin Nikolay <nikolay.lyalyushkin@intel.com>