
Releases: intel/intel-extension-for-pytorch

Intel® Extension for PyTorch* v2.5.10+xpu Release Notes

20 Dec 03:41
2bca097

2.5.10+xpu

We are excited to announce the release of Intel® Extension for PyTorch* v2.5.10+xpu. This release supports Intel® GPU platforms (Intel® Data Center GPU Max Series, Intel® Arc™ Graphics family, Intel® Core™ Ultra Processors with Intel® Arc™ Graphics, Intel® Core™ Ultra Series 2 with Intel® Arc™ Graphics and Intel® Data Center GPU Flex Series) and is based on PyTorch* 2.5.1.

Highlights

  • Intel® oneDNN v3.6 integration

  • Intel® oneAPI Base Toolkit 2025.0.1 compatibility

  • Intel® Arc™ B-series Graphics support on Windows (prototype)

  • Large Language Model (LLM) optimization

    Intel® Extension for PyTorch* enhances KV Cache management to cover both the Dynamic Cache and Static Cache methods defined by Hugging Face, which reduces computation time, improves response rates, and thus optimizes model performance across various generative tasks. Intel® Extension for PyTorch* also supports new LLM features: speculative decoding, which optimizes inference by making educated guesses about future tokens while generating the current token; sliding window attention, which uses a fixed-size window to limit the attention span of each token and thereby significantly improves processing speed and efficiency on long documents; and multi-round conversations, which support a natural human dialogue where information is exchanged back and forth over multiple turns.

    In addition, Intel® Extension for PyTorch* optimizes more LLM models for inference and fine-tuning. A full list of optimized models can be found at LLM Optimizations Overview.
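
    Below is a minimal, hedged sketch of selecting between the two Hugging Face cache methods on top of a model prepared with ipex.llm.optimize; the model id is only a placeholder, and the cache_implementation switch is the standard Hugging Face generate() argument rather than an Intel® Extension for PyTorch* specific API:

      import torch
      import intel_extension_for_pytorch as ipex
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "meta-llama/Llama-2-7b-hf"  # placeholder model id for illustration
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).eval().to("xpu")
      model = ipex.llm.optimize(model, dtype=torch.float16, device="xpu")

      inputs = tokenizer("What is the capital of France?", return_tensors="pt").to("xpu")

      # The default generation path relies on Hugging Face's Dynamic Cache ...
      out_dynamic = model.generate(**inputs, max_new_tokens=32)
      # ... while the Static Cache variant is requested via the standard generate() switch.
      out_static = model.generate(**inputs, max_new_tokens=32, cache_implementation="static")
      print(tokenizer.decode(out_static[0], skip_special_tokens=True))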

  • Serving framework support

    Typical LLM serving frameworks, including vLLM and TGI, can work together with Intel® Extension for PyTorch* on Intel® GPU platforms on Linux (intensively verified on Intel® Data Center GPU Max Series). Support for low precision, such as INT4 Weight Only Quantization based on the Generalized Post-Training Quantization (GPTQ) algorithm, is enhanced in this release.
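
    As a rough orientation only, the sketch below uses vLLM's standard offline-inference entry points; the model id is a placeholder, and the device="xpu" argument is an assumption that depends on an XPU-enabled vLLM build rather than a documented Intel® Extension for PyTorch* API:

      from vllm import LLM, SamplingParams

      # Placeholder model id; device="xpu" assumes an XPU-enabled vLLM build.
      llm = LLM(model="meta-llama/Llama-2-7b-hf", dtype="float16", device="xpu")
      params = SamplingParams(temperature=0.8, max_tokens=64)
      outputs = llm.generate(["What is the capital of France?"], params)
      print(outputs[0].outputs[0].text)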

  • Beta support of full fine-tuning and LoRA PEFT with mixed precision

    Intel® Extension for PyTorch* enhances this feature for optimizing typical LLM models and brings it to Beta quality.
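
    A minimal training-loop sketch follows, assuming a placeholder model id, illustrative LoRA target modules, and a user-provided dataloader; pairing the model with its optimizer via ipex.optimize for BF16 training is the documented usage, while the xpu autocast context is an assumption about the mixed-precision setup:

      import torch
      import intel_extension_for_pytorch as ipex
      from transformers import AutoModelForCausalLM
      from peft import LoraConfig, get_peft_model

      # Placeholder model id and illustrative LoRA settings.
      model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf").to("xpu")
      lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                               target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
      model = get_peft_model(model, lora_config)

      optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
      # Prepare the model and optimizer together for BF16 mixed-precision training.
      model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

      for batch in dataloader:  # dataloader is assumed to be defined by the user
          with torch.autocast(device_type="xpu", dtype=torch.bfloat16):
              loss = model(**batch).loss
          loss.backward()
          optimizer.step()
          optimizer.zero_grad()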

  • Kineto Profiler Support

    Intel® Extension for PyTorch* removes its own redundant implementation of this feature, as Kineto Profiler support based on PTI for Intel® GPU platforms is available in PyTorch* 2.5.
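
    Profiling therefore goes through the stock PyTorch* profiler. A minimal sketch, assuming the XPU activity type exposed by PyTorch* 2.5:

      import torch
      from torch.profiler import profile, ProfilerActivity

      x = torch.randn(1024, 1024, device="xpu")
      w = torch.randn(1024, 1024, device="xpu")

      # Collect both host-side and XPU device-side events via Kineto/PTI.
      with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.XPU]) as prof:
          y = x @ w
          torch.xpu.synchronize()
      print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))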

  • Hybrid ATen operator implementation

    Intel® Extension for PyTorch* uses the ATen operators available in Torch XPU Operators wherever possible and overrides only a limited set of operators for better performance and broader data type support.

Breaking Changes

  • Block format support: oneDNN Block format integration support has been removed as of v2.5.10+xpu.

Known Issues

Please refer to the Known Issues webpage.

Intel® Extension for PyTorch* v2.5.0+cpu Release Notes

05 Nov 03:50
6973d57

We are excited to announce the release of Intel® Extension for PyTorch* 2.5.0+cpu which accompanies PyTorch 2.5. This release mainly brings you support for Llama 3.2, optimizations for the newly launched Intel® Xeon® 6 P-core platform, GPTQ/AWQ format support, and the latest optimizations to push better performance for LLM models. This release also includes a set of bug fixes and small optimizations. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try this release and share your feedback to help us further improve this product.

Highlights

  • Llama 3.2 support
    Meta has newly released Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B) and lightweight, text-only models (1B and 3B). Intel® Extension for PyTorch* has provided support for Llama 3.2 since its launch date through an early release version, and now supports it in this official release.
  • Optimization for Intel® Xeon® 6
    Intel® Xeon® 6 delivers new degrees of performance with more cores, a choice of microarchitecture, additional memory bandwidth, and exceptional input/output (I/O) across a range of workloads. Intel® Extension for PyTorch* provides dedicated optimizations for this new processor family for features like Multiplexed Rank DIMM (MRDIMM), the SNC=3 scenario, etc.
  • Large Language Model (LLM) optimization:
    Intel® Extension for PyTorch* provides more feature support for weight-only quantization, including GPTQ/AWQ format support, symmetric quantization of activation and weight, and chunked prefill/prefix prefill support in the LLM module API. These features enable better adoption of community model weights and provide better performance for low-precision scenarios (see the hedged sketch after this list). This release also extends the optimized models to include the newly published Llama 3.2 vision models. A full list of optimized models can be found at LLM optimization.
  • Bug fixing and other optimization
    • Optimized the performance of the IndirectAccessKVCacheAttention kernel
      #3185 #3209 #3214 #3218 #3248
    • Fixed the Segmentation fault in the IndirectAccessKVCacheAttention kernel #3246
    • Fixed the correctness issue in the PagedAttention kernel for Llama-68M-Chat-v1 #3307
    • Fixed the support in ipex.llm.optimize to ensure model.generate returns the correct output type when return_dict_in_generate is set to True. #3333
    • Optimized the performance of the Flash Attention kernel #3291
    • Upgraded oneDNN to v3.6 #3305
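
  The sketch below illustrates the weight-only quantization flow referenced above; the model id, the qconfig helper, and the low_precision_checkpoint argument name are assumptions for illustration, so please consult the LLM optimization documentation for the authoritative API:

    import torch
    import intel_extension_for_pytorch as ipex
    from transformers import AutoModelForCausalLM

    # Placeholder model id; gptq_state_dict is assumed to hold a checkpoint saved in GPTQ format.
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B").eval()

    # Assumed qconfig helper for INT4 weight-only quantization.
    qconfig = ipex.quantization.get_weight_only_quant_qconfig_mapping(
        weight_dtype=ipex.quantization.WoqWeightDtype.INT4
    )
    model = ipex.llm.optimize(
        model,
        dtype=torch.bfloat16,
        quantization_config=qconfig,
        low_precision_checkpoint=gptq_state_dict,
    )
    model.generate(YOUR_GENERATION_PARAMS)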

Intel® Extension for PyTorch* v2.3.110+xpu Release Notes

10 Sep 11:28
a28fe4f

2.3.110+xpu

We are excited to announce the release of Intel® Extension for PyTorch* v2.3.110+xpu. This release supports Intel® GPU platforms (Intel® Data Center GPU Max Series, Intel® Arc™ A-Series Graphics, Intel® Core™ Ultra Processors with Intel® Arc™ Graphics, Intel® Core™ Ultra Series 2 with Intel® Arc™ Graphics and Intel® Data Center GPU Flex Series) and is based on PyTorch* 2.3.1.

Highlights

  • oneDNN v3.5.3 integration

  • Intel® oneAPI Base Toolkit 2024.2.1 compatibility

  • Intel® Core™ Ultra Series 2 with Intel® Arc™ Graphics support on Windows (Prototype)

  • Large Language Model (LLM) optimization

    Intel® Extension for PyTorch* provides a new dedicated module, ipex.llm, to host APIs specific to Large Language Models (LLMs). With ipex.llm, Intel® Extension for PyTorch* provides comprehensive LLM optimization for the FP16 and INT4 data types. Specifically for low precision, Weight-Only Quantization is supported for various scenarios. Users can also run Intel® Extension for PyTorch* with Tensor Parallel to fit multi-rank or multi-node scenarios and get even better performance.

    A typical API under this new module is ipex.llm.optimize, which is designed to optimize transformer-based models within frontend Python modules, with a particular focus on Large Language Models (LLMs). It provides optimizations at both the model level and the content-generation level. ipex.llm.optimize is an upgraded API that replaces the previous ipex.optimize_transformers and brings a more consistent LLM experience and performance. Below is a simple example of ipex.llm.optimize for FP16 inference:

      import torch
      import intel_extension_for_pytorch as ipex
      import transformers

      # Load the Hugging Face model in eager mode.
      model = transformers.AutoModelForCausalLM.from_pretrained(model_name_or_path).eval()

      # Apply the LLM-specific optimizations for FP16 inference on Intel GPU.
      dtype = torch.float16
      model = ipex.llm.optimize(model, dtype=dtype, device="xpu")

      model.generate(YOUR_GENERATION_PARAMS)

    More examples of this API can be found at LLM optimization API.

    Besides that, we optimized more LLM inference models. A full list of optimized models can be found at LLM Optimizations Overview.

  • Serving framework support

    Typical LLM serving frameworks, including vLLM and TGI, can work together with Intel® Extension for PyTorch* on Intel® GPU platforms (Intel® Data Center GPU Max Series and Intel® Arc™ A-Series Graphics). Besides the integration of LLM serving frameworks with the ipex.llm module level APIs, we enhanced the performance and quality of the underlying Intel® Extension for PyTorch* operators such as paged attention and flash attention for better end-to-end model performance.

  • Prototype support of full fine-tuning and LoRA PEFT with mixed precision

    Intel® Extension for PyTorch* also provides new capabilities to support popular recipes for both full fine-tuning and LoRA PEFT with mixed precision (BF16 and FP32). We optimized many typical LLM models, including the Llama 2 (7B and 70B), Llama 3 8B and Phi-3-Mini 3.8B model families and the Chinese model Qwen-7B, for both single-GPU and multi-GPU (distributed fine-tuning based on PyTorch FSDP) use cases, as sketched below.
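
    A minimal, hedged skeleton of the distributed setup; registering the "ccl" backend through oneCCL Bindings for PyTorch* is standard practice, while the placeholder model id and the way FSDP is pointed at an XPU device are assumptions for illustration:

      import os
      import torch
      import torch.distributed as dist
      import intel_extension_for_pytorch as ipex  # noqa: F401
      import oneccl_bindings_for_pytorch  # noqa: F401  (registers the "ccl" backend)
      from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
      from transformers import AutoModelForCausalLM

      # Launched via mpirun/torchrun; rank handling is simplified for this sketch.
      dist.init_process_group(backend="ccl")
      local_rank = int(os.environ.get("LOCAL_RANK", 0))
      torch.xpu.set_device(local_rank)

      model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder id
      model = FSDP(model.to(f"xpu:{local_rank}"), device_id=torch.device("xpu", local_rank))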

Breaking Changes

  • Block format support: oneDNN Block format integration support is being deprecated and will no longer be available starting from the release after v2.3.110+xpu.

Known Issues

Please refer to the Known Issues webpage.

Intel® Extension for PyTorch* v2.4.0+cpu Release Notes

14 Aug 22:19
5d871a0

We are excited to announce the release of Intel® Extension for PyTorch* 2.4.0+cpu which accompanies PyTorch 2.4. This release mainly brings you support for Llama 3.1, basic support for LLM serving frameworks like vLLM/TGI, and a set of optimizations to push better performance for LLM models. This release also extends the list of optimized LLM models to a broader level and includes a set of bug fixes and small optimizations. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try this release and share your feedback to help us further improve this product.

Highlights

  • Llama 3.1 support

Meta has newly released Llama 3.1 with new features like longer context length (128K) support. Intel® Extension for PyTorch* has provided support for Llama 3.1 since its launch date through an early release version, and now supports it in this official release.

  • Serving framework support

Typical LLM serving frameworks, including vLLM and TGI, can now work together with Intel® Extension for PyTorch*, which provides optimized performance for Intel® Xeon® Scalable processors. Besides the integration of LLM serving frameworks with the ipex.llm module level APIs, we also continue to optimize the performance and quality of the underlying Intel® Extension for PyTorch* operators such as paged attention and flash attention. We also provide new support in the ipex.llm module level APIs for 4-bit AWQ quantization based on weight-only quantization, and for distributed communications with shared memory optimization.

  • Large Language Model (LLM) optimization:

Intel® Extension for PyTorch* further optimized the performance of the weight-only quantization kernels, enabled more fusion pattern variants for LLMs, and extended the optimized models to include Whisper, Falcon-11B, Qwen2 and, of course, Llama 3.1. A full list of optimized models can be found at LLM optimization.

  • Bug fixing and other optimization

    • Fixed the quantization with auto-mixed-precision (AMP) mode of Qwen-7b #3030

    • Fixed the illegal memory access issue in the Flash Attention kernel #2987

    • Re-structured the paths of LLM example scripts #3080

    • Upgraded oneDNN to v3.5.3 #3160

    • Misc fix and enhancement #3079 #3116

Full Changelog: v2.3.0+cpu...v2.4.0+cpu

Intel® Extension for PyTorch* v2.1.40+xpu Release Notes

12 Aug 02:58
6c9c6a1

2.1.40+xpu

We are excited to announce the release of Intel® Extension for PyTorch* v2.1.40+xpu. This is a minor release which supports Intel® GPU platforms (Intel® Data Center GPU Flex Series, Intel® Data Center GPU Max Series, Intel® Arc™ A-Series Graphics and Intel® Core™ Ultra Processors with Intel® Arc™ Graphics) based on PyTorch* 2.1.0.

Highlights

  • Intel® oneAPI Base Toolkit 2024.2.1 compatibility
  • Intel® oneDNN v3.5 integration
  • Intel® oneCCL 2021.13.1 integration
  • Intel® Core™ Ultra Processors with Intel® Arc™ Graphics (MTL-H) support on Windows (Prototype)
  • Bug fixing and other optimization
    • Fix host memory leak #4280
    • Fix LayerNorm issue for undefined grad_input #4317
    • Replace FP64 device check method #4354
    • Fix online doc search issue #4358
    • Fix pdist unit test failure on client GPUs #4361
    • Remove primitive cache from conv fwd #4429
    • Fix sdp bwd page fault with no grad bias #4439
    • Fix implicit data conversion #4463
    • Fix compiler version parsing issue #4468
    • Fix irfft invalid descriptor #4480
    • Change condition order to fix out-of-bound access in index #4495
    • Add parameter check in embedding bag #4504
    • Add the backward implementation for rms norm #4527
    • Fix attn_mask for sdpa beam_search #4557
    • Use data_ptr template instead of force data conversion #4558
    • Workaround windows AOT image size over 2GB issue on Intel® Core™ Ultra Processors with Intel® Arc™ Graphics #4407 #4450

Known Issues

Please refer to the Known Issues webpage.

Intel® Extension for PyTorch* v2.3.100+cpu Release Notes

26 Jun 08:56
f92dcd4

Highlights

  • Added the optimization for Phi-3: #2883

  • Fixed the state_dict method patched by ipex.optimize to support DistributedDataParallel #2910

  • Fixed the linking issue in CPPSDK #2911

  • Fixed the ROPE kernel for cases where the batch size is larger than one #2928

  • Upgraded deepspeed to v0.14.3 to include the support for Phi-3 #2985

Full Changelog: v2.3.0+cpu...v2.3.100+cpu

Intel® Extension for PyTorch* v2.3.0+cpu Release Notes

13 May 01:42
d3c5244

We are excited to announce the release of Intel® Extension for PyTorch* 2.3.0+cpu which accompanies PyTorch 2.3. This release mainly brings you a new Large Language Model (LLM) feature called the module level LLM optimization API, which provides module level optimizations for commonly used LLM modules and functionalities and targets optimizing customized LLM modeling for scenarios like private models, self-customized models, LLM serving frameworks, etc. This release also extends the list of optimized LLM models to a broader level and includes a set of bug fixes and small optimizations. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try this release and share your feedback to help us further improve this product.

Highlights

  • Large Language Model (LLM) optimization

    Intel® Extension for PyTorch* provides a new feature called the module level LLM optimization API, which provides module level optimizations for commonly used LLM modules and functionalities. LLM creators can then use this new API set to replace the corresponding parts of their models themselves and reach peak performance.

    There are 3 categories of module level LLM optimization APIs in general:

    • Linear post-op APIs
    # using module init and forward
    ipex.llm.modules.linearMul
    ipex.llm.modules.linearGelu
    ipex.llm.modules.linearNewGelu
    ipex.llm.modules.linearAdd
    ipex.llm.modules.linearAddAdd
    ipex.llm.modules.linearSilu
    ipex.llm.modules.linearSiluMul
    ipex.llm.modules.linear2SiluMul
    ipex.llm.modules.linearRelu
    • Attention related APIs
    # using module init and forward
    ipex.llm.modules.RotaryEmbedding
    ipex.llm.modules.RMSNorm
    ipex.llm.modules.FastLayerNorm
    ipex.llm.modules.VarlenAttention
    ipex.llm.modules.PagedAttention
    ipex.llm.modules.IndirectAccessKVCacheAttention
    
    # using as functions
    ipex.llm.functional.rotary_embedding
    ipex.llm.functional.rms_norm
    ipex.llm.functional.fast_layer_norm
    ipex.llm.functional.indirect_access_kv_cache_attention
    ipex.llm.functional.varlen_attention
    • Generation related APIs
    # using for optimizing huggingface generation APIs with prompt sharing
    ipex.llm.generation.hf_beam_sample
    ipex.llm.generation.hf_beam_search
    ipex.llm.generation.hf_greedy_search
    ipex.llm.generation.hf_sample

    A more detailed introduction on how to apply this API set, with example code walking you through it, can be found here.
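
    As a flavor of the functional form, here is a minimal sketch of the RMSNorm replacement; the argument order (hidden states, weight, epsilon) is an assumption based on common RMSNorm signatures, so the module level API documentation remains the authoritative reference:

      import torch
      import intel_extension_for_pytorch as ipex

      hidden_states = torch.randn(1, 32, 4096)  # [batch, sequence, hidden]
      weight = torch.ones(4096)                 # learned RMSNorm scale
      # Drop-in replacement for a model's hand-written RMSNorm forward.
      out = ipex.llm.functional.rms_norm(hidden_states, weight, 1e-6)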

  • Bug fixing and other optimization

Full Changelog: v2.2.0+cpu...v2.3.0+cpu

Intel® Extension for PyTorch* v2.1.30+xpu Release Notes

30 Apr 12:54
8bd78d6

2.1.30+xpu

We are excited to announce the release of Intel® Extension for PyTorch* v2.1.30+xpu. This is an update release which supports Intel® GPU platforms (Intel® Data Center GPU Flex Series, Intel® Data Center GPU Max Series and Intel® Arc™ A-Series Graphics) based on PyTorch* 2.1.0.

Highlights

  • Intel® oneDNN v3.4.1 integration

  • Intel® oneAPI Base Toolkit 2024.1 compatibility

  • Large Language Model (LLM) optimizations for FP16 inference on Intel® Data Center GPU Max Series (Beta): Intel® Extension for PyTorch* provides many dedicated optimizations for LLM workloads on Intel® Data Center GPU Max Series in this release. At the operator level, we provide a highly efficient GEMM kernel to speed up the Linear layer and customized fused operators to reduce HBM access and kernel launch overhead. To reduce memory footprint, we define a segment KV Cache policy to save device memory and improve throughput. These optimizations enhance the existing optimized LLM FP16 models and add more Chinese LLM models such as Baichuan2-13B, ChatGLM3-6B and Qwen-7B.

  • LLM optimizations for INT4 inference on Intel® Data Center GPU Max Series and Intel® Arc™ A-Series Graphics (Prototype): Intel® Extension for PyTorch* shows remarkable performance when executing LLM models on Intel® GPUs. However, deploying such models on GPUs with limited resources is challenging due to their high computational and memory requirements. To achieve a better trade-off, a low-precision solution, e.g., INT4 weight-only quantization, is enabled to allow Llama 2 7B, GPT-J-6B and Qwen-7B to be executed efficiently on Intel® Arc™ A-Series Graphics. The same optimization gives INT4 models a 1.5x speedup in total latency compared with FP16 models with the same configuration and parameters on Intel® Data Center GPU Max Series.

  • Opt-in collective performance optimization with oneCCL Bindings for PyTorch*: This opt-in feature can be enabled by setting TORCH_LLM_ALLREDUCE=1 to provide better scale-up performance through optimized collective algorithms such as allreduce, allgather and reducescatter in Intel® oneCCL. This feature requires XeLink to be enabled for cross-card communication.
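
  A minimal sketch of the opt-in; in practice the variable is typically exported in the launching shell before mpirun/torchrun, and the script below simply assumes it is visible before the collectives are initialized:

    import os
    os.environ.setdefault("TORCH_LLM_ALLREDUCE", "1")  # opt in to the optimized collectives

    import torch
    import torch.distributed as dist
    import intel_extension_for_pytorch as ipex  # noqa: F401
    import oneccl_bindings_for_pytorch  # noqa: F401  (registers the "ccl" backend)

    dist.init_process_group(backend="ccl")  # rank/world size come from the launcher
    t = torch.ones(1024, device="xpu")
    dist.all_reduce(t)  # uses the optimized oneCCL allreduce when XeLink is available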

Known Issues

Please refer to the Known Issues webpage.

Intel® Extension for PyTorch* v2.1.20+xpu Release Notes

30 Mar 01:25
0e2bee2

2.1.20+xpu

We are excited to announce the release of Intel® Extension for PyTorch* v2.1.20+xpu. This is a minor release which supports Intel® GPU platforms (Intel® Data Center GPU Flex Series, Intel® Data Center GPU Max Series and Intel® Arc™ A-Series Graphics) based on PyTorch* 2.1.0.

Highlights

  • Intel® oneAPI Base Toolkit 2024.1 compatibility
  • Intel® oneDNN v3.4 integration
  • LLM inference scaling optimization based on Intel® oneCCL 2021.12 (Prototype)
  • Bug fixing and other optimization
    • Uplift XeTLA to v0.3.4.1 #3696
    • [SDP] Fallback unsupported bias size to native impl #3706
    • Error handling enhancement #3788, #3841
    • Fix beam search accuracy issue in workgroup reduce #3796
    • Support int32 index tensor in index operator #3808
    • Add deepspeed in LLM dockerfile #3829
    • Fix batch norm accuracy issue #3882
    • Prebuilt wheel dockerfile update #3887, #3970
    • Fix windows build failure with Intel® oneMKL 2024.1 in torch_patches #18
    • Fix FFT core dump issue with Intel® oneMKL 2024.1 in torch_patches #20, #21

Known Issues

Please refer to the Known Issues webpage.

Intel® Extension for PyTorch* v2.2.0+cpu Release Notes

06 Feb 10:08

We are excited to announce the release of Intel® Extension for PyTorch* 2.2.0+cpu which accompanies PyTorch 2.2. This release mainly brings in our latest optimizations for Large Language Models (LLM), including a new dedicated API set (ipex.llm), a new capability for auto-tuning accuracy recipes for LLM, and a broader list of optimized LLM models, together with a set of bug fixes and small optimizations. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try this release and share your feedback to help us further improve this product.

Highlights

  • Large Language Model (LLM) optimization

    Intel® Extension for PyTorch* provides a new dedicated module, ipex.llm, to host APIs specific to Large Language Models (LLMs). With ipex.llm, Intel® Extension for PyTorch* provides comprehensive LLM optimization across various popular data types including FP32/BF16/INT8/INT4. Specifically for low precision, both SmoothQuant and Weight-Only Quantization are supported for various scenarios. Users can also run Intel® Extension for PyTorch* with Tensor Parallel to fit multi-rank or multi-node scenarios and get even better performance.

    A typical API under this new module is ipex.llm.optimize, which is designed to optimize transformer-based models within frontend Python modules, with a particular focus on Large Language Models (LLMs). It provides optimizations at both the model level and the content-generation level. ipex.llm.optimize is an upgraded API that replaces the previous ipex.optimize_transformers and brings a more consistent LLM experience and performance. Below is a simple example of ipex.llm.optimize for FP32 or BF16 inference:

    import torch
    import intel_extension_for_pytorch as ipex
    import transformers

    # Load the Hugging Face model in eager mode.
    model = transformers.AutoModelForCausalLM.from_pretrained(model_name_or_path).eval()

    # Apply the LLM-specific optimizations for FP32 (default) or BF16 inference.
    dtype = torch.float  # or torch.bfloat16
    model = ipex.llm.optimize(model, dtype=dtype)

    model.generate(YOUR_GENERATION_PARAMS)

    More examples of this API can be found at LLM optimization API.

    Besides the new optimization API for LLM inference, Intel® Extension for PyTorch* also provides new capability for users to auto-tune a good quantization recipe for running SmoothQuant INT8 with good accuracy. SmoothQuant is a popular method to improve the accuracy of int8 quantization. The new auto-tune API allows automatic global alpha tuning, and automatic layer-by-layer alpha tuning provided by Intel® Neural Compressor for the best INT8 accuracy. More details can be found at SmoothQuant Recipe Tuning API Introduction.
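
    As a rough sketch only, the auto-tuning entry point can be exercised along the lines below; the exact signature (keyword names and additional tuning knobs) is an assumption here, and the SmoothQuant Recipe Tuning API Introduction remains the authoritative reference. calib_dataloader and eval_fn (returning an accuracy score) are assumed to be provided by the user:

    import intel_extension_for_pytorch as ipex

    # Search global and layer-by-layer alpha values and return the best-accuracy INT8 model.
    tuned_model = ipex.quantization.autotune(
        model,              # FP32 eager-mode model
        calib_dataloader,   # calibration samples for the SmoothQuant observers
        eval_func=eval_fn,  # accuracy metric used to rank candidate recipes
    )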

    Intel® Extension for PyTorch* newly optimized many more LLM models, including more Llama 2 variants such as llama2-13b/llama2-70b, encoder-decoder models such as T5, code generation models such as starcoder/codegen, and more models such as Baichuan, Baichuan2, ChatGLM2, ChatGLM3, mistral, mpt, dolly, etc. A full list of optimized models can be found at LLM Optimization.

  • Bug fixing and other optimization

Full Changelog: v2.1.100+cpu...v2.2.0+cpu