@@ -29,13 +29,14 @@ The table below shows which functions are available for use with CPU / Intel dGP
| `barrier` | √ | √ |
- ## Pytorch API Align
+ ## PyTorch API Align
- We recommend using Anaconda as Python package management system. The followings are the corresponding branches (tags) of `oneccl_bindings_for_pytorch` and supported Pytorch.
+ We recommend using Anaconda as the Python package management system. The table below lists the corresponding branches (tags) of `oneccl_bindings_for_pytorch` for each supported PyTorch version.
| `torch` | `oneccl_bindings_for_pytorch` |
| :-------------------------------------------------------------: | :-----------------------------------------------------------------------: |
| `master` | `master` |
+ | [v2.3.1](https://github.com/pytorch/pytorch/tree/v2.3.1) | [ccl_torch2.3.100](https://github.com/intel/torch-ccl/tree/ccl_torch2.3.100+xpu) |
| [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0) | [ccl_torch2.1.400](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.400+xpu) |
| [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0) | [ccl_torch2.1.300](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.300+xpu) |
| [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0) | [ccl_torch2.1.200](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.200+xpu) |
@@ -58,19 +59,19 @@ The usage details can be found in the README of corresponding branch.
- Python 3.8 or later and a C++17 compiler
- - PyTorch v2.1.0
+ - PyTorch v2.3.1
## Build Option List
The following build options are supported in Intel® oneCCL Bindings for PyTorch*.
| Build Option | Default Value | Description |
| :---------------------------------- | :------------- | :-------------------------------------------------------------------------------------------------- |
- | COMPUTE_BACKEND | | Set oneCCL `COMPUTE_BACKEND`, set to `dpcpp` and use DPC++ compiler to enable support for Intel XPU |
+ | COMPUTE_BACKEND | N/A | Sets oneCCL `COMPUTE_BACKEND`; set it to `dpcpp` to build with the DPC++ compiler and enable support for Intel XPU |
| USE_SYSTEM_ONECCL | OFF | Use oneCCL library in system |
| CCL_PACKAGE_NAME | oneccl-bind-pt | Set wheel name |
| ONECCL_BINDINGS_FOR_PYTORCH_BACKEND | cpu | Set backend |
- | CCL_SHA_VERSION | False | Add git head sha version to be wheel name |
+ | CCL_SHA_VERSION | False | Append the git HEAD SHA to the wheel name |
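
For example, the CPU and Intel XPU source builds differ only in `COMPUTE_BACKEND`. The sketch below is illustrative, assuming a checked-out torch-ccl source tree and, for the XPU case, an already-sourced oneAPI DPC++ compiler environment:

```bash
# Sketch of passing build options as environment variables to setup.py
# (assumes the torch-ccl sources are checked out, see the clone steps below)

# default CPU-only build
python setup.py install

# Intel XPU build: select the dpcpp compute backend (requires the DPC++ compiler)
COMPUTE_BACKEND=dpcpp python setup.py install
```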
## Launch Option List
@@ -92,7 +93,7 @@ The following launch options are supported in Intel® oneCCL Bindings for PyTorc
```bash
git clone https://github.com/intel/torch-ccl.git && cd torch-ccl
- git checkout ccl_torch2.1.400+xpu
+ git checkout ccl_torch2.3.100+xpu
git submodule sync
git submodule update --init --recursive
```
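
As a quick sanity check (not part of the official steps), the locally installed PyTorch can be compared against the tag chosen above, per the PyTorch API Align table:

```bash
# optional check: the installed PyTorch version should match the checked-out tag
# (v2.3.1 pairs with ccl_torch2.3.100+xpu)
python -c "import torch; print(torch.__version__)"
```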
@@ -116,6 +117,7 @@ Wheel files are available for the following Python versions. Please always use t
| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: | :---------: |
+ | 2.3.100 | | | √ | √ | √ | √ |
| 2.1.400 | | | √ | √ | √ | √ |
| 2.1.300 | | | √ | √ | √ | √ |
| 2.1.200 | | | √ | √ | √ | √ |
@@ -128,7 +130,7 @@ Wheel files are available for the following Python versions. Please always use t
| 1.10.0 | √ | √ | √ | √ | | |
```bash
- python -m pip install oneccl_bind_pt==2.1.400 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+ python -m pip install oneccl_bind_pt==2.3.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```
**Note:** Please set a proxy or switch to https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/ if you encounter connection issues.
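
After installing the wheel, a minimal import check can verify that the package is usable. This is a sketch, relying on the Python module name `oneccl_bindings_for_pytorch` used throughout this README:

```bash
# post-install sanity check (sketch): importing the bindings registers the "ccl"
# backend with torch.distributed; an ImportError usually indicates a PyTorch/wheel
# version mismatch (see the version table above)
python -c "import torch; import oneccl_bindings_for_pytorch; print(torch.__version__)"
```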