Commit cd83344

update README for 2.1.400+xpu (#205)
1 parent 73b5ae7 commit cd83344


README.md

Lines changed: 9 additions & 6 deletions
@@ -6,7 +6,7 @@ This repository holds PyTorch bindings maintained by Intel® for the Intel® one
 
 [PyTorch](https://github.com/pytorch/pytorch) is an open-source machine learning framework.
 
-[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training, implementing collectives like `allreduce`, `allgather`, `alltoall`. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).
+[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training, implementing collectives like `allreduce`, `allgather`, `alltoall`. For more information on oneCCL, please refer to the [oneCCL documentation](https://oneapi-spec.uxlfoundation.org/specifications/oneapi/latest/elements/oneccl/source/).
 
 The `oneccl_bindings_for_pytorch` module implements the PyTorch C10D ProcessGroup API; it can be dynamically loaded as an external ProcessGroup and currently works only on Linux.
 
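As context for the ProcessGroup line above, here is a minimal usage sketch (not part of this commit): importing `oneccl_bindings_for_pytorch` registers the `ccl` backend with `torch.distributed`, after which the standard process-group API applies. The rendezvous defaults and the `RANK`/`WORLD_SIZE` environment variables below are illustrative assumptions; a real launcher (e.g. mpirun or torchrun) normally provides them.

```python
import os

import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  -- importing registers the "ccl" backend

# Illustrative rendezvous defaults; a launcher would normally set these.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(
    backend="ccl",
    rank=int(os.environ.get("RANK", 0)),
    world_size=int(os.environ.get("WORLD_SIZE", 1)),
)
```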
@@ -23,7 +23,7 @@ The table below shows which functions are available for use with CPU / Intel dGP
 | `reduce`         | √ | √ |
 | `all_gather`     | √ | √ |
 | `gather`         | √ | √ |
-| `scatter`        | × | × |
+| `scatter`        | √ | √ |
 | `reduce_scatter` | √ | √ |
 | `all_to_all`     | √ | √ |
 | `barrier`        | √ | √ |
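To illustrate the table above: the listed collectives are invoked through the usual `torch.distributed` functions once the `ccl` process group is initialized (as in the earlier sketch). The tensor shape and the optional move to `"xpu"` are assumptions for illustration only.

```python
import torch
import torch.distributed as dist

# Assumes dist.init_process_group(backend="ccl", ...) has already run (see sketch above).
x = torch.ones(4)                            # for Intel dGPU, e.g. x = torch.ones(4).to("xpu")
dist.reduce(x, dst=0, op=dist.ReduceOp.SUM)  # `reduce` is listed as supported in the table
dist.barrier()                               # as is `barrier`
```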
@@ -36,6 +36,7 @@ We recommend using Anaconda as Python package management system. The followings
 | `torch`                                                   | `oneccl_bindings_for_pytorch`                                                     |
 | :-------------------------------------------------------------: | :-----------------------------------------------------------------------: |
 | `master`                                                  | `master`                                                                          |
+| [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0)  | [ccl_torch2.1.400](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.400+xpu) |
 | [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0)  | [ccl_torch2.1.300](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.300+xpu) |
 | [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0)  | [ccl_torch2.1.200](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.200+xpu) |
 | [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0)  | [ccl_torch2.1.100](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.100+xpu) |
@@ -65,7 +66,7 @@ The following build options are supported in Intel® oneCCL Bindings for PyTorch
 
 | Build Option                        | Default Value  | Description                                                                                          |
 | :---------------------------------- | :------------- | :--------------------------------------------------------------------------------------------------- |
-| COMPUTE_BACKEND                     |                | Set oneCCL `COMPUTE_BACKEND`,set to `dpcpp` and use DPC++ compiler to enable support for Intel XPU   |
+| COMPUTE_BACKEND                     |                | Set oneCCL `COMPUTE_BACKEND`, set to `dpcpp` and use DPC++ compiler to enable support for Intel XPU  |
 | USE_SYSTEM_ONECCL                   | OFF            | Use oneCCL library in system                                                                         |
 | CCL_PACKAGE_NAME                    | oneccl-bind-pt | Set wheel name                                                                                       |
 | ONECCL_BINDINGS_FOR_PYTORCH_BACKEND | cpu            | Set backend                                                                                          |
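The build options above are passed as environment variables to `setup.py`; for instance, a source build against a system oneCCL with the XPU backend (the same command appears later in this diff) looks like:

```bash
# Build against a system oneCCL with the DPC++ backend to enable Intel XPU support.
USE_SYSTEM_ONECCL=ON COMPUTE_BACKEND=dpcpp python setup.py install
```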
@@ -77,7 +78,7 @@ The following launch options are supported in Intel® oneCCL Bindings for PyTorc
 
 | Launch Option                             | Default Value | Description                                                            |
 | :---------------------------------------- | :------------ | :--------------------------------------------------------------------- |
-| ONECCL_BINDINGS_FOR_PYTORCH_ENV_VERBOSE   | 0             | Set verbose level in ONECCL_BINDINGS_FOR_PYTORCH                        |
+| ONECCL_BINDINGS_FOR_PYTORCH_ENV_VERBOSE   | 0             | Set verbose level in oneccl_bindings_for_pytorch                        |
 | ONECCL_BINDINGS_FOR_PYTORCH_ENV_WAIT_GDB  | 0             | Set 1 to force oneccl_bindings_for_pytorch to wait for GDB attaching    |
 | TORCH_LLM_ALLREDUCE                       | 0             | Set 1 to enable this prototype feature, which provides better scale-up performance by enabling optimized collective algorithms in oneCCL and asynchronous execution in torch-ccl. It requires XeLink enabled for cross-card communication. |
 | CCL_BLOCKING_WAIT                         | 0             | Set 1 to enable this prototype feature, which controls whether collective execution on XPU is host-blocking or non-blocking. |
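The launch options above are ordinary environment variables read at run time. A hedged example of combining them with a launcher follows; the `mpirun` invocation, rank count, and `train.py` script name are placeholders for illustration, not part of this commit.

```bash
# Enable verbose binding logs and the prototype scale-up path for one illustrative run.
ONECCL_BINDINGS_FOR_PYTORCH_ENV_VERBOSE=1 TORCH_LLM_ALLREDUCE=1 \
    mpirun -n 2 python train.py
```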
@@ -91,6 +92,7 @@ The following launch options are supported in Intel® oneCCL Bindings for PyTorc
 
 ```bash
 git clone https://github.com/intel/torch-ccl.git && cd torch-ccl
+git checkout ccl_torch2.1.400+xpu
 git submodule sync
 git submodule update --init --recursive
 ```
@@ -108,12 +110,13 @@ The following launch options are supported in Intel® oneCCL Bindings for PyTorc
 USE_SYSTEM_ONECCL=ON COMPUTE_BACKEND=dpcpp python setup.py install
 ```
 
-### Install PreBuilt Wheel
+### Install Prebuilt Wheel
 
 Wheel files are available for the following Python versions. Please always use the latest release to get started.
 
 | Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11 |
 | :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: | :---------: |
+| 2.1.400           |            |            | √          | √          | √           | √           |
 | 2.1.300           |            |            | √          | √          | √           | √           |
 | 2.1.200           |            |            | √          | √          | √           | √           |
 | 2.1.100           |            |            | √          | √          | √           | √           |
@@ -125,7 +128,7 @@ Wheel files are available for the following Python versions. Please always use t
 | 1.10.0            | √          | √          | √          | √          |             |             |
 
 ```bash
-python -m pip install oneccl_bind_pt==2.1.300 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+python -m pip install oneccl_bind_pt==2.1.400 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 ```
 
 **Note:** Please set a proxy or update the URL to https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/ if you encounter connection issues.
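Per the note above, switching to the alternate index only changes the URL in the install command, for example:

```bash
python -m pip install oneccl_bind_pt==2.1.400 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
```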
