
Commit 963e686

pytorchbot and mcr229 authored
Replace buck with cmake in docs (#3739) (#3778)
Summary:
Pull Request resolved: #3739

Per: https://docs.google.com/spreadsheets/d/1PoJt7P9qMkFSaMmS9f9j8dVcTFhOmNHotQYpwBySydI/edit#gid=0

We are also deprecating buck in docs from Gasoonjia

Reviewed By: Gasoonjia

Differential Revision: D57795491

fbshipit-source-id: dc18ac923e390cfc28fc7e46b234ac338c60061d

(cherry picked from commit 0b34dc5)

Co-authored-by: Max Ren <maxren@meta.com>
1 parent 91b9638 commit 963e686

2 files changed (+61 −28 lines)

docs/source/tutorial-xnnpack-delegate-lowering.md

Lines changed: 2 additions & 12 deletions
@@ -152,7 +152,7 @@ cmake \
     -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
     -DEXECUTORCH_BUILD_XNNPACK=ON \
     -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
-    -DEXECUTORCH_ENABLE_LOGGING=1 \
+    -DEXECUTORCH_ENABLE_LOGGING=ON \
     -DPYTHON_EXECUTABLE=python \
     -Bcmake-out .
 ```
@@ -169,15 +169,5 @@ Now you should be able to find the executable built at `./cmake-out/backends/xnn
 ./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_q8.pte
 ```
 
-
-## Running the XNNPACK Model with Buck
-Alternatively, you can use `buck2` to run the `.pte` file with XNNPACK delegate instructions in it on your host platform. You can follow the instructions here to install [buck2](getting-started-setup.md#Build-&-Run). You can now run it with the prebuilt `xnn_executor_runner` provided in the examples. This will run the model on some sample inputs.
-
-```bash
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_fp32.pte
-# or to run the quantized variant
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_q8.pte
-```
-
 ## Building and Linking with the XNNPACK Backend
-You can build the XNNPACK backend [BUCK target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/targets.bzl#L54) and [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83), and link it with your application binary such as an Android or iOS application. For more information on this you may take a look at this [resource](demo-apps-android.md) next.
+You can build the XNNPACK backend [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83) and link it with your application binary, such as an Android or iOS application. For more information, take a look at this [resource](demo-apps-android.md) next.
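As a companion to that last paragraph, a minimal application `CMakeLists.txt` that links the XNNPACK backend might look like the sketch below. The vendored path and the `executorch`/`xnnpack_backend` target names are assumptions drawn from the repo's CMake layout, not something this diff pins down.

```cmake
cmake_minimum_required(VERSION 3.19)
project(my_app CXX)

# Assumption: the executorch repo is vendored at third-party/executorch.
# Enable the XNNPACK backend before adding the subdirectory, mirroring the
# -DEXECUTORCH_BUILD_XNNPACK=ON flag used in the tutorial above.
set(EXECUTORCH_BUILD_XNNPACK ON)
set(EXECUTORCH_BUILD_EXTENSION_MODULE ON)
set(EXECUTORCH_BUILD_EXTENSION_DATA_LOADER ON)
add_subdirectory(third-party/executorch)

add_executable(my_app main.cpp)

# Target names assumed from backends/xnnpack/CMakeLists.txt and the core
# runtime build; keep the backend on the link line so its static
# registration code is retained.
target_link_libraries(my_app PRIVATE executorch xnnpack_backend)
```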

examples/xnnpack/README.md

Lines changed: 59 additions & 16 deletions
@@ -25,10 +25,38 @@ The following command will produce a floating-point XNNPACK delegated model `mv2
 python3 -m examples.xnnpack.aot_compiler --model_name="mv2" --delegate
 ```
 
-Once we have the model binary (pte) file, then let's run it with ExecuTorch runtime using the `xnn_executor_runner`.
+Once we have the model binary (`.pte`) file, let's run it with the ExecuTorch runtime using the `xnn_executor_runner`. With CMake, first configure the build with the following:
 
 ```bash
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_fp32.pte
+# cd to the root of executorch repo
+cd executorch
+
+# Get a clean cmake-out directory
+rm -rf cmake-out
+mkdir cmake-out
+
+# Configure cmake
+cmake \
+    -DCMAKE_INSTALL_PREFIX=cmake-out \
+    -DCMAKE_BUILD_TYPE=Release \
+    -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
+    -DEXECUTORCH_BUILD_XNNPACK=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
+    -DEXECUTORCH_ENABLE_LOGGING=ON \
+    -DPYTHON_EXECUTABLE=python \
+    -Bcmake-out .
+```
+
+Then you can build the runtime components with
+
+```bash
+cmake --build cmake-out -j9 --target install --config Release
+```
+
+Now you should be able to run this model with the following command:
+
+```bash
+./cmake-out/backends/xnnpack/xnn_executor_runner --model_path ./mv2_xnnpack_fp32.pte
 ```
 
 ## Quantization
@@ -38,11 +66,7 @@ Here we will discuss quantizing a model suitable for XNNPACK delegation using XN
 
 Though it is typical to run this quantized model via the XNNPACK delegate, we want to highlight that this is just another quantization flavor, and we can run this quantized model without necessarily using the XNNPACK delegate, but only using standard quantization operators.
 
-A shared library to register the out variants of the quantized operators (e.g., `quantized_decomposed::add.out`) into EXIR is required. To generate this library, run the following command if using `buck2`:
-```bash
-buck2 build //kernels/quantized:aot_lib --show-output
-```
-Or if on cmake, follow the instructions in `test_quantize.sh` to build it, the default path is `cmake-out/kernels/quantized/libquantized_ops_lib.so`.
+A shared library to register the out variants of the quantized operators (e.g., `quantized_decomposed::add.out`) into EXIR is required. With CMake, follow the instructions in `test_quantize.sh` to build it; the default path is `cmake-out/kernels/quantized/libquantized_ops_lib.so`.
 
 Then you can generate an XNNPACK quantized model with the following command by passing the path to the shared library into the script `quantization/example.py`:
 ```bash
@@ -55,12 +79,37 @@ You can find more valid quantized example models by running:
 python3 -m examples.xnnpack.quantization.example --help
 ```
 
-A quantized model can be run via `executor_runner`:
+## Running the XNNPACK Model with CMake
+After exporting the XNNPACK-delegated model, we can now try running it with example inputs using CMake. We can build and use the `xnn_executor_runner`, a sample wrapper for the ExecuTorch runtime and XNNPACK backend. We first configure the CMake build as follows:
 ```bash
-buck2 run examples/portable/executor_runner:executor_runner -- --model_path ./mv2_quantized.pte
+# cd to the root of executorch repo
+cd executorch
+
+# Get a clean cmake-out directory
+rm -rf cmake-out
+mkdir cmake-out
+
+# Configure cmake
+cmake \
+    -DCMAKE_INSTALL_PREFIX=cmake-out \
+    -DCMAKE_BUILD_TYPE=Release \
+    -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
+    -DEXECUTORCH_BUILD_XNNPACK=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
+    -DEXECUTORCH_ENABLE_LOGGING=ON \
+    -DPYTHON_EXECUTABLE=python \
+    -Bcmake-out .
 ```
-Please note that running a quantized model will require the presence of various quantized/dequantize operators in the [quantized kernel lib](../../kernels/quantized).
+Then you can build the runtime components with
 
+```bash
+cmake --build cmake-out -j9 --target install --config Release
+```
+
+Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner`, and you can run it with the model you generated as follows:
+```bash
+./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_quantized.pte
+```
 
 ## Delegating a Quantized Model
 
@@ -69,9 +118,3 @@ The following command will produce a XNNPACK quantized and delegated model `mv2_
 ```bash
 python3 -m examples.xnnpack.aot_compiler --model_name "mv2" --quantize --delegate
 ```
-
-Once we have the model binary (pte) file, then let's run it with ExecuTorch runtime using the `xnn_executor_runner`.
-
-```bash
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_q8.pte
-```
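To tie the README's quantization steps together, the end-to-end flow might look like the sketch below. The `quantized_ops_lib` build target and the `--so_library` flag are assumptions about `test_quantize.sh` and `quantization/example.py`; the diff above only says that the shared-library path is passed into the script.

```bash
# Sketch of the quantization flow described above; target and flag names are
# assumptions, so check test_quantize.sh and quantization/example.py.

# 1. Build the quantized-ops shared library (target name assumed).
cmake --build cmake-out -j9 --target quantized_ops_lib

# 2. Generate the quantized model, passing the shared-library path so the
#    out variants of the quantized operators are registered into EXIR
#    (--so_library flag name assumed from the example script).
python3 -m examples.xnnpack.quantization.example \
    --model_name "mv2" \
    --so_library "cmake-out/kernels/quantized/libquantized_ops_lib.so"

# 3. Run the resulting .pte with the runner built earlier.
./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_quantized.pte
```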
