
Commit d4cd0f4

mcr229 authored and facebook-github-bot committed
Replace buck with cmake in docs
Summary: Per https://docs.google.com/spreadsheets/d/1PoJt7P9qMkFSaMmS9f9j8dVcTFhOmNHotQYpwBySydI/edit#gid=0, we are also deprecating buck in the docs, from Gasoonjia.

Differential Revision: D57795491
1 parent 5e9db9a commit d4cd0f4

File tree: 2 files changed, +29 −14 lines


docs/source/tutorial-xnnpack-delegate-lowering.md

Lines changed: 1 addition & 11 deletions

````diff
@@ -152,7 +152,7 @@ cmake \
     -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
     -DEXECUTORCH_BUILD_XNNPACK=ON \
     -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
-    -DEXECUTORCH_ENABLE_LOGGING=1 \
+    -DEXECUTORCH_ENABLE_LOGGING=ON \
     -DPYTHON_EXECUTABLE=python \
     -Bcmake-out .
 ```
````
````diff
@@ -169,15 +169,5 @@ Now you should be able to find the executable built at `./cmake-out/backends/xnn
 ./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_q8.pte
 ```
 
-
-## Running the XNNPACK Model with Buck
-Alternatively, you can use `buck2` to run the `.pte` file with XNNPACK delegate instructions in it on your host platform. You can follow the instructions here to install [buck2](getting-started-setup.md#Build-&-Run). You can now run it with the prebuilt `xnn_executor_runner` provided in the examples. This will run the model on some sample inputs.
-
-```bash
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_fp32.pte
-# or to run the quantized variant
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_q8.pte
-```
-
 ## Building and Linking with the XNNPACK Backend
 You can build the XNNPACK backend [BUCK target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/targets.bzl#L54) and [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83), and link it with your application binary such as an Android or iOS application. For more information on this you may take a look at this [resource](demo-apps-android.md) next.
````
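A side note on the one-line logging change in this file: CMake accepts several spellings for a "true" boolean cache entry, so `-DEXECUTORCH_ENABLE_LOGGING=1` and `=ON` behave identically; `ON` is simply the conventional spelling the docs now use. Below is a minimal shell sketch of CMake's truthiness rules; `cmake_bool_is_true` is a hypothetical helper, not part of ExecuTorch, and it is simplified (real CMake also treats any non-zero number as true):

```shell
#!/bin/sh
# Hypothetical helper mirroring CMake's if() boolean constants:
# 1, ON, YES, TRUE, and Y (case-insensitive) count as true.
cmake_bool_is_true() {
  case "$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')" in
    1|ON|YES|TRUE|Y) return 0 ;;
    *) return 1 ;;
  esac
}

# Both spellings from the diff enable the option.
cmake_bool_is_true ON  && echo "ENABLE_LOGGING=ON  -> enabled"
cmake_bool_is_true 1   && echo "ENABLE_LOGGING=1   -> enabled"
cmake_bool_is_true OFF || echo "ENABLE_LOGGING=OFF -> disabled"
```

This is why the change is cosmetic rather than behavioral: the configure result is the same either way.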

examples/xnnpack/README.md

Lines changed: 28 additions & 3 deletions
````diff
@@ -88,12 +88,37 @@ You can find more valid quantized example models by running:
 python3 -m examples.xnnpack.quantization.example --help
 ```
 
-A quantized model can be run via `executor_runner`:
+## Running the XNNPACK Model with CMake
+After exporting the XNNPACK-delegated model, we can now try running it with example inputs using CMake. We can build and use the `xnn_executor_runner`, a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build as follows:
 ```bash
-buck2 run examples/portable/executor_runner:executor_runner -- --model_path ./mv2_quantized.pte
+# cd to the root of the executorch repo
+cd executorch
+
+# Get a clean cmake-out directory
+rm -rf cmake-out
+mkdir cmake-out
+
+# Configure cmake
+cmake \
+    -DCMAKE_INSTALL_PREFIX=cmake-out \
+    -DCMAKE_BUILD_TYPE=Release \
+    -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
+    -DEXECUTORCH_BUILD_XNNPACK=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
+    -DEXECUTORCH_ENABLE_LOGGING=ON \
+    -DPYTHON_EXECUTABLE=python \
+    -Bcmake-out .
 ```
-Please note that running a quantized model will require the presence of various quantize/dequantize operators in the [quantized kernel lib](../../kernels/quantized).
+Then you can build the runtime components with
 
+```bash
+cmake --build cmake-out -j9 --target install --config Release
+```
+
+Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner`. You can run it with the model you generated:
+```bash
+./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_quantized.pte
+```
 
 ## Delegating a Quantized Model
 
````
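Taken together, the steps added to this README condense into a single script. The sketch below is hypothetical: the `run` wrapper only echoes each command so the sequence can be read without an executorch checkout, and `-j9` plus the `mv2_quantized.pte` model path are taken from the diff above.

```shell
#!/bin/sh
# Dry-run sketch of the CMake workflow from examples/xnnpack/README.md.
# 'run' only prints each command; remove it to execute the steps for real.
run() { printf '+ %s\n' "$*"; }

run cd executorch          # root of the executorch repo
run rm -rf cmake-out       # start from a clean build directory
run mkdir cmake-out
run cmake \
    -DCMAKE_INSTALL_PREFIX=cmake-out \
    -DCMAKE_BUILD_TYPE=Release \
    -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
    -DEXECUTORCH_BUILD_XNNPACK=ON \
    -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
    -DEXECUTORCH_ENABLE_LOGGING=ON \
    -DPYTHON_EXECUTABLE=python \
    -Bcmake-out .
run cmake --build cmake-out -j9 --target install --config Release
run ./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_quantized.pte
```

The structure mirrors the two-phase CMake model: one configure step that records all `-D` cache entries, then a build step that can be re-run without reconfiguring.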