
Commit 155ccbb

mcr229 authored and kirklandsign committed
Fix up some Docs (#10038)
Propagating some changes made to the release/0.6 docs so that future releases can get them too
1 parent fe6ed07 commit 155ccbb

File tree

2 files changed: +6 -6 lines changed

docs/source/using-executorch-building-from-source.md

Lines changed: 2 additions & 2 deletions
@@ -29,7 +29,7 @@ Windows (x86_64)
   - Otherwise, Python's built-in virtual environment manager `python venv` is a good alternative.
 * `g++` version 7 or higher, `clang++` version 5 or higher, or another
   C++17-compatible toolchain.
-* `python` version 3.10-3.12
+* `python` version 3.10-3.12

 Note that the cross-compilable core runtime code supports a wider range of
 toolchains, down to C++17. See the [Runtime Overview](./runtime-overview.md) for
@@ -231,7 +231,7 @@ Assuming Android NDK is available, run:
 mkdir cmake-android-out && cd cmake-android-out

 # point -DCMAKE_TOOLCHAIN_FILE to the location where ndk is installed
-cmake -DCMAKE_TOOLCHAIN_FILE=/Users/{user_name}/Library/Android/sdk/ndk/27.2.12479018/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a ..
+cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a ..

 cd ..
 cmake --build cmake-android-out -j9

docs/source/using-executorch-faqs.md

Lines changed: 4 additions & 4 deletions
@@ -18,7 +18,7 @@ if you are using Ubuntu, or use an equivalent install command.

 ### Missing out variants: { _ }

-The model likely contains torch custom operators. Custom ops need an ExecuTorch implementation and need to be loaded at export time. See the [ExecuTorch Custom Ops Documentation](https://pytorch.org/executorch/main/kernel-library-custom-aten-kernel.html#apis) for details on how to do this.
+The model likely contains torch custom operators. Custom ops need an ExecuTorch implementation and need to be loaded at export time. See the [ExecuTorch Custom Ops Documentation](kernel-library-custom-aten-kernel.md#apis) for details on how to do this.

 ### RuntimeError: PyTorch convert function for op _ not implemented
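For context on the custom-ops guidance in the hunk above: custom operators are typically registered with PyTorch by loading their shared library before export, so the exported graph can reference them. A minimal sketch, assuming a hypothetical op library path and operator name (neither comes from this commit):

```python
import torch
from executorch.exir import to_edge

# Hypothetical: a shared library that registers the custom op and its
# ExecuTorch kernel (built per the custom-ops documentation).
torch.ops.load_library("cmake-out/examples/portable/custom_ops/libcustom_ops_aot_lib.so")

class UsesCustomOp(torch.nn.Module):
    def forward(self, x):
        # Placeholder namespace/op name; substitute the real custom op.
        return torch.ops.my_ops.mul3(x)

exported = torch.export.export(UsesCustomOp(), (torch.randn(3, 4),))
et_program = to_edge(exported).to_executorch()
```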

@@ -32,7 +32,7 @@ ExecuTorch error codes are defined in [executorch/core/runtime/error.h](https://

 If building the runtime from source, ensure that the build is done in release mode. For CMake builds, this can be done by passing `-DCMAKE_BUILD_TYPE=Release`.

-Ensure the model is delegated. If not targeting a specific accelerator, use the XNNPACK delegate for CPU performance. Undelegated operators will typically fall back to the ExecuTorch portable library, which is designed as a fallback, and is not intended for performance sensitive operators. To target XNNPACK, pass an `XnnpackPartitioner` to `to_edge_transform_and_lower`. See [Building and Running ExecuTorch with XNNPACK Backend](https://pytorch.org/executorch/main/tutorial-xnnpack-delegate-lowering.html) for more information.
+Ensure the model is delegated. If not targeting a specific accelerator, use the XNNPACK delegate for CPU performance. Undelegated operators will typically fall back to the ExecuTorch portable library, which is designed as a fallback, and is not intended for performance sensitive operators. To target XNNPACK, pass an `XnnpackPartitioner` to `to_edge_transform_and_lower`. See [Building and Running ExecuTorch with XNNPACK Backend](tutorial-xnnpack-delegate-lowering.md) for more information.

 Thread count can have a significant impact on CPU performance. The optimal thread count may depend on the model and application. By default, ExecuTorch will currently use as many threads as there are cores. Consider setting the thread count to cores / 2, or just set to 4 on mobile CPUs.
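To make the XNNPACK guidance above concrete, here is a minimal export sketch that passes an `XnnpackPartitioner` to `to_edge_transform_and_lower`; the toy model and output file name are placeholders, not part of this commit:

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 16),)

# Delegate supported subgraphs to XNNPACK; anything not partitioned
# falls back to the portable library.
et_program = to_edge_transform_and_lower(
    torch.export.export(model, example_inputs),
    partitioner=[XnnpackPartitioner()],
).to_executorch()

with open("model_xnnpack.pte", "wb") as f:
    f.write(et_program.buffer)
```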

@@ -41,11 +41,11 @@ Thread count can be set with the following function. Ensure this is done prior t
 ::executorch::extension::threadpool::get_threadpool()->_unsafe_reset_threadpool(num_threads);
 ```

-For a deeper investigation into model performance, ExecuTorch supports operator-level performance profiling. See [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.html) for more information.
+For a deeper investigation into model performance, ExecuTorch supports operator-level performance profiling. See [Using the ExecuTorch Developer Tools to Profile a Model](devtools-integration-tutorial.md) for more information.

 ### Missing Logs

-ExecuTorch provides hooks to route runtime logs. By default, logs are sent to stdout/stderr, but users can override `et_pal_emit_log_message` to route logs to a custom destination. The Android and iOS extensions also provide out-of-box log routing to the appropriate platform logs. See [Runtime Platform Abstraction Layer (PAL)](https://pytorch.org/executorch/main/runtime-platform-abstraction-layer.html) for more information.
+ExecuTorch provides hooks to route runtime logs. By default, logs are sent to stdout/stderr, but users can override `et_pal_emit_log_message` to route logs to a custom destination. The Android and iOS extensions also provide out-of-box log routing to the appropriate platform logs. See [Runtime Platform Abstraction Layer (PAL)](runtime-platform-abstraction-layer.md) for more information.

 ### Error setting input: 0x10 / Attempted to resize a bounded tensor...
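As a pointer for the profiling note above: once an ETDump has been collected from a run of the instrumented runtime, the devtools `Inspector` can summarize per-operator timings. A rough sketch, assuming an ETDump and a matching ETRecord have already been generated (the file names are placeholders):

```python
from executorch.devtools import Inspector

# Pair the runtime profiling data (ETDump) with the export-time ETRecord,
# then print per-operator statistics.
inspector = Inspector(etdump_path="etdump.etdp", etrecord="etrecord.bin")
inspector.print_data_tabular()
```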
