
Merge updated documentation into master #8638


Merged 13 commits into master on Feb 26, 2025
2 changes: 1 addition & 1 deletion backends/vulkan/README.md
@@ -1,4 +1,4 @@
# ExecuTorch Vulkan Delegate
# Vulkan Backend

The ExecuTorch Vulkan delegate is a native GPU delegate for ExecuTorch that is
built on top of the cross-platform Vulkan GPU API standard. It is primarily
2 changes: 1 addition & 1 deletion docs/source/api-life-cycle.md
@@ -1,4 +1,4 @@
# ExecuTorch API Life Cycle and Deprecation Policy
# API Life Cycle and Deprecation Policy

## API Life Cycle

@@ -1,4 +1,4 @@
# ExecuTorch XNNPACK delegate
# XNNPACK Delegate Internals

This is a high-level overview of the ExecuTorch XNNPACK backend delegate. This high-performance delegate aims to reduce CPU inference latency for ExecuTorch models. We will provide a brief introduction to the XNNPACK library and explore the delegate's overall architecture and intended use cases.

15 changes: 15 additions & 0 deletions docs/source/backend-template.md
@@ -0,0 +1,15 @@
# Backend Template

## Features

## Target Requirements

## Development Requirements

## Lowering a Model to *Backend Name*

### Partitioner API

### Quantization

## Runtime Integration
@@ -1,14 +1,14 @@
<!---- Name is a WIP - this reflects better what it can do today ----->
# Building and Running ExecuTorch with ARM Ethos-U Backend
# ARM Ethos-U Backend

<!----This will show a grid card on the page----->
::::{grid} 2

:::{grid-item-card} Tutorials we recommend you complete before this:
:class-card: card-prerequisites
* [Introduction to ExecuTorch](./intro-how-it-works.md)
* [Setting up ExecuTorch](./getting-started-setup.md)
* [Building ExecuTorch with CMake](./runtime-build-and-cross-compilation.md)
* [Getting Started](./getting-started.md)
* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
:::

:::{grid-item-card} What you will learn in this tutorial:
@@ -286,7 +286,7 @@ The `generate_pte_file` function in `run.sh` script produces the `.pte` files ba

ExecuTorch's CMake build system produces a set of build pieces which are critical for including and running the ExecuTorch runtime within the bare-metal environment we have for Corstone FVPs from the Ethos-U SDK.

[This](./runtime-build-and-cross-compilation.md) document provides a detailed overview of each individual build piece. For running either variant of the `.pte` file, we will need a core set of libraries. Here is a list,
[This](./using-executorch-building-from-source.md) document provides a detailed overview of each individual build piece. For running either variant of the `.pte` file, we will need a core set of libraries. Here is a list,

- `libexecutorch.a`
- `libportable_kernels.a`
@@ -1,4 +1,4 @@
# Building and Running ExecuTorch on Xtensa HiFi4 DSP
# Cadence Xtensa Backend


In this tutorial we will walk you through the process of getting set up to build ExecuTorch for an Xtensa HiFi4 DSP and running a simple model on it.
@@ -17,9 +17,9 @@ On top of being able to run on the Xtensa HiFi4 DSP, another goal of this tutori
:::
:::{grid-item-card} Tutorials we recommend you complete before this:
:class-card: card-prerequisites
* [Introduction to ExecuTorch](intro-how-it-works.md)
* [Setting up ExecuTorch](getting-started-setup.md)
* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
* [Introduction to ExecuTorch](./intro-how-it-works.md)
* [Getting Started](./getting-started.md)
* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
:::
::::

@@ -1,4 +1,4 @@
# Building and Running ExecuTorch with Core ML Backend
# Core ML Backend

The Core ML delegate uses Core ML APIs to enable running neural networks via Apple's hardware acceleration. For more about Core ML you can read [here](https://developer.apple.com/documentation/coreml). In this tutorial, we will walk through the steps of lowering a PyTorch model to the Core ML delegate.

@@ -11,9 +11,9 @@ Core ML delegate uses Core ML APIs to enable running neural networks via Apple's
:::
:::{grid-item-card} Tutorials we recommend you complete before this:
:class-card: card-prerequisites
* [Introduction to ExecuTorch](intro-how-it-works.md)
* [Setting up ExecuTorch](getting-started-setup.md)
* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
* [Introduction to ExecuTorch](./intro-how-it-works.md)
* [Getting Started](./getting-started.md)
* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
* [ExecuTorch iOS Demo App](demo-apps-ios.md)
:::
::::
@@ -1,4 +1,4 @@
# Building and Running ExecuTorch with MediaTek Backend
# MediaTek Backend

The MediaTek backend enables ExecuTorch to accelerate PyTorch models on edge devices equipped with a MediaTek Neuron Processing Unit (NPU). This document offers a step-by-step guide to setting up the build environment for the MediaTek ExecuTorch libraries.

@@ -11,9 +11,9 @@ MediaTek backend empowers ExecuTorch to speed up PyTorch models on edge devices
:::
:::{grid-item-card} Tutorials we recommend you complete before this:
:class-card: card-prerequisites
* [Introduction to ExecuTorch](intro-how-it-works.md)
* [Setting up ExecuTorch](getting-started-setup.md)
* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
* [Introduction to ExecuTorch](./intro-how-it-works.md)
* [Getting Started](./getting-started.md)
* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
:::
::::

@@ -34,7 +34,7 @@ MediaTek backend empowers ExecuTorch to speed up PyTorch models on edge devices

Follow the steps below to setup your build environment:

1. **Setup ExecuTorch Environment**: Refer to the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) guide for detailed instructions on setting up the ExecuTorch environment.
1. **Setup ExecuTorch Environment**: Refer to the [Getting Started](getting-started.md) guide for detailed instructions on setting up the ExecuTorch environment.

2. **Setup MediaTek Backend Environment**
- Install the dependent libraries. Ensure that you are inside the `backends/mediatek/` directory
@@ -91,4 +91,4 @@ cd executorch

```bash
export LD_LIBRARY_PATH=<path_to_usdk>:<path_to_neuron_backend>:$LD_LIBRARY_PATH
```
```
157 changes: 157 additions & 0 deletions docs/source/backends-mps.md
@@ -0,0 +1,157 @@
# MPS Backend

In this tutorial we will walk you through the process of getting set up to build the MPS backend for ExecuTorch and running a simple model on it.

The MPS backend maps machine learning computational graphs and primitives onto the [MPS Graph](https://developer.apple.com/documentation/metalperformanceshadersgraph/mpsgraph?language=objc) framework and onto tuned kernels provided by [MPS](https://developer.apple.com/documentation/metalperformanceshaders?language=objc).

::::{grid} 2
:::{grid-item-card} What you will learn in this tutorial:
:class-card: card-prerequisites
* In this tutorial you will learn how to export [MobileNet V3](https://pytorch.org/vision/main/models/mobilenetv3.html) model to the MPS delegate.
* You will also learn how to compile and deploy the ExecuTorch runtime with the MPS delegate on macOS and iOS.
:::
:::{grid-item-card} Tutorials we recommend you complete before this:
:class-card: card-prerequisites
* [Introduction to ExecuTorch](./intro-how-it-works.md)
* [Getting Started](./getting-started.md)
* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
* [ExecuTorch iOS Demo App](demo-apps-ios.md)
* [ExecuTorch iOS LLaMA Demo App](llm/llama-demo-ios.md)
:::
::::


## Prerequisites (Hardware and Software)

In order to be able to successfully build and run a model using the MPS backend for ExecuTorch, you'll need the following hardware and software components:

### Hardware:
- A [mac](https://www.apple.com/mac/) for tracing the model

### Software:

- **Ahead of time** tracing:
- [macOS](https://www.apple.com/macos/) 12

- **Runtime**:
- [macOS](https://www.apple.com/macos/) >= 12.4
- [iOS](https://www.apple.com/ios) >= 15.4
- [Xcode](https://developer.apple.com/xcode/) >= 14.1

## Setting up Developer Environment

***Step 1.*** Please finish the [Getting Started](getting-started.md) tutorial.

***Step 2.*** Install the dependencies needed to lower to the MPS delegate:

```bash
./backends/apple/mps/install_requirements.sh
```

## Build

### AOT (Ahead-of-time) Components

**Compiling model for MPS delegate**:
- In this step, you will generate a simple ExecuTorch program that lowers the MobileNetV3 model to the MPS delegate. You'll then pass this program (the `.pte` file) to the runtime to run it using the MPS backend. A rough Python sketch of the underlying export flow follows the commands below.

```bash
cd executorch
# Note: the `mps_example` script uses the MPSPartitioner by default for ops that are not yet supported by the MPS delegate. To turn it off, pass `--no-use_partitioner`.
python3 -m examples.apple.mps.scripts.mps_example --model_name="mv3" --bundled --use_fp16

# To see all options, run the following command:
python3 -m examples.apple.mps.scripts.mps_example --help
```
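Under the hood, the script performs an export flow roughly like the sketch below. This is only a rough illustration: the `MPSPartitioner` import path and constructor arguments are assumptions here, so treat the `mps_example` script above as the canonical entry point.

```python
# Rough sketch only: the MPSPartitioner import path and constructor arguments are
# assumptions; check backends/apple/mps in the source tree for the real API.
import torch
import torchvision.models as models

from executorch.exir import to_edge_transform_and_lower
from executorch.backends.apple.mps.partition.mps_partitioner import MPSPartitioner  # assumed path

# Trace MobileNetV3 with a representative input.
model = models.mobilenet_v3_small(weights="DEFAULT").eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)
exported = torch.export.export(model, sample_inputs)

# Delegate supported subgraphs to MPS; compile specs (e.g. fp16) may be required here.
program = to_edge_transform_and_lower(
    exported,
    partitioner=[MPSPartitioner(compile_specs=[])],  # assumed signature
).to_executorch()

with open("mv3_mps.pte", "wb") as f:
    f.write(program.buffer)
```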

### Runtime

**Building the MPS executor runner:**
```bash
# In this step, you'll be building the `mps_executor_runner` that is able to run MPS lowered modules:
cd executorch
./examples/apple/mps/scripts/build_mps_executor_runner.sh
```

## Run the mv3 generated model using the mps_executor_runner

```bash
./cmake-out/examples/apple/mps/mps_executor_runner --model_path mv3_mps_bundled_fp16.pte --bundled_program
```

- You should see the following results. Note that no output file will be generated in this example:
```
I 00:00:00.003290 executorch:mps_executor_runner.mm:286] Model file mv3_mps_bundled_fp16.pte is loaded.
I 00:00:00.003306 executorch:mps_executor_runner.mm:292] Program methods: 1
I 00:00:00.003308 executorch:mps_executor_runner.mm:294] Running method forward
I 00:00:00.003311 executorch:mps_executor_runner.mm:349] Setting up non-const buffer 1, size 606112.
I 00:00:00.003374 executorch:mps_executor_runner.mm:376] Setting up memory manager
I 00:00:00.003376 executorch:mps_executor_runner.mm:392] Loading method name from plan
I 00:00:00.018942 executorch:mps_executor_runner.mm:399] Method loaded.
I 00:00:00.018944 executorch:mps_executor_runner.mm:404] Loading bundled program...
I 00:00:00.018980 executorch:mps_executor_runner.mm:421] Inputs prepared.
I 00:00:00.118731 executorch:mps_executor_runner.mm:438] Model executed successfully.
I 00:00:00.122615 executorch:mps_executor_runner.mm:501] Model verified successfully.
```

### [Optional] Run the generated model directly using pybind
1. Make sure `pybind` MPS support was installed:
```bash
./install_executorch.sh --pybind mps
```
2. Run the `mps_example` script to trace the model and run it directly from python:
```bash
cd executorch
# Check correctness between PyTorch eager forward pass and ExecuTorch MPS delegate forward pass
python3 -m examples.apple.mps.scripts.mps_example --model_name="mv3" --no-use_fp16 --check_correctness
# You should see the following output: `Results between ExecuTorch forward pass with MPS backend and PyTorch forward pass for mv3_mps are matching!`

# Check performance between PyTorch MPS forward pass and ExecuTorch MPS forward pass
python3 -m examples.apple.mps.scripts.mps_example --model_name="mv3" --no-use_fp16 --bench_pytorch
```

### Profiling:
1. [Optional] Generate an [ETRecord](./etrecord.rst) while you're exporting your model.
```bash
cd executorch
python3 -m examples.apple.mps.scripts.mps_example --model_name="mv3" --generate_etrecord -b
```
2. Run your Program on the ExecuTorch runtime and generate an [ETDump](./etdump.md).
```
./cmake-out/examples/apple/mps/mps_executor_runner --model_path mv3_mps_bundled_fp16.pte --bundled_program --dump-outputs
```
3. Create an instance of the Inspector API by passing in the ETDump you have sourced from the runtime, along with the ETRecord optionally generated in step 1. A Python usage sketch follows this list.
```bash
python3 -m sdk.inspector.inspector_cli --etdump_path etdump.etdp --etrecord_path etrecord.bin
```
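The same data can also be inspected from Python. This is only a sketch, assuming the Inspector class is importable from `executorch.sdk`; newer releases may expose it under `executorch.devtools`, so adjust the import to match your installed version.

```python
# Minimal sketch: load the ETDump (plus the optional ETRecord) and print per-operator stats.
from executorch.sdk import Inspector  # assumption: may live under executorch.devtools in newer releases

inspector = Inspector(
    etdump_path="etdump.etdp",
    etrecord="etrecord.bin",  # optional; omit if you skipped step 1
)
inspector.print_data_tabular()
```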

## Deploying and Running on Device

***Step 1***. Create the ExecuTorch core and MPS delegate frameworks to link on iOS
```bash
cd executorch
./build/build_apple_frameworks.sh --mps
```

`mps_delegate.xcframework` will be in the `cmake-out` folder, along with `executorch.xcframework` and `portable_delegate.xcframework`:
```bash
cd cmake-out && ls
```

***Step 2***. Link the frameworks into your Xcode project:
Go to your project target's `Build Phases` - `Link Binaries With Libraries`, click the **+** sign, and add the framework files located in the `Release` folder:
- `executorch.xcframework`
- `portable_delegate.xcframework`
- `mps_delegate.xcframework`

From the same page, include the needed libraries for the MPS delegate:
- `MetalPerformanceShaders.framework`
- `MetalPerformanceShadersGraph.framework`
- `Metal.framework`

In this tutorial, you have learned how to lower a model to the MPS delegate, build the `mps_executor_runner`, and run a lowered model through the MPS delegate, either on macOS or directly on device using the MPS delegate static library.


## Frequently Encountered Errors and Resolution

If you encounter any bugs or issues while following this tutorial, please file a bug/issue on the [ExecuTorch repository](https://github.com/pytorch/executorch/issues) with the hashtag **#mps**.
20 changes: 20 additions & 0 deletions docs/source/backends-overview.md
@@ -0,0 +1,20 @@
# Backend Overview

ExecuTorch backends provide hardware acceleration for a specific hardware target. In order to achieve maximum performance on target hardware, ExecuTorch optimizes the model for a specific backend during the export and lowering process. This means that the resulting .pte file is specialized for the specific hardware. In order to deploy to multiple backends, such as Core ML on iOS and Arm CPU on Android, it is common to generate a dedicated .pte file for each.

The choice of hardware backend is informed by the hardware that the model is intended to be deployed on. Each backend has specific hardware requirements and levels of model support. See the documentation for each hardware backend for more details.

As part of the .pte file creation process, ExecuTorch identifies portions of the model (partitions) that are supported for the given backend. These sections are processed by the backend ahead of time to support efficient execution. Portions of the model that are not supported on the delegate, if any, are executed using the portable fallback implementation on CPU. This allows for partial model acceleration when not all model operators are supported on the backend, but may have negative performance implications. In addition, multiple partitioners can be specified in order of priority. This allows for operators not supported on GPU to run on CPU via XNNPACK, for example.

### Available Backends

Commonly used hardware backends are listed below. For mobile, consider using XNNPACK for Android and XNNPACK or Core ML for iOS. To create a .pte file for a specific backend, pass the appropriate partitioner class to `to_edge_transform_and_lower`, as shown in the sketch after this list. See the appropriate backend documentation for more information.

- [XNNPACK (Mobile CPU)](backends-xnnpack.md)
- [Core ML (iOS)](backends-coreml.md)
- [Metal Performance Shaders (iOS GPU)](backends-mps.md)
- [Vulkan (Android GPU)](backends-vulkan.md)
- [Qualcomm NPU](backends-qualcomm.md)
- [MediaTek NPU](backends-mediatek.md)
- [Arm Ethos-U NPU](backends-arm-ethos-u.md)
- [Cadence DSP](backends-cadence.md)
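
As a rough illustration of this flow, the sketch below lowers a torchvision model to XNNPACK. The model choice and output file name are placeholders, and the `XnnpackPartitioner` import path should be verified against your installed ExecuTorch version.

```python
import torch
import torchvision.models as models

from executorch.exir import to_edge_transform_and_lower
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner

# Any exportable torch.nn.Module works here; MobileNetV2 is just a placeholder.
model = models.mobilenet_v2(weights="DEFAULT").eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)
exported = torch.export.export(model, sample_inputs)

# Pass the backend's partitioner so supported subgraphs are delegated to XNNPACK.
# Multiple partitioners can be listed in priority order, e.g. a GPU partitioner
# first with XnnpackPartitioner() as the CPU fallback.
executorch_program = to_edge_transform_and_lower(
    exported,
    partitioner=[XnnpackPartitioner()],
).to_executorch()

with open("model_xnnpack.pte", "wb") as f:
    f.write(executorch_program.buffer)
```

Targeting a different backend is then mostly a matter of swapping the partitioner list and re-exporting.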
@@ -1,4 +1,4 @@
# Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend
# Qualcomm AI Engine Backend

In this tutorial we will walk you through the process of building ExecuTorch
for Qualcomm AI Engine Direct and running a model on it.
Expand All @@ -14,9 +14,9 @@ Qualcomm AI Engine Direct is also referred to as QNN in the source and documenta
:::
:::{grid-item-card} Tutorials we recommend you complete before this:
:class-card: card-prerequisites
* [Introduction to ExecuTorch](intro-how-it-works.md)
* [Setting up ExecuTorch](getting-started-setup.md)
* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
* [Introduction to ExecuTorch](./intro-how-it-works.md)
* [Getting Started](./getting-started.md)
* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
:::
::::

@@ -347,7 +347,7 @@ The model, inputs, and output location are passed to `qnn_executorch_runner` by
### Running a model via ExecuTorch's android demo-app

An Android demo-app using Qualcomm AI Engine Direct Backend can be found in
`examples`. Please refer to android demo app [tutorial](https://pytorch.org/executorch/stable/demo-apps-android.html).
`examples`. Please refer to android demo app [tutorial](demo-apps-android.md).

## Supported model list
