Add code structure and a few other links to CONTRIBUTING.md #9793

Merged · Apr 7, 2025 · 3 commits

94 changes: 92 additions & 2 deletions CONTRIBUTING.md
@@ -9,6 +9,95 @@ Set up your environment by following the instructions at
https://pytorch.org/executorch/stable/getting-started-setup.html to clone
the repo and install the necessary requirements.

Refer to this [document](https://pytorch.org/executorch/main/using-executorch-building-from-source.html) to build ExecuTorch from source.
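
A typical from-source setup looks roughly like the following. This is a minimal sketch assuming a Linux/macOS shell; the linked document above is authoritative, and your platform may need additional steps (e.g. CMake or Buck2 setup).

```bash
# Clone the repository and fetch its submodules.
git clone https://github.com/pytorch/executorch.git
cd executorch
git submodule update --init --recursive

# Install Python dependencies and ExecuTorch itself in editable mode.
pip install -e .
```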

### Dev Setup for Android
For Android, please refer to the [Android documentation](docs/source/using-executorch-android.md).

### Dev Setup for Apple
For Apple, please refer to the [iOS documentation](docs/source/using-executorch-ios.md).
 

## Codebase structure

<pre>

executorch
├── <a href="backends">backends</a> - Backend delegate implementations for various hardware targets. Each backend uses a partitioner to split the graph into subgraphs that can be executed on specific hardware, a quantizer to optimize model precision, and runtime components to execute the graph on the target hardware. For more information, refer to the <a href="docs/source/backend-delegates-integration.md">backend documentation</a> and the <a href="docs/source/using-executorch-export.md">Export and Lowering tutorial</a>; a minimal export-and-lowering sketch also follows this tree.
│ ├── <a href="backends/apple">apple</a> - Apple-specific backends.
│ │ ├── <a href="backends/apple/coreml">coreml</a> - CoreML backend for Apple devices. See <a href="docs/source/backends-coreml.md">doc</a>.
│ │ └── <a href="backends/apple/mps">mps</a> - Metal Performance Shaders backend for Apple devices. See <a href="docs/source/backends-mps.md">doc</a>.
│ ├── <a href="backends/arm">arm</a> - ARM architecture backends. See <a href="docs/source/backends-arm-ethos-u.md">doc</a>.
│ ├── <a href="backends/cadence">cadence</a> - Cadence-specific backends. See <a href="docs/source/backends-cadence.md">doc</a>.
│ ├── <a href="backends/example">example</a> - Example backend implementations.
│ ├── <a href="backends/mediatek">mediatek</a> - MediaTek-specific backends. See <a href="docs/source/backends-mediatek.md">doc</a>.
│ ├── <a href="backends/openvino">openvino</a> - OpenVINO backend for Intel hardware.
│ ├── <a href="backends/qualcomm">qualcomm</a> - Qualcomm-specific backends. See <a href="docs/source/backends-qualcomm.md">doc</a>.
│ ├── <a href="backends/transforms">transforms</a> - Transformations for backend optimization.
│ ├── <a href="backends/vulkan">vulkan</a> - Vulkan backend for cross-platform GPU support. See <a href="docs/source/backends-vulkan.md">doc</a>.
│ └── <a href="backends/xnnpack">xnnpack</a> - XNNPACK backend for optimized neural network operations. See <a href="docs/source/backends-xnnpack.md">doc</a>.
├── <a href="codegen">codegen</a> - Tooling to autogenerate bindings between kernels and the runtime.
├── <a href="configurations">configurations</a> - Configuration files.
├── <a href="devtools">devtools</a> - Model profiling, debugging, and inspection. Please refer to the <a href="docs/source/devtools-overview.md">tools documentation</a> for more information.
│ ├── <a href="devtools/bundled_program">bundled_program</a> - A tool for validating ExecuTorch models. See <a href="docs/source/bundled-io.md">doc</a>.
│ ├── <a href="devtools/etdump">etdump</a> - ETDump - a format for saving profiling and debugging data from the runtime. See <a href="docs/source/etdump.md">doc</a>.
│ ├── <a href="devtools/etrecord">etrecord</a> - ETRecord - AOT debug artifact for ExecuTorch. See <a href="https://pytorch.org/executorch/main/etrecord.html">doc</a>.
│ ├── <a href="devtools/inspector">inspector</a> - Python API to inspect ETDump and ETRecord. See <a href="https://pytorch.org/executorch/main/model-inspector.html">doc</a>.
│ └── <a href="devtools/visualization">visualization</a> - Visualization tools for representing model structure and performance metrics.
├── <a href="docs">docs</a> - Static docs tooling and documentation source files.
├── <a href="examples">examples</a> - Examples of various user flows, such as model export, delegates, and runtime execution.
├── <a href="exir">exir</a> - Ahead-of-time library: model capture and lowering APIs. EXport Intermediate Representation (EXIR) is a format for representing the result of <a href="https://pytorch.org/docs/main/export.ir_spec.html">torch.export</a>. This directory contains utilities and passes for lowering EXIR graphs into different <a href="/docs/source/ir-exir.md">dialects</a> and eventually into a form suitable to run on target hardware.
│ ├── <a href="exir/_serialize">_serialize</a> - Serialize final export artifact.
│ ├── <a href="exir/backend">backend</a> - Backend delegate ahead of time APIs.
│ ├── <a href="exir/capture">capture</a> - Program capture.
│ ├── <a href="exir/dialects">dialects</a> - Op sets for various dialects in the export process. Please refer to the <a href="/docs/source/ir-exir.md">EXIR spec</a> and the <a href="/docs/source/compiler-backend-dialect.md">backend dialect</a> doc for more details.
│ ├── <a href="exir/emit">emit</a> - Conversion from ExportedProgram to ExecuTorch execution instructions.
│ ├── <a href="exir/operator">operator</a> - Operator node manipulation utilities.
│ ├── <a href="exir/passes">passes</a> - Built-in compiler passes.
│ ├── <a href="exir/program">program</a> - Export artifacts.
│ ├── <a href="exir/serde">serde</a> - Graph module serialization/deserialization.
│ └── <a href="exir/verification">verification</a> - IR verification.
├── <a href="extension">extension</a> - Extensions built on top of the runtime.
│ ├── <a href="extension/android">android</a> - ExecuTorch wrappers for Android apps. Please refer to the <a href="docs/source/using-executorch-android.md">Android documentation</a> and <a href="https://pytorch.org/executorch/main/javadoc/">Javadoc</a> for more information.
│ ├── <a href="extension/apple">apple</a> - ExecuTorch wrappers for iOS apps. Please refer to the <a href="docs/source/using-executorch-ios.md">iOS documentation</a> and <a href="https://pytorch.org/executorch/stable/apple-runtime.html">how to integrate into Apple platform</a> for more information.
│ ├── <a href="extension/aten_util">aten_util</a> - Converts to and from PyTorch ATen types.
│ ├── <a href="extension/data_loader">data_loader</a> - 1st party data loader implementations.
│ ├── <a href="extension/evalue_util">evalue_util</a> - Helpers for working with EValue objects.
│ ├── <a href="extension/gguf_util">gguf_util</a> - Tools to convert from the GGUF format.
│ ├── <a href="extension/kernel_util">kernel_util</a> - Helpers for registering kernels.
│ ├── <a href="extension/llm">llm</a> - Library for running LLMs on ExecuTorch, including common optimization passes and runtime C++ components. Please refer to the <a href="docs/source/llm/getting-started.md">LLM documentation</a> for more information.
│ ├── <a href="extension/memory_allocator">memory_allocator</a> - 1st party memory allocator implementations.
│ ├── <a href="extension/module">module</a> - A simplified C++ wrapper for the runtime. An abstraction that deserializes and executes an ExecuTorch artifact (.pte file). Refer to the <a href="docs/source/extension-module.md">module documentation</a> for more information.
│ ├── <a href="extension/parallel">parallel</a> - C++ threadpool integration.
│ ├── <a href="extension/pybindings">pybindings</a> - Python API for the ExecuTorch runtime. This powers the <a href="https://pytorch.org/executorch/main/runtime-python-api-reference.html">runtime Python API</a> for ExecuTorch.
│ ├── <a href="extension/pytree">pytree</a> - C++ and Python flattening and unflattening lib for pytrees.
│ ├── <a href="extension/runner_util">runner_util</a> - Helpers for writing C++ PTE-execution tools.
│ ├── <a href="extension/tensor">tensor</a> - Tensor maker and <code>TensorPtr</code>, details in <a href="/docs/source/extension-tensor.md">this documentation</a>. For how to use <code>TensorPtr</code> and <code>Module</code>, please refer to the <a href="/docs/source/using-executorch-cpp.md">"Using ExecuTorch with C++"</a> doc.
│ ├── <a href="extension/testing_util">testing_util</a> - Helpers for writing C++ tests.
│ ├── <a href="extension/threadpool">threadpool</a> - Threadpool.
│ └── <a href="extension/training">training</a> - Experimental libraries for on-device training.
├── <a href="kernels">kernels</a> - 1st party kernel implementations.
│ ├── <a href="kernels/aten">aten</a> - ATen kernel implementations.
│ ├── <a href="kernels/optimized">optimized</a> - Optimized kernel implementations.
│ ├── <a href="kernels/portable">portable</a> - Reference implementations of ATen operators.
│ ├── <a href="kernels/prim_ops">prim_ops</a> - Special ops used in the ExecuTorch runtime for control flow and symbolic primitives.
│ └── <a href="kernels/quantized">quantized</a> - Quantized kernel implementations.
├── <a href="profiler">profiler</a> - Utilities for profiling runtime execution.
├── <a href="runtime">runtime</a> - Core C++ runtime. These components are used to execute the ExecuTorch program. Please refer to the <a href="docs/source/runtime-overview.md">runtime documentation</a> for more information.
│ ├── <a href="runtime/backend">backend</a> - Backend delegate runtime APIs.
│ ├── <a href="runtime/core">core</a> - Core structures used across all levels of the runtime, such as <code>Tensor</code>, <code>EValue</code>, <code>Error</code>, and <code>Result</code>.
│ ├── <a href="runtime/executor">executor</a> - Model loading, initialization, and execution. Runtime components that execute the ExecuTorch program, such as <code>Program</code> and <code>Method</code>. Refer to the <a href="https://pytorch.org/executorch/main/executorch-runtime-api-reference.html">runtime API documentation</a> for more information.
│ ├── <a href="runtime/kernel">kernel</a> - Kernel registration and management.
│ └── <a href="runtime/platform">platform</a> - Layer between architecture specific code and portable C++.
├── <a href="schema">schema</a> - ExecuTorch PTE file format flatbuffer schemas.
├── <a href="scripts">scripts</a> - Utility scripts for building libs, size management, dependency management, etc.
├── <a href="shim">shim</a> - Compatibility layer between OSS and Internal builds.
├── <a href="test">test</a> - Broad scoped end-to-end tests.
├── <a href="third-party">third-party</a> - Third-party dependencies.
├── <a href="tools">tools</a> - Tools for building ExecuTorch from source with different build systems (CMake, Buck).
└── <a href="util">util</a> - Various helpers and scripts.
</pre>
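
As a concrete illustration of how the `exir` and `backends` directories fit together, here is a minimal export-and-lowering sketch that delegates supported subgraphs to the XNNPACK backend. The module paths and APIs shown are assumptions based on the current codebase and may change; the [Export and Lowering tutorial](docs/source/using-executorch-export.md) is the authoritative reference.

```python
import torch

# Assumed import paths; verify against the current codebase.
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower


class AddOne(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1


model = AddOne().eval()
example_inputs = (torch.randn(1, 4),)

# 1. Capture the model with torch.export (produces an ExportedProgram).
exported_program = torch.export.export(model, example_inputs)

# 2. Lower to the EXIR edge dialect and delegate supported subgraphs to XNNPACK.
executorch_program = to_edge_transform_and_lower(
    exported_program,
    partitioner=[XnnpackPartitioner()],
).to_executorch()

# 3. Serialize the final .pte artifact that the C++ runtime can load.
with open("add_one_xnnpack.pte", "wb") as f:
    f.write(executorch_program.buffer)
```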

&nbsp;

## Contributing workflow
@@ -221,7 +310,7 @@ CI is run automatically on all pull requests. However, if you want to run tests

- The `sh test/build_size_test.sh` script will compile the C++ runtime along with the portable kernels.
- The `test/run_oss_cpp_tests.sh` script will build and run C++ tests locally.
- Running `pytest` from the root directory will run Python tests locally.
- Running `pytest` from the root directory will run Python tests locally. Make sure to run this after finishing [Dev Install](#dev-install).
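
For example, while iterating you can point `pytest` at a single directory or file instead of the full suite (the paths below are only illustrative):

```bash
# Run the full Python test suite from the repo root.
pytest

# Run only the tests under a specific directory (illustrative path).
pytest exir/tests -v
```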

### Writing Tests
To help keep code quality high, ExecuTorch uses a combination of unit tests and
@@ -279,7 +368,8 @@ for basics.
- Good title: "Add XYZ method to ABC"
- Give the PR a clear and thorough description. Don't just describe what the PR
does: the diff will do that. Explain *why* you are making this change, in a
way that will make sense to someone years from now.
way that will make sense to someone years from now. If the PR is a bug fix,
include the issue number at the beginning of the description: "Fixes #1234".
- Explain how you have tested your changes by including repeatable instructions for
testing the PR.
- If you added tests, this can be as simple as the command you used to run the
57 changes: 1 addition & 56 deletions README.md
@@ -65,62 +65,7 @@ We welcome contributions. To get started review the [guidelines](CONTRIBUTING.md

## Directory Structure

```
executorch
├── backends # Backend delegate implementations.
├── codegen # Tooling to autogenerate bindings between kernels and the runtime.
├── configurations
├── docs # Static docs tooling.
├── examples # Examples of various user flows, such as model export, delegates, and runtime execution.
├── exir # Ahead-of-time library: model capture and lowering APIs.
| ├── _serialize # Serialize final export artifact.
| ├── backend # Backend delegate ahead of time APIs
| ├── capture # Program capture.
| ├── dialects # Op sets for various dialects in the export process.
| ├── emit # Conversion from ExportedProgram to ExecuTorch execution instructions.
| ├── operator # Operator node manipulation utilities.
| ├── passes # Built-in compiler passes.
| ├── program # Export artifacts.
| ├── serde # Graph module serialization/deserialization.
| ├── verification # IR verification.
├── extension # Extensions built on top of the runtime.
| ├── android # ExecuTorch wrappers for Android apps.
| ├── apple # ExecuTorch wrappers for iOS apps.
| ├── aten_util # Converts to and from PyTorch ATen types.
| ├── data_loader # 1st party data loader implementations.
| ├── evalue_util # Helpers for working with EValue objects.
| ├── gguf_util # Tools to convert from the GGUF format.
| ├── kernel_util # Helpers for registering kernels.
| ├── memory_allocator # 1st party memory allocator implementations.
| ├── module # A simplified C++ wrapper for the runtime.
| ├── parallel # C++ threadpool integration.
| ├── pybindings # Python API for executorch runtime.
| ├── pytree # C++ and Python flattening and unflattening lib for pytrees.
| ├── runner_util # Helpers for writing C++ PTE-execution tools.
| ├── testing_util # Helpers for writing C++ tests.
| ├── training # Experimental libraries for on-device training
├── kernels # 1st party kernel implementations.
| ├── aten
| ├── optimized
| ├── portable # Reference implementations of ATen operators.
| ├── prim_ops # Special ops used in executorch runtime for control flow and symbolic primitives.
| ├── quantized
├── profiler # Utilities for profiling runtime execution.
├── runtime # Core C++ runtime.
| ├── backend # Backend delegate runtime APIs.
| ├── core # Core structures used across all levels of the runtime.
| ├── executor # Model loading, initialization, and execution.
| ├── kernel # Kernel registration and management.
| ├── platform # Layer between architecture specific code and portable C++.
├── schema # ExecuTorch PTE file format flatbuffer schemas.
├── scripts # Utility scripts for building libs, size management, dependency management, etc.
├── tools # Development tool management.
├── devtools # Model profiling, debugging, and introspection.
├── shim # Compatibility layer between OSS and Internal builds
├── test # Broad scoped end-to-end tests.
├── third-party # Third-party dependencies.
├── util # Various helpers and scripts.
```
Please refer to the [Codebase structure](CONTRIBUTING.md#codebase-structure) section of the [Contributing Guidelines](CONTRIBUTING.md) for more details.

## License
ExecuTorch is BSD licensed, as found in the LICENSE file.
11 changes: 11 additions & 0 deletions docs/source/using-executorch-building-from-source.md
@@ -92,6 +92,17 @@ portability details.
pip install -e .
```

If you modify C++ files, you will still need to reinstall ExecuTorch from source; the editable install only picks up Python changes automatically.
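
For example, after changing C++ sources, re-running the editable install from the repository root rebuilds the native components (a sketch; this is the same command shown above):

```bash
# Rebuild and reinstall after modifying C++ files.
pip install -e .
```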

> **_WARNING:_**
> Some modules can't be imported directly in editable mode. This is a known [issue](https://github.com/pytorch/executorch/issues/9558) and we are actively working on a fix. To work around it:
> ```bash
> # This will fail
> python -c "from executorch.exir import CaptureConfig"
> # But this will succeed
> python -c "from executorch.exir.capture import CaptureConfig"
> ```

> **_NOTE:_** Cleaning the build system
>
> When fetching a new version of the upstream repo (via `git fetch` or `git