Update name from xtensa to cadence (#2982)
Summary:
Pull Request resolved: #2982

As titled.

Reviewed By: cccclai

Differential Revision: D55998135

fbshipit-source-id: a57bd233afe170290c7def4406d6d6e769d467ed
mcremon-meta authored and facebook-github-bot committed Apr 11, 2024
1 parent c7fd394 commit 7b8343b
Showing 33 changed files with 73 additions and 73 deletions.
28 changes: 14 additions & 14 deletions docs/source/build-run-xtensa.md
@@ -64,7 +64,7 @@ Step 2. Make sure you have completed the ExecuTorch setup tutorials linked to at
The working tree is:

```
-examples/xtensa/
+examples/cadence/
├── aot
├── kernels
├── ops
@@ -75,7 +75,7 @@ examples/xtensa/

***AoT (Ahead-of-Time) Components***:

-The AoT folder contains all of the Python scripts and functions needed to export the model to an ExecuTorch `.pte` file. In our case, [export_example.py](https://github.com/pytorch/executorch/blob/main/examples/xtensa/aot/export_example.py) is an API that takes a model (nn.Module) and representative inputs and runs it through the quantizer (from [quantizer.py](https://github.com/pytorch/executorch/blob/main/examples/xtensa/aot/quantizer.py)). Then a few compiler passes, also defined in [quantizer.py](https://github.com/pytorch/executorch/blob/main/examples/xtensa/aot/quantizer.py), will replace operators with custom ones that are supported and optimized on the chip. Any operator needed to compute things should be defined in [meta_registrations.py](https://github.com/pytorch/executorch/blob/main/examples/xtensa/aot/meta_registrations.py) and have corresponding implementations in the other folders.
+The AoT folder contains all of the Python scripts and functions needed to export the model to an ExecuTorch `.pte` file. In our case, [export_example.py](https://github.com/pytorch/executorch/blob/main/examples/cadence/aot/export_example.py) is an API that takes a model (nn.Module) and representative inputs and runs it through the quantizer (from [quantizer.py](https://github.com/pytorch/executorch/blob/main/examples/cadence/aot/quantizer.py)). Then a few compiler passes, also defined in [quantizer.py](https://github.com/pytorch/executorch/blob/main/examples/cadence/aot/quantizer.py), will replace operators with custom ones that are supported and optimized on the chip. Any operator needed to compute things should be defined in [meta_registrations.py](https://github.com/pytorch/executorch/blob/main/examples/cadence/aot/meta_registrations.py) and have corresponding implementations in the other folders.

***Operators***:

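A sketch of the AoT flow described above (a minimal example; the import path and model are illustrative assumptions, and `export_model` is the entry point this commit renames from `export_xtensa_model`):

```python
# Illustrative sketch of the AoT export flow; not part of this commit.
# Assumes the export_model API from examples/cadence/aot/export_example.py
# and that the script runs from the executorch repo root.
import torch
import torch.nn as nn

from examples.cadence.aot.export_example import export_model  # assumed path


class TinyLinear(nn.Module):
    """A single Linear layer, representative of ASR building blocks."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 8)

    def forward(self, x):
        return self.linear(x)


model = TinyLinear().eval()
example_inputs = (torch.randn(1, 16),)

# Quantizes, runs the fusion passes, lowers to edge, and writes the .pte file.
export_model(model, example_inputs)
```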
@@ -101,14 +101,14 @@ python3 -m examples.portable.scripts.export --model_name="add"
***Quantized Operators***:

The other, more complex models use custom operators, including:
-- a quantized [linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/xtensa/tests/quantized_linear_example.py#L28). Linear is the backbone of most Automatic Speech Recognition (ASR) models.
-- a quantized [conv1d](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/xtensa/tests/quantized_conv1d_example.py#L36). Convolutions are important in wake word and many denoising models.
+- a quantized [linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/tests/quantized_linear_example.py#L28). Linear is the backbone of most Automatic Speech Recognition (ASR) models.
+- a quantized [conv1d](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/tests/quantized_conv1d_example.py#L36). Convolutions are important in wake word and many denoising models.

In both cases the generated file is called `XtensaDemoModel.pte`.

```bash
cd executorch
-python3 -m examples.xtensa.tests.quantized_<linear,conv1d>_example
+python3 -m examples.cadence.tests.quantized_<linear,conv1d>_example
```

***Small Model: RNNT predictor***:
@@ -118,7 +118,7 @@ The predictor is a sequence of basic ops (embedding, ReLU, linear, layer norm) a

```bash
cd executorch
-python3 -m examples.xtensa.tests.rnnt_predictor_quantized_example
+python3 -m examples.cadence.tests.rnnt_predictor_quantized_example
```

The generated file is called `XtensaDemoModel.pte`.
@@ -131,7 +131,7 @@ In this step, you'll be building the DSP firmware image that consists of the sam
***Step 1***. Configure the environment variables needed to point to the Xtensa toolchain that you have installed in the previous step. The three environment variables that need to be set include:
```bash
# Directory in which the Xtensa toolchain was installed
-export XTENSA_TOOLCHAIN=/home/user_name/xtensa/XtDevTools/install/tools
+export XTENSA_TOOLCHAIN=/home/user_name/cadence/XtDevTools/install/tools
# The version of the toolchain that was installed. This is essentially the name of the directory
# that is present in the XTENSA_TOOLCHAIN directory from above.
export TOOLCHAIN_VER=RI-2021.8-linux
@@ -151,7 +151,7 @@ cd executorch
rm -rf cmake-out
# prebuild and install executorch library
cmake -DBUCK2=buck2 \
--DCMAKE_TOOLCHAIN_FILE=<path_to_executorch>/examples/xtensa/xtensa.cmake \
+-DCMAKE_TOOLCHAIN_FILE=<path_to_executorch>/examples/cadence/cadence.cmake \
-DCMAKE_INSTALL_PREFIX=cmake-out \
-DCMAKE_BUILD_TYPE=Debug \
-DPYTHON_EXECUTABLE=python3 \
@@ -165,18 +165,18 @@ cmake -DBUCK2=buck2 \
-Bcmake-out .

cmake --build cmake-out -j8 --target install --config Debug
-# build xtensa runner
+# build cadence runner
cmake -DCMAKE_BUILD_TYPE=Debug \
--DCMAKE_TOOLCHAIN_FILE=<path_to_executorch>/examples/xtensa/xtensa.cmake \
+-DCMAKE_TOOLCHAIN_FILE=<path_to_executorch>/examples/cadence/cadence.cmake \
-DCMAKE_PREFIX_PATH=<path_to_executorch>/cmake-out \
-DMODEL_PATH=<path_to_program_file_generated_in_previous_step> \
-DNXP_SDK_ROOT_DIR=<path_to_nxp_sdk_root> -DEXECUTORCH_BUILD_FLATC=0 \
-DFLATC_EXECUTABLE="$(which flatc)" \
-DNN_LIB_BASE_DIR=<path_to_nnlib_cloned_in_step_2> \
--Bcmake-out/examples/xtensa \
-examples/xtensa
+-Bcmake-out/examples/cadence \
+examples/cadence

-cmake --build cmake-out/examples/xtensa -j8 -t xtensa_executorch_example
+cmake --build cmake-out/examples/cadence -j8 -t cadence_executorch_example
```

After successfully running the above step, you should see two binary files in the CMake output directory.
@@ -213,6 +213,6 @@ First 20 elements of output 0

In this tutorial, you have learned how to export a quantized operation, build the ExecuTorch runtime, and run this model on the Xtensa HiFi4 DSP chip.

-The (quantized linear) model in this tutorial uses an operation typical of ASR models, and can be extended to a complete ASR model by creating the model as a new test and adding the needed operators/kernels to [operators](https://github.com/pytorch/executorch/blob/main/examples/xtensa/ops) and [kernels](https://github.com/pytorch/executorch/blob/main/examples/xtensa/kernels).
+The (quantized linear) model in this tutorial uses an operation typical of ASR models, and can be extended to a complete ASR model by creating the model as a new test and adding the needed operators/kernels to [operators](https://github.com/pytorch/executorch/blob/main/examples/cadence/ops) and [kernels](https://github.com/pytorch/executorch/blob/main/examples/cadence/kernels).

Other models can be created following the same structure, always assuming that operators and kernels are available.
examples/{xtensa → cadence}/CMakeLists.txt
@@ -12,7 +12,7 @@ if(NOT CMAKE_CXX_STANDARD)
endif()

# Set the project name.
-project(xtensa_executorch_example)
+project(cadence_executorch_example)

# Source root directory for executorch.
if(NOT EXECUTORCH_ROOT)
@@ -100,21 +100,21 @@ add_custom_command(

add_custom_target(gen_model_header DEPENDS ${CMAKE_BINARY_DIR}/model_pte.h)

-add_executable(xtensa_executorch_example executor_runner.cpp)
-add_dependencies(xtensa_executorch_example gen_model_header)
+add_executable(cadence_executorch_example executor_runner.cpp)
+add_dependencies(cadence_executorch_example gen_model_header)

# lint_cmake: -linelength
-target_include_directories(xtensa_executorch_example PUBLIC ${ROOT_DIR}/..
+target_include_directories(cadence_executorch_example PUBLIC ${ROOT_DIR}/..
${CMAKE_BINARY_DIR}
${_common_include_directories})

-target_link_options(xtensa_executorch_example PRIVATE
+target_link_options(cadence_executorch_example PRIVATE
-mlsp=${NXP_SDK_ROOT_DIR}/devices/MIMXRT685S/xtensa/min-rt)
-target_link_libraries(xtensa_executorch_example dsp_mu_polling_libs
-xtensa_ops_lib extension_runner_util executorch)
+target_link_libraries(cadence_executorch_example dsp_mu_polling_libs
+cadence_ops_lib extension_runner_util executorch)

add_custom_command(
-TARGET xtensa_executorch_example
+TARGET cadence_executorch_example
POST_BUILD
COMMAND
${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_LIST_DIR}/utils/post_compilation.py
File renamed without changes.
examples/{xtensa → cadence}/aot/export_example.py
@@ -17,20 +17,20 @@

from .compiler import export_to_edge
from .quantizer import (
+CadenceBaseQuantizer,
QuantFusion,
-ReplacePT2DequantWithXtensaDequant,
-ReplacePT2QuantWithXtensaQuant,
-XtensaBaseQuantizer,
+ReplacePT2DequantWithCadenceDequant,
+ReplacePT2QuantWithCadenceQuant,
)


FORMAT = "[%(levelname)s %(asctime)s %(filename)s:%(lineno)s] %(message)s"
logging.basicConfig(level=logging.INFO, format=FORMAT)


-def export_xtensa_model(model, example_inputs):
+def export_model(model, example_inputs):
# Quantizer
-quantizer = XtensaBaseQuantizer()
+quantizer = CadenceBaseQuantizer()

# Export
model_exp = capture_pre_autograd_graph(model, example_inputs)
@@ -42,24 +42,24 @@ def export_xtensa_model(model, example_inputs):
# Convert
converted_model = convert_pt2e(prepared_model)

-# pyre-fixme[16]: Pyre doesn't get that XtensaQuantizer has a patterns attribute
+# pyre-fixme[16]: Pyre doesn't get that CadenceQuantizer has a patterns attribute
patterns = [q.pattern for q in quantizer.quantizers]
QuantFusion(patterns)(converted_model)

-# Get edge program (note: the name will change to export_to_xtensa in future PRs)
+# Get edge program (note: the name will change to export_to_cadence in future PRs)
edge_prog_manager = export_to_edge(converted_model, example_inputs, pt2_quant=True)

# Run a couple required passes for quant/dequant ops
-xtensa_prog_manager = edge_prog_manager.transform(
-[ReplacePT2QuantWithXtensaQuant(), ReplacePT2DequantWithXtensaDequant()],
+cadence_prog_manager = edge_prog_manager.transform(
+[ReplacePT2QuantWithCadenceQuant(), ReplacePT2DequantWithCadenceDequant()],
check_ir_validity=False,
)

-exec_prog = xtensa_prog_manager.to_executorch()
+exec_prog = cadence_prog_manager.to_executorch()

logging.info(
f"Final exported graph module:\n{exec_prog.exported_program().graph_module}"
)

-# Save the program as XtensaDemoModel.pte
-save_pte_program(exec_prog, "XtensaDemoModel")
+# Save the program as CadenceDemoModel.pte
+save_pte_program(exec_prog, "CadenceDemoModel")
examples/{xtensa → cadence}/aot/meta_registrations.py
@@ -12,7 +12,7 @@

from .utils import get_conv1d_output_size

-lib = Library("xtensa", "DEF")
+lib = Library("cadence", "DEF")

lib.define(
"quantize_per_tensor(Tensor input, float scale, int zero_point, int quant_min, int quant_max, ScalarType dtype) -> (Tensor Z)"
@@ -56,7 +56,7 @@
"quantized_conv.out(Tensor input, Tensor weight, Tensor bias, int[] stride, SymInt[] padding, int[] dilation, int groups, int input_zero_point, Tensor weight_zero_point, Tensor bias_scale, float out_scale, int out_zero_point, Tensor out_multiplier, Tensor out_shift, bool channel_last=False, *, Tensor(a!) out) -> Tensor(a!)"
)

-m = Library("xtensa", "IMPL", "Meta")
+m = Library("cadence", "IMPL", "Meta")


@impl(m, "quantize_per_tensor")
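Once these registrations are imported, the ops resolve under the `cadence` namespace. A sketch of exercising the shape-only meta kernel (import path and numeric values are illustrative):

```python
# Sketch: invoke the registered op on the meta device to check shape
# propagation; meta tensors carry shapes and dtypes but no data.
import torch

import examples.cadence.aot.meta_registrations  # noqa: F401  (assumed path)

x = torch.randn(1, 16, device="meta")
q = torch.ops.cadence.quantize_per_tensor(
    x,           # input
    0.02,        # scale (illustrative)
    0,           # zero_point
    -128,        # quant_min
    127,         # quant_max
    torch.int8,  # dtype
)
print(q.shape)  # torch.Size([1, 16])
```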
examples/{xtensa → cadence}/aot/quantizer.py
@@ -437,7 +437,7 @@ def get_anchors(
)

def replacement_op(self):
-return torch.ops.xtensa.quantized_linear.default
+return torch.ops.cadence.quantized_linear.default


class LinearFunctionalPattern(QuantizationPattern):
@@ -457,7 +457,7 @@ def get_anchors(
)

def replacement_op(self):
-return torch.ops.xtensa.quantized_linear.default
+return torch.ops.cadence.quantized_linear.default


class LayerNormPattern(QuantizationPattern):
@@ -476,7 +476,7 @@ def get_anchors(self, gm, fused_partition) -> PartitionAnchors:
)

def replacement_op(self):
-return torch.ops.xtensa.quantized_layer_norm.default
+return torch.ops.cadence.quantized_layer_norm.default


class Conv1dPattern(QuantizationPattern):
@@ -503,7 +503,7 @@ def get_anchors(
)

def replacement_op(self):
-return torch.ops.xtensa.quantized_conv.default
+return torch.ops.cadence.quantized_conv.default


class Conv2dPattern(QuantizationPattern):
@@ -530,7 +530,7 @@ def get_anchors(
)

def replacement_op(self):
-return torch.ops.xtensa.quantized_conv.default
+return torch.ops.cadence.quantized_conv.default


class AddmmPattern(QuantizationPattern):
@@ -550,7 +550,7 @@ def get_anchors(
)

def replacement_op(self):
-return torch.ops.xtensa.quantized_linear.default
+return torch.ops.cadence.quantized_linear.default


class ReluPattern(QuantizationPattern):
@@ -573,7 +573,7 @@ def get_anchors(
)

def replacement_op(self):
-return torch.ops.xtensa.quantized_relu.default
+return torch.ops.cadence.quantized_relu.default


class GenericQuantizer(Quantizer):
@@ -657,7 +657,7 @@ def get_supported_operators(cls) -> List[OperatorConfig]:
)


-class XtensaBaseQuantizer(ComposableQuantizer):
+class CadenceBaseQuantizer(ComposableQuantizer):
def __init__(self):
static_qconfig = QuantizationConfig(
act_qspec,
@@ -821,34 +821,34 @@ def mark_fused(cls, nodes) -> bool:
n.meta["QuantFusion"] = True


-class ReplacePT2QuantWithXtensaQuant(ExportPass):
+class ReplacePT2QuantWithCadenceQuant(ExportPass):
"""
-Replace the pt2 quantization ops with custom xtensa quantization ops.
+Replace the pt2 quantization ops with custom cadence quantization ops.
"""

def call_operator(self, op, args, kwargs, meta):
if op not in {exir_ops.edge.quantized_decomposed.quantize_per_tensor.default}:
return super().call_operator(op, args, kwargs, meta)

return super().call_operator(
-exir_ops.edge.xtensa.quantize_per_tensor.default,
+exir_ops.edge.cadence.quantize_per_tensor.default,
args,
kwargs,
meta,
)


-class ReplacePT2DequantWithXtensaDequant(ExportPass):
+class ReplacePT2DequantWithCadenceDequant(ExportPass):
"""
-Replace the pt2 dequantization ops with custom xtensa dequantization ops.
+Replace the pt2 dequantization ops with custom cadence dequantization ops.
"""

def call_operator(self, op, args, kwargs, meta):
if op not in {exir_ops.edge.quantized_decomposed.dequantize_per_tensor.default}:
return super().call_operator(op, args, kwargs, meta)

return super().call_operator(
-exir_ops.edge.xtensa.dequantize_per_tensor.default,
+exir_ops.edge.cadence.dequantize_per_tensor.default,
args,
kwargs,
meta,
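Both passes share the same shape: intercept one edge-dialect op in `call_operator` and re-emit a replacement. A hypothetical pass in the same mold (the replacement op below is a placeholder, not a real cadence op):

```python
# Hypothetical pass mirroring ReplacePT2QuantWithCadenceQuant above.
from executorch.exir.dialects._ops import ops as exir_ops
from executorch.exir.pass_base import ExportPass


class ReplaceAddWithCustomAdd(ExportPass):
    """Swap one edge-dialect op for another, leaving args/kwargs intact."""

    def call_operator(self, op, args, kwargs, meta):
        if op != exir_ops.edge.aten.add.Tensor:
            return super().call_operator(op, args, kwargs, meta)
        return super().call_operator(
            exir_ops.edge.cadence.custom_add.default,  # placeholder op
            args,
            kwargs,
            meta,
        )
```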
File renamed without changes.
File renamed without changes.
examples/{xtensa → cadence}/executor_runner.cpp
@@ -12,7 +12,7 @@
* This is a simple executor_runner that boots up the DSP, configures the serial
* port, sends a bunch of test messages to the M33 core and then loads the model
* defined in model_pte.h. It runs this model using the ops available in
-* xtensa/ops directory.
+* cadence/ops directory.
*/

#include <fsl_debug_console.h>
examples/{xtensa → cadence}/kernels/CMakeLists.txt
@@ -5,10 +5,10 @@
# LICENSE file in the root directory of this source tree.

# lint_cmake: -linelength
-add_library(xtensa_kernels kernels.cpp ${EXECUTORCH_ROOT}/examples/xtensa/third-party/nnlib-hifi4/matmul_asym8uxasym8u_asym8u.cpp)
+add_library(cadence_kernels kernels.cpp ${EXECUTORCH_ROOT}/examples/cadence/third-party/nnlib-hifi4/matmul_asym8uxasym8u_asym8u.cpp)

target_include_directories(
-xtensa_kernels
+cadence_kernels
PUBLIC .
${NN_LIB_BASE_DIR}/xa_nnlib/algo/common/include/
${NN_LIB_BASE_DIR}/xa_nnlib/include/nnlib
File renamed without changes.
File renamed without changes.
examples/{xtensa → cadence}/ops/CMakeLists.txt
@@ -31,16 +31,16 @@ set(_aten_ops__srcs
"${CMAKE_CURRENT_SOURCE_DIR}/op_view_copy.cpp"
"${EXECUTORCH_ROOT}/kernels/portable/cpu/util/broadcast_util.cpp"
"${EXECUTORCH_ROOT}/kernels/portable/cpu/util/repeat_util.cpp")
-add_library(aten_ops_xtensa ${_aten_ops__srcs})
-target_link_libraries(aten_ops_xtensa PUBLIC executorch)
-target_link_libraries(aten_ops_xtensa PRIVATE xtensa_kernels)
+add_library(aten_ops_cadence ${_aten_ops__srcs})
+target_link_libraries(aten_ops_cadence PUBLIC executorch)
+target_link_libraries(aten_ops_cadence PRIVATE cadence_kernels)

# Let files say "include <executorch/path/to/header.h>".
set(_common_include_directories ${EXECUTORCH_ROOT}/..)

-target_include_directories(aten_ops_xtensa PUBLIC ${ROOT_DIR}/..
-${CMAKE_BINARY_DIR}
-${_common_include_directories})
+target_include_directories(aten_ops_cadence PUBLIC ${ROOT_DIR}/..
+${CMAKE_BINARY_DIR}
+${_common_include_directories})

# Custom ops that are needed to run the test model.
add_library(
@@ -52,7 +52,7 @@ target_include_directories(custom_ops PUBLIC ${ROOT_DIR}/..
${_common_include_directories})

target_link_libraries(custom_ops PUBLIC executorch)
-target_link_libraries(custom_ops PRIVATE xtensa_kernels)
+target_link_libraries(custom_ops PRIVATE cadence_kernels)

# Generate C++ bindings to register kernels into both PyTorch (for AOT) and
# Executorch (for runtime). Here select all ops in functions.yaml
@@ -62,6 +62,6 @@ generate_bindings_for_kernels(
message("Generated files ${gen_command_sources}")

gen_operators_lib(
-"xtensa_ops_lib"
+"cadence_ops_lib"
KERNEL_LIBS custom_ops
-DEPS aten_ops_xtensa)
+DEPS aten_ops_cadence)