
Error in pip editable mode in export_llama #9278

@iseeyuan

Description

🐛 Describe the bug

Repro: start from a fresh ExecuTorch (ET) git clone.

git clone https://github.com/pytorch/executorch.git
cd executorch

git submodule sync
git submodule update --init

./install_executorch.sh --editable

Running the command below produces the error:

python -m examples.models.llama.export_llama -p /Users/myuan/data/stories_110M/params.json -c /Users/myuan/data/stories_110M/stories110M.pt -X --xnnpack-extended-ops -qmode 8da4w -G 128 --use_kv_cache --use_sdpa_with_kv_cache --verbose --output_name test_sdpa_with_kv.pte 
Traceback (most recent call last):
  File "/Users/myuan/src/executorch/exir/dialects/_ops.py", line 100, in __getattr__
    parent_packet = getattr(self._op_namespace, op_name)
  File "/Users/myuan/miniconda3/envs/executorch/lib/python3.10/site-packages/torch/_ops.py", line 1267, in __getattr__
    raise AttributeError(
AttributeError: '_OpNamespace' 'quantized_decomposed' object has no attribute 'quantize_per_channel'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/myuan/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/myuan/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/myuan/src/executorch/examples/models/llama/export_llama.py", line 20, in <module>
    from .export_llama_lib import build_args_parser, export_llama

  File "/Users/myuan/src/executorch/examples/models/llama/export_llama_lib.py", line 25, in <module>
    from executorch.backends.vulkan._passes.remove_asserts import remove_asserts
  File "/Users/myuan/src/executorch/backends/vulkan/__init__.py", line 7, in <module>
    from .partitioner.vulkan_partitioner import VulkanPartitioner
  File "/Users/myuan/src/executorch/backends/vulkan/partitioner/vulkan_partitioner.py", line 16, in <module>
    from executorch.backends.vulkan.op_registry import (
  File "/Users/myuan/src/executorch/backends/vulkan/op_registry.py", line 225, in <module>
    exir_ops.edge.quantized_decomposed.quantize_per_channel.default,
  File "/Users/myuan/src/executorch/exir/dialects/_ops.py", line 104, in __getattr__
    raise AttributeError(
AttributeError: '_OpNamespace' 'edge.quantized_decomposed' object has no attribute 'quantize_per_channel'
ERROR conda.cli.main_run:execute(47): conda run python -m examples.models.llama.export_llama -p /Users/myuan/data/stories_110M/params.json -c /Users/myuan/data/stories_110M/stories110M.pt -X --xnnpack-extended-ops -qmode 8da4w -G 128 --use_kv_cache --use_sdpa_with_kv_cache --verbose --output_name test_sdpa_with_kv.pte failed. (See above for error)

Process finished with exit code 1

The same command works fine without the --editable option.
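For context, the operator the lookup fails on, quantized_decomposed.quantize_per_channel, performs per-channel affine quantization of weights, which the 8da4w quantization mode relies on. A minimal pure-Python sketch of that math (illustrative only, not ExecuTorch's actual implementation):

```python
def quantize_per_channel(weights, scales, zero_points, qmin=-128, qmax=127):
    """Affine-quantize each row (channel) of a 2-D weight matrix:
    q = clamp(round(w / scale) + zero_point, qmin, qmax)."""
    out = []
    for row, scale, zp in zip(weights, scales, zero_points):
        out.append([
            max(qmin, min(qmax, round(w / scale) + zp))
            for w in row
        ])
    return out

w = [[0.1, -0.2], [1.0, 2.0]]
q = quantize_per_channel(w, scales=[0.01, 0.5], zero_points=[0, 0])
# q == [[10, -20], [2, 4]]
```

Each channel (row) gets its own scale and zero point, which is what distinguishes this op from per-tensor quantization.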

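The two stacked tracebacks are ordinary Python exception chaining: the edge dialect's __getattr__ in exir/dialects/_ops.py catches the AttributeError raised by torch's _OpNamespace and re-raises its own with `raise ... from e`, which is why the log says "The above exception was the direct cause of the following exception". A simplified sketch of that pattern (class names here are hypothetical, not the real torch/exir classes):

```python
class OpNamespace:
    """Mimics torch._ops._OpNamespace: unknown ops raise AttributeError."""
    def __init__(self, ops):
        self._ops = set(ops)

    def __getattr__(self, name):
        if name not in self._ops:
            raise AttributeError(
                f"'OpNamespace' object has no attribute '{name}'"
            )
        return name

class EdgeDialect:
    """Mimics exir.dialects._ops: wraps the lookup and chains the error."""
    def __init__(self, namespace):
        self._ns = namespace

    def __getattr__(self, op_name):
        try:
            return getattr(self._ns, op_name)
        except AttributeError as e:
            # Produces "The above exception was the direct cause ..."
            raise AttributeError(
                f"'edge' namespace has no attribute '{op_name}'"
            ) from e

edge = EdgeDialect(OpNamespace({"add", "mul"}))
print(edge.add)  # a registered op resolves: prints "add"
# edge.quantize_per_channel would raise the chained AttributeError
```

In the real failure, the underlying cause is that the quantized_decomposed ops never got registered under the editable install, so both lookups fail in sequence.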

Versions

Collecting environment information...
PyTorch version: 2.7.0.dev20250311
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.31.4
Libc version: N/A

Python version: 3.10.16 (main, Dec 11 2024, 10:22:29) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Max

Versions of relevant libraries:
[pip3] executorch==0.6.0a0+9a0c2db
[pip3] flake8==6.1.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==24.4.26
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] torch==2.7.0.dev20250311
[pip3] torchao==0.10.0+git7d879462
[pip3] torchaudio==2.6.0.dev20250311
[pip3] torchsr==1.0.4
[pip3] torchtune==0.5.0
[pip3] torchvision==0.22.0.dev20250311
[conda] executorch 0.6.0a0+9a0c2db pypi_0 pypi
[conda] numpy 2.2.2 pypi_0 pypi
[conda] torch 2.7.0.dev20250311 pypi_0 pypi
[conda] torchao 0.10.0+git7d879462 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250311 pypi_0 pypi
[conda] torchfix 0.6.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchtune 0.5.0 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250311 pypi_0 pypi

cc @larryliu0820 @jathu @lucylq

Metadata

Labels: module: build/install (Issues related to the cmake and buck2 builds, and to installing ExecuTorch), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Status: Done