🐛 Describe the bug
Unable to convert a PrunaAI-quantized Llama 3.2 3B model to ExecuTorch. Please see below. Thank you!
Step 1
Set up the pruna library for CPU (Linux x86).
Prerequisites: https://docs.pruna.ai/en/latest/setup/pip.html
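Per the linked page, a CPU-only setup should reduce to installing the package from PyPI:

```
pip install pruna
```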
Step 2
Here is a script that quantizes the Llama 3.2 3B model using the PrunaAI library:
https://gist.github.com/arjunprotego/dcc6dda3ea264c4d0d7e4731daa704a2
Running this produces model.pt.
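For reference, a minimal sketch of what that step does, using pruna's documented `smash`/`SmashConfig` API. The gist above is authoritative; in particular, the `torch_dynamic` quantizer name is only an assumption based on the `../torch_dynamic/` directory used in the later steps.

```python
# Minimal sketch of the quantization step; the linked gist is authoritative.
import torch
from transformers import AutoModelForCausalLM
from pruna import SmashConfig, smash

# Requires Hugging Face access to the gated Llama 3.2 repo.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")

smash_config = SmashConfig()
# Assumption: the gist uses pruna's torch_dynamic quantizer, going by the
# ../torch_dynamic/ directory name in the later steps.
smash_config["quantizer"] = "torch_dynamic"

smashed_model = smash(model=model, smash_config=smash_config)

# Assumption: the gist writes the smashed model out with torch.save.
torch.save(smashed_model, "model.pt")
```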
Step 3
I then convert that to model.pth using the script below:
https://gist.github.com/arjunprotego/242fa582dce9285bfd3706020e1b6041
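Roughly, the conversion amounts to extracting a plain state dict from the saved model object and re-saving it. A sketch, assuming model.pt holds the full smashed model object from Step 2; any renaming of keys to the layout export_llama expects happens in the gist and is omitted here:

```python
# Minimal sketch of the .pt -> .pth step; the linked gist is authoritative.
import torch

# Assumption: model.pt holds the full smashed model object from Step 2.
smashed_model = torch.load("model.pt", map_location="cpu", weights_only=False)

# export_llama's --checkpoint expects a state dict on disk; mapping key
# names to the Llama checkpoint layout is done in the gist, not shown here.
torch.save(smashed_model.state_dict(), "model.pth")
```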
Step 4
I then try to export to ExecuTorch with:
```
python -m examples.models.llama.export_llama \
  --model "llama3_2" \
  --checkpoint ../torch_dynamic/model.pth \
  --params ../torch_dynamic/params.json \
  -kv --use_sdpa_with_kv_cache -X -d fp32 \
  --xnnpack-extended-ops \
  --preq_mode 8da4w_output_8da8w \
  --preq_group_size 32 \
  --max_seq_length 2048 \
  --max_context_length 2048 \
  --preq_embedding_quantize 8,0 \
  --metadata '{"get_bos_id":128000, "get_eos_ids":[128009, 128001]}' \
  --output_name "llama3_2_torch_dynamic.pte"
```
But I get this error:
```
Traceback (most recent call last):
  File "/home/arjun/miniconda3/envs/executorch_qlora/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/arjun/miniconda3/envs/executorch_qlora/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/arjun/executorch/examples/models/llama/export_llama.py", line 34, in <module>
    main()  # pragma: no cover
  File "/home/arjun/executorch/examples/models/llama/export_llama.py", line 30, in main
    export_llama(args)
  File "/home/arjun/executorch/examples/models/llama/export_llama_lib.py", line 541, in export_llama
    builder = _export_llama(args)
  File "/home/arjun/executorch/examples/models/llama/export_llama_lib.py", line 683, in _export_llama
    builder_exported = _prepare_for_llama_export(args).export()
  File "/home/arjun/executorch/examples/models/llama/export_llama_lib.py", line 573, in _prepare_for_llama_export
    _load_llama_model(
  File "/home/arjun/executorch/examples/models/llama/export_llama_lib.py", line 956, in _load_llama_model
    EagerModelFactory.create_model(
  File "/home/arjun/executorch/examples/models/model_factory.py", line 44, in create_model
    model = model_class(**kwargs)
  File "/home/arjun/executorch/examples/models/llama/model.py", line 126, in __init__
    self.dtype = get_checkpoint_dtype(checkpoint)
  File "/home/arjun/miniconda3/envs/executorch_qlora/lib/python3.10/site-packages/executorch/examples/models/checkpoint.py", line 64, in get_checkpoint_dtype
    mismatched_dtypes = [
  File "/home/arjun/miniconda3/envs/executorch_qlora/lib/python3.10/site-packages/executorch/examples/models/checkpoint.py", line 67, in <listcomp>
    if value.dtype != dtype
AttributeError: 'torch.dtype' object has no attribute 'dtype'
```
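From the traceback, `get_checkpoint_dtype` reads `.dtype` on every value in the checkpoint dict, so the saved model.pth apparently contains at least one value that is itself a `torch.dtype` object (i.e. metadata) rather than a tensor. A minimal workaround sketch, assuming model.pth is a flat dict, is to drop non-tensor entries before exporting:

```python
import torch

checkpoint = torch.load("model.pth", map_location="cpu", weights_only=False)

# Keep only tensor values; a stray metadata entry (e.g. a torch.dtype) is
# what trips get_checkpoint_dtype, which assumes every value has .dtype.
clean = {k: v for k, v in checkpoint.items() if isinstance(v, torch.Tensor)}
print("dropped non-tensor keys:", sorted(set(checkpoint) - set(clean)))

torch.save(clean, "model_clean.pth")
```

Re-running the export with `--checkpoint` pointing at model_clean.pth should at least get past this AttributeError.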
Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-31-cloud-amd64-x86_64-with-glibc2.36
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
BogoMIPS: 4400.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==2.2.3
[conda] numpy 2.2.3 pypi_0 pypi
```