
[Bug]: Error in inspecting model architecture 'Qwen3ForCausalLM' #20122

Open
@Paroxetinez

Description


Your current environment

The output of python collect_env.py
==============================
        System Info
==============================
OS                           : Ubuntu 22.04.1 LTS (x86_64)
GCC version                  : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version                : Could not collect
CMake version                : version 4.0.3
Libc version                 : glibc-2.35

==============================
       PyTorch Info
==============================
PyTorch version              : 2.7.0+cu126
Is debug build               : False
CUDA used to build PyTorch   : 12.6
ROCM used to build PyTorch   : N/A

==============================
      Python Environment
==============================
Python version               : 3.10.0 (default, Mar  3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform              : Linux-6.8.0-60-generic-x86_64-with-glibc2.35

==============================
       CUDA / GPU Info
==============================
Is CUDA available            : True
CUDA runtime version         : Could not collect
CUDA_MODULE_LOADING set to   : LAZY
GPU models and configuration : 
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA T400
GPU 2: NVIDIA A100-SXM4-80GB

Nvidia driver version        : 575.57.08
cuDNN version                : Could not collect
HIP runtime version          : N/A
MIOpen runtime version       : N/A
Is XNNPACK available         : True

==============================
          CPU Info
==============================
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        46 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               128
On-line CPU(s) list:                  0-127
Vendor ID:                            GenuineIntel
Model name:                           Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz
CPU family:                           6
Model:                                106
Thread(s) per core:                   2
Core(s) per socket:                   32
Socket(s):                            2
Stepping:                             6
CPU max MHz:                          3500.0000
CPU min MHz:                          800.0000
BogoMIPS:                             5800.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            3 MiB (64 instances)
L1i cache:                            2 MiB (64 instances)
L2 cache:                             80 MiB (64 instances)
L3 cache:                             108 MiB (2 instances)
NUMA node(s):                         2
NUMA node0 CPU(s):                    0-31,64-95
NUMA node1 CPU(s):                    32-63,96-127
Vulnerability Gather data sampling:   Mitigation; Microcode
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

==============================
Versions of relevant libraries
==============================
[pip3] No relevant packages
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-cublas-cu12        12.6.4.1                 pypi_0    pypi
[conda] nvidia-cuda-cupti-cu12    12.6.80                  pypi_0    pypi
[conda] nvidia-cuda-nvrtc-cu12    12.6.77                  pypi_0    pypi
[conda] nvidia-cuda-runtime-cu12  12.6.77                  pypi_0    pypi
[conda] nvidia-cudnn-cu12         9.5.1.17                 pypi_0    pypi
[conda] nvidia-cufft-cu12         11.3.0.4                 pypi_0    pypi
[conda] nvidia-cufile-cu12        1.11.1.6                 pypi_0    pypi
[conda] nvidia-curand-cu12        10.3.7.77                pypi_0    pypi
[conda] nvidia-cusolver-cu12      11.7.1.2                 pypi_0    pypi
[conda] nvidia-cusparse-cu12      12.5.4.2                 pypi_0    pypi
[conda] nvidia-cusparselt-cu12    0.6.3                    pypi_0    pypi
[conda] nvidia-ml-py              12.575.51                pypi_0    pypi
[conda] nvidia-nccl-cu12          2.26.2                   pypi_0    pypi
[conda] nvidia-nvjitlink-cu12     12.6.85                  pypi_0    pypi
[conda] nvidia-nvtx-cu12          12.6.77                  pypi_0    pypi
[conda] pynvml                    12.0.0                   pypi_0    pypi
[conda] pyzmq                     26.4.0                   pypi_0    pypi
[conda] torch                     2.7.0                    pypi_0    pypi
[conda] torchaudio                2.7.0                    pypi_0    pypi
[conda] torchvision               0.22.0                   pypi_0    pypi
[conda] transformers              4.52.4                   pypi_0    pypi
[conda] triton                    3.3.0                    pypi_0    pypi

==============================
         vLLM Info
==============================
ROCM Version                 : Could not collect
Neuron SDK Version           : N/A
vLLM Version                 : 0.9.1
vLLM Build Flags:
  CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0    GPU1    GPU2    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    SYS     0-31,64-95      0               N/A
GPU1    NODE     X      SYS     0-31,64-95      0               N/A
GPU2    SYS     SYS      X      32-63,96-127    1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

==============================
     Environment Variables
==============================
LD_LIBRARY_PATH=/tmp/.mount_cursorQlmb6N/usr/lib:
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY

🐛 Describe the bug

The example usage from the official site:

from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
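
For context, the failure itself comes from LLaMA-Factory's scripts/vllm_infer.py (see the traceback below), which ends up constructing an LLM for a Qwen3 checkpoint. A minimal sketch of an equivalent direct call is shown here; the model path is a hypothetical placeholder, since the real checkpoint path is not visible in the logs:

from vllm import LLM, SamplingParams

# Hypothetical local checkpoint path; the actual path used in my run is elided
# in the logs below. Any Qwen3 model that maps to the Qwen3ForCausalLM
# architecture should go through the same registry inspection code path.
llm = LLM(model="/path/to/qwen3-checkpoint")

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)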

[INFO|image_processing_auto.py:315] 2025-06-26 17:40:50,424 >> Could not locate the image processor configuration file, will try to use the model config instead.
ERROR 06-26 17:40:54 [registry.py:367] Error in inspecting model architecture 'Qwen3ForCausalLM'
ERROR 06-26 17:40:54 [registry.py:367] Traceback (most recent call last):
ERROR 06-26 17:40:54 [registry.py:367] File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 365, in _try_inspect_model_cls
ERROR 06-26 17:40:54 [registry.py:367] return model.inspect_model_cls()
ERROR 06-26 17:40:54 [registry.py:367] File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 336, in inspect_model_cls
ERROR 06-26 17:40:54 [registry.py:367] return _run_in_subprocess(
ERROR 06-26 17:40:54 [registry.py:367] File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 604, in _run_in_subprocess
ERROR 06-26 17:40:54 [registry.py:367] with open(output_filepath, "rb") as f:
ERROR 06-26 17:40:54 [registry.py:367] FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpj4bcxlr6/registry_output.tmp'
Traceback (most recent call last):
  File "/data1/Kechen_Li/Sos/LLaMA-Factory-main/scripts/vllm_infer.py", line 141, in <module>
    fire.Fire(vllm_infer)
  File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/data1/Kechen_Li/Sos/LLaMA-Factory-main/scripts/vllm_infer.py", line 129, in vllm_infer
    results = LLM(**engine_args).generate(inputs, sampling_params, lora_request=lora_request)
  File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 243, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 494, in from_engine_args
    vllm_config = engine_args.create_engine_config(usage_context)
  File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1018, in create_engine_config
    model_config = self.create_model_config()
  File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 910, in create_model_config
    return ModelConfig(
  File "/home/jitianbo/miniconda3/envs/llamafactory/lib/python3.10/site-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
    s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
  Value error, Model architectures ['Qwen3ForCausalLM'] failed to be inspected. Please check the logs for more details. [type=value_error, input_value=ArgsKwargs((), {'model': ..., 'model_impl': 'auto'}), input_type=ArgsKwargs]
    For further information visit https://errors.pydantic.dev/2.10/v/value_error
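
In case it helps triage: the FileNotFoundError suggests the registry's subprocess inspection crashed before it could write registry_output.tmp, which hides the underlying exception. A small sketch that might surface the real error, assuming the Qwen3 model module lives at vllm.model_executor.models.qwen3 (which appears to be the case for vLLM 0.9.x; adjust if your install differs):

import traceback

# Importing the model module directly in the same environment tends to raise
# the exception that the subprocess-based inspection swallows.
try:
    from vllm.model_executor.models.qwen3 import Qwen3ForCausalLM  # noqa: F401
    print("Qwen3ForCausalLM imported fine; the failure is likely in the inspection subprocess itself.")
except Exception:
    traceback.print_exc()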

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
