Description
Your current environment
The output of python collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 3.22.1
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.7.0+cu126
Is debug build : False
CUDA used to build PyTorch : 12.6
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.15.0-144-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
Nvidia driver version : 570.133.20
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 28
On-line CPU(s) list: 0-27
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 6
BogoMIPS: 4589.21
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 672 KiB (14 instances)
L1i cache: 448 KiB (14 instances)
L2 cache: 17.5 MiB (14 instances)
L3 cache: 54 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-27
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Indirect target selection: Mitigation; Aligned branch/return thunks
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-cufile-cu12==1.11.1.6
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pyzmq==27.0.0
[pip3] torch==2.7.0
[pip3] torchaudio==2.7.0
[pip3] torchvision==0.22.0
[pip3] transformers==4.53.0
[pip3] triton==3.3.0
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.9.2.dev295+g4ab3ac285 (git sha: 4ab3ac285)
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV2 0-27 0 N/A
GPU1 NV2 X 0-27 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
==============================
Environment Variables
==============================
CUDA_HOME=/usr/local/cuda-12.8
LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
🐛 Describe the bug
When running inference with Qwen2.5-72B-Instruct-AWQ, the model works fine on a single A100 GPU. However, when I scale to two A100 GPUs with tensor parallelism, I encounter the following error: RuntimeError: TopPSamplingFromProbs failed with error code an illegal memory access was encountered.
Start command:
#!/bin/bash
# Set environment variables
export CUDA_VISIBLE_DEVICES=0,1
# Activate the virtual environment
source /home/lucas/envs/nlp-vllm/bin/activate
# Count the number of GPUs
GPU_COUNT=$(echo "$CUDA_VISIBLE_DEVICES" | tr ',' '\n' | grep -c '[0-9]')
# MODEL=/data/modelscope/Qwen3-32B-AWQ
MODEL=/data/modelscope/Qwen2.5-72B-Instruct-AWQ
# Start the vLLM serve service
vllm serve \
${MODEL} \
--port 18101 \
--served-model-name vllm-text \
--max-model-len 8192 \
--tensor-parallel-size ${GPU_COUNT} \
--gpu-memory-utilization 0.9
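The error shows up once inference requests start hitting the server. A request of roughly the following shape is enough to trigger it on the two-GPU setup; the prompt and sampling parameters below are illustrative, not the exact original request, but a non-default top_p is used so the top-k/top-p sampling path in the traceback is exercised:
# Example chat completion request against the OpenAI-compatible endpoint started above
curl http://localhost:18101/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "vllm-text",
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "temperature": 0.7,
        "top_p": 0.8,
        "max_tokens": 256
      }'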
The error message:
"""
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] WorkerProc hit an exception.
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] Traceback (most recent call last):
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/executor/multiproc_executor.py", line 517, in worker_busy_loop
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] output = func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/worker/gpu_worker.py", line 308, in execute_model
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] output = self.model_runner.execute_model(scheduler_output,
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/worker/gpu_model_runner.py", line 1431, in execute_model
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] sampler_output = self.sampler(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._call_impl(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return forward_call(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/sampler.py", line 52, in forward
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] sampled = self.sample(logits, sampling_metadata)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/sampler.py", line 118, in sample
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] random_sampled = self.topk_topp_sampler(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._call_impl(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return forward_call(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/ops/topk_topp_sampler.py", line 104, in forward_cuda
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return flashinfer_sample(logits, k, p, generators)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/ops/topk_topp_sampler.py", line 290, in flashinfer_sample
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] next_token_ids = flashinfer.sampling.top_k_top_p_sampling_from_logits(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 903, in top_k_top_p_sampling_from_logits
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return top_p_sampling_from_probs(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 643, in top_p_sampling_from_probs
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return get_sampling_module().top_p_sampling_from_probs(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 131, in top_p_sampling_from_probs
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] module.top_p_sampling_from_probs.default(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/_ops.py", line 756, in call
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._op(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] WorkerProc hit an exception.
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] Traceback (most recent call last):
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/executor/multiproc_executor.py", line 517, in worker_busy_loop
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] output = func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/worker/gpu_worker.py", line 308, in execute_model
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] output = self.model_runner.execute_model(scheduler_output,
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/worker/gpu_model_runner.py", line 1431, in execute_model
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] sampler_output = self.sampler(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._call_impl(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return forward_call(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/sampler.py", line 52, in forward
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] sampled = self.sample(logits, sampling_metadata)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/sampler.py", line 118, in sample
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] random_sampled = self.topk_topp_sampler(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._call_impl(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return forward_call(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/ops/topk_topp_sampler.py", line 104, in forward_cuda
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return flashinfer_sample(logits, k, p, generators)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/ops/topk_topp_sampler.py", line 290, in flashinfer_sample
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] next_token_ids = flashinfer.sampling.top_k_top_p_sampling_from_logits(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 903, in top_k_top_p_sampling_from_logits
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return top_p_sampling_from_probs(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 643, in top_p_sampling_from_probs
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return get_sampling_module().top_p_sampling_from_probs(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 131, in top_p_sampling_from_probs
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] module.top_p_sampling_from_probs.default(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/_ops.py", line 756, in call
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._op(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] RuntimeError: TopPSamplingFromProbs failed with error code an illegal memory access was encountered
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] Traceback (most recent call last):
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/executor/multiproc_executor.py", line 517, in worker_busy_loop
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] output = func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/worker/gpu_worker.py", line 308, in execute_model
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] output = self.model_runner.execute_model(scheduler_output,
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/worker/gpu_model_runner.py", line 1431, in execute_model
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] sampler_output = self.sampler(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._call_impl(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return forward_call(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/sampler.py", line 52, in forward
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] sampled = self.sample(logits, sampling_metadata)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/sampler.py", line 118, in sample
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] random_sampled = self.topk_topp_sampler(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._call_impl(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return forward_call(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/ops/topk_topp_sampler.py", line 104, in forward_cuda
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return flashinfer_sample(logits, k, p, generators)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/ops/topk_topp_sampler.py", line 290, in flashinfer_sample
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] next_token_ids = flashinfer.sampling.top_k_top_p_sampling_from_logits(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 903, in top_k_top_p_sampling_from_logits
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return top_p_sampling_from_probs(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 643, in top_p_sampling_from_probs
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return get_sampling_module().top_p_sampling_from_probs(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 131, in top_p_sampling_from_probs
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] module.top_p_sampling_from_probs.default(
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] RuntimeError: TopPSamplingFromProbs failed with error code an illegal memory access was encountered
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] Traceback (most recent call last):
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/executor/multiproc_executor.py", line 517, in worker_busy_loop
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] output = func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/worker/gpu_worker.py", line 308, in execute_model
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] output = self.model_runner.execute_model(scheduler_output,
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return func(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/worker/gpu_model_runner.py", line 1431, in execute_model
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] sampler_output = self.sampler(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._call_impl(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return forward_call(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/sampler.py", line 52, in forward
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] sampled = self.sample(logits, sampling_metadata)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/sampler.py", line 118, in sample
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] random_sampled = self.topk_topp_sampler(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._call_impl(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return forward_call(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/ops/topk_topp_sampler.py", line 104, in forward_cuda
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return flashinfer_sample(logits, k, p, generators)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/workspace/github/vllm/vllm/v1/sample/ops/topk_topp_sampler.py", line 290, in flashinfer_sample
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] next_token_ids = flashinfer.sampling.top_k_top_p_sampling_from_logits(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 903, in top_k_top_p_sampling_from_logits
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return top_p_sampling_from_probs(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 643, in top_p_sampling_from_probs
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return get_sampling_module().top_p_sampling_from_probs(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/flashinfer/sampling.py", line 131, in top_p_sampling_from_probs
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] module.top_p_sampling_from_probs.default(
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/_ops.py", line 756, in call
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._op(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522] RuntimeError: TopPSamplingFromProbs failed with error code an illegal memory access was encountered
2025-06-27 17:14:23: (VllmWorker rank=1 pid=776294) ERROR 06-27 17:14:23 [multiproc_executor.py:522]
2025-06-27 17:14:23: ERROR 06-27 17:14:23 [multiproc_executor.py:522] File "/home/lucas/envs/nlp-vllm/lib/python3.12/site-packages/torch/_ops.py", line 756, in call
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] return self._op(*args, **kwargs)
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] ^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522] RuntimeError: TopPSamplingFromProbs failed with error code an illegal memory access was encountered
2025-06-27 17:14:23: (VllmWorker rank=0 pid=776293) ERROR 06-27 17:14:23 [multiproc_executor.py:522]
"""