Your current environment
The output of `python collect_env.py`:
==============================
System Info
OS : Ubuntu 22.04.4 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 4.0.0
Libc version : glibc-2.35
==============================
PyTorch Info
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
Python version : 3.12.10 (main, Apr 9 2025, 08:55:05) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.15.0-142-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
Is CUDA available : True
CUDA runtime version : 12.4.131
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : GPU 0: NVIDIA A800 80GB PCIe
Nvidia driver version : 550.163.01
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 1
Stepping: 6
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 40 MiB (32 instances)
L3 cache: 48 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
[pip3] flashinfer-python==0.2.1.post2+cu124torch2.6
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-cutlass-dsl==4.1.0.dev0
[pip3] nvidia-nccl-cu12==2.27.3
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pyzmq==26.4.0
[pip3] torch==2.8.0
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.55.4
[pip3] triton==3.4.0
[conda] Could not collect
==============================
vLLM Info
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.10.2rc2.dev21+g87d9419fd (git sha: 87d9419fd)
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X 0-31 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
==============================
Environment Variables
NVIDIA_VISIBLE_DEVICES=GPU-95aa552b-d1ea-39a2-62ae-e0e44fc85aaf
NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536
NCCL_VERSION=2.20.5-1
CUDA_DEVICE_SM_LIMIT=0
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_PRODUCT_NAME=CUDA
VLLM_USAGE_SOURCE=production-docker-image
CUDA_VERSION=12.4.0
CUDA_OVERSUBSCRIBE=true
CUDA_DEVICE_MEMORY_LIMIT_0=40000m
CUDA_DEVICE_MEMORY_SHARED_CACHE=/usr/local/vgpu/ec67f715-2b0c-4150-b5c3-a5c83ae9db15.cache
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
🐛 Describe the bug
When the EngineCore process crashes, the APIServer cannot shut down: the monitor thread's finalizer reaches `loop = self.output_socket._get_loop()`, which fails because the monitor thread has no running asyncio event loop (see the minimal reproduction sketch after the traceback).
(APIServer pid=215332) ERROR 09-05 01:15:59 [core_client.py:562] Engine core proc EngineCore_0 died unexpectedly, shutting down client.
(APIServer pid=215332) /usr/lib/python3.12/weakref.py:590: RuntimeWarning: No running event loop. zmq.asyncio should be used from within an asyncio loop.
(APIServer pid=215332) return info.func(*info.args, **(info.kwargs or {}))
(APIServer pid=215332) Exception in thread MPClientEngineMonitor:
(APIServer pid=215332) Traceback (most recent call last):
(APIServer pid=215332) File "/usr/lib/python3.12/threading.py", line 1075, in _bootstrap_inner
(APIServer pid=215332) self.run()
(APIServer pid=215332) File "/usr/lib/python3.12/threading.py", line 1012, in run
(APIServer pid=215332) self._target(*self._args, **self._kwargs)
(APIServer pid=215332) File "/root/code/vllm/vllm/v1/engine/core_client.py", line 565, in monitor_engine_cores
(APIServer pid=215332) _self.shutdown()
(APIServer pid=215332) File "/root/code/vllm/vllm/v1/engine/core_client.py", line 517, in shutdown
(APIServer pid=215332) self._finalizer()
(APIServer pid=215332) File "/usr/lib/python3.12/weakref.py", line 590, in __call__
(APIServer pid=215332) return info.func(*info.args, **(info.kwargs or {}))
(APIServer pid=215332) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=215332) File "/root/code/vllm/vllm/v1/engine/core_client.py", line 350, in __call__
(APIServer pid=215332) loop = self.output_socket._get_loop()
(APIServer pid=215332) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=215332) File "/usr/local/lib/python3.12/dist-packages/zmq/_future.py", line 59, in _get_loop
(APIServer pid=215332) current_loop = self._default_loop()
(APIServer pid=215332) ^^^^^^^^^^^^^^^^^^^^
(APIServer pid=215332) File "/usr/local/lib/python3.12/dist-packages/zmq/asyncio.py", line 116, in _default_loop
(APIServer pid=215332) return asyncio.get_event_loop()
(APIServer pid=215332) ^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=215332) File "/usr/lib/python3.12/asyncio/events.py", line 702, in get_event_loop
(APIServer pid=215332) raise RuntimeError('There is no current event loop in thread %r.'
(APIServer pid=215332) RuntimeError: There is no current event loop in thread 'MPClientEngineMonitor'.
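The failure can apparently be reproduced outside vLLM. A minimal sketch, assuming pyzmq with its private `_get_loop()` helper (the same one shown in the traceback) and Python 3.12:

```python
import threading
import zmq
import zmq.asyncio

ctx = zmq.asyncio.Context()
sock = ctx.socket(zmq.PULL)

def finalize():
    # Mimics the shutdown path: a plain thread (no asyncio loop) asks the
    # async socket for its event loop. _get_loop() falls back to
    # asyncio.get_event_loop(), which on Python 3.12 raises RuntimeError
    # in a non-main thread with no loop set.
    sock._get_loop()

t = threading.Thread(target=finalize, name="MPClientEngineMonitor")
t.start()
t.join()  # thread dies with: RuntimeError: There is no current event loop ...
```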
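One possible direction for a fix (a hedged sketch, not vLLM's actual code; `ClientResources` is a hypothetical stand-in for the client's background-resources object): capture the owning loop while still on the asyncio thread, and have the finalizer hand the close over via `call_soon_threadsafe` instead of calling `_get_loop()` from the monitor thread.

```python
import asyncio
import zmq
import zmq.asyncio

class ClientResources:
    """Hypothetical sketch: remember which loop owns the socket so a
    finalizer running on a foreign thread never has to look it up."""

    def __init__(self) -> None:
        self.ctx = zmq.asyncio.Context()
        self.output_socket = self.ctx.socket(zmq.PULL)
        # Must be constructed on the asyncio thread: capture the owning loop.
        self.loop = asyncio.get_running_loop()

    def close(self) -> None:
        # Callable from any thread (e.g. MPClientEngineMonitor): schedule
        # the real close on the loop that owns the socket rather than
        # querying for a loop here. A production version would also need
        # to handle the loop already being closed.
        self.loop.call_soon_threadsafe(self.output_socket.close, 0)
```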
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.