Questions about the inference performance of the GPTQ model #9240

Open
Rssevenyu opened this issue Oct 10, 2024 · 4 comments
Labels
performance Performance-related issues

Comments


Rssevenyu commented Oct 10, 2024

Why is it that, when running inference with a quantized model, the improvement in TTFT is small while overall inference efficiency improves a lot? And why is the inference efficiency of GPTQ Marlin worse than plain GPTQ in my tests? What is the reason?
Version Information:
vLLM Version: 0.6.2

Start-up Commands:
Non-quantized model:
python3 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 7807 --model /mnt/home/Qwen1.5_32B_Chat --trust-remote-code --served-model-name Qwen --gpu-memory-utilization 0.9 --tensor-parallel-size 2 --enforce-eager --max-model-len 8192 --enable-prefix-caching

Quantized model using GPTQ (without GPTQ Marlin kernel):
python3 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 7807 --model /mnt/home/Qwen1.5-32B-Chat-GPTQ-Int4 --trust-remote-code --served-model-name Qwen --gpu-memory-utilization 0.9 --tensor-parallel-size 2 --enforce-eager --max-model-len 8192 --enable-prefix-caching --quantization gptq

Quantized model using GPTQ Marlin kernel (automatic mode without specifying --quantization gptq):
python3 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 7807 --model /mnt/home/Qwen1.5-32B-Chat-GPTQ-Int4 --trust-remote-code --served-model-name Qwen --gpu-memory-utilization 0.9 --tensor-parallel-size 2 --enforce-eager --max-model-len 8192 --enable-prefix-caching
Test Setup:
The test script uses 4 concurrent requests with the same prompt for evaluation.
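
For reference, a minimal sketch of the kind of concurrency test described above (not the exact script used; the prompt text is a placeholder, and the port and served model name are taken from the launch commands):

# Sketch: fire 4 concurrent completion requests at the OpenAI-compatible
# server launched above and report per-request latency.
import concurrent.futures
import time

import requests

URL = "http://localhost:7807/v1/completions"
PAYLOAD = {
    "model": "Qwen",
    "prompt": "Write a short summary of the history of the Great Wall.",  # placeholder prompt
    "max_tokens": 256,
}

def one_request(_):
    start = time.perf_counter()
    resp = requests.post(URL, json=PAYLOAD, timeout=300)
    resp.raise_for_status()
    return time.perf_counter() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    latencies = list(pool.map(one_request, range(4)))

print("per-request latency (s):", [round(t, 2) for t in latencies])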

Metric Outputs:
Non-quantized Model:
Time to First Token (TTFT):
vllm:time_to_first_token_seconds_sum{model_name="Qwen"} 2.931025266647339
Time Per Output Token:
vllm:time_per_output_token_seconds_sum{model_name="Qwen"} 6.13854455947876

Quantized Model using GPTQ:
Time to First Token (TTFT):
vllm:time_to_first_token_seconds_sum{model_name="Qwen"} 2.7571163177490234
Time Per Output Token:
vllm:time_per_output_token_seconds_sum{model_name="Qwen"} 3.8764026165008545

Quantized Model using GPTQ Marlin:
Time to First Token (TTFT):
vllm:time_to_first_token_seconds_sum{model_name="Qwen"} 2.9693307876586914
Time Per Output Token:
vllm:time_per_output_token_seconds_sum{model_name="Qwen"} 4.670741319656372
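
Note that these vllm:*_seconds_sum series are cumulative histogram sums since server start, so to compare runs fairly they should be divided by the matching _count series (and the server restarted between runs). A minimal sketch of that calculation, assuming the default /metrics endpoint on the same port:

# Sketch: derive mean TTFT and mean time-per-output-token by pairing each
# cumulative _sum with its _count. The port is taken from the launch commands.
import re

import requests

text = requests.get("http://localhost:7807/metrics", timeout=10).text

def first_value(metric):
    # Return the first sample for the given metric name, ignoring label values.
    m = re.search(rf"^{re.escape(metric)}\{{[^}}]*\}} (\S+)$", text, re.M)
    return float(m.group(1)) if m else float("nan")

for name in ("vllm:time_to_first_token_seconds",
             "vllm:time_per_output_token_seconds"):
    total = first_value(name + "_sum")
    count = first_value(name + "_count")
    print(f"{name}: mean = {total / count:.3f} s over {count:.0f} samples")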

@Rssevenyu Rssevenyu added the performance Performance-related issues label Oct 10, 2024
@LucasWilkinson
Contributor

@Rssevenyu can you run python collect_env.py please? A lot of this will depend on the device you are running on.

@Rssevenyu
Author

> @Rssevenyu can you run python collect_env.py please? A lot of this will depend on the device you are running on.

Of course, here is my environment information:
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.31

Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-193.14.2.el8_2.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40

Nvidia driver version: 535.104.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Gold 6430
Stepping: 8
Frequency boost: enabled
CPU MHz: 2599.999
CPU max MHz: 2101.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 128 MiB
L3 cache: 120 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.5.0
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==23.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.2
[pip3] triton==3.0.0
[conda] _anaconda_depends 2023.09 py311_mkl_1
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] numpydoc 1.5.0 py311h06a4308_0
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-ml-py 12.560.30 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pyzmq 23.2.0 py311h6a678d5_0
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] transformers 4.45.2 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.1.dev238+ge2c6e0a82
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X SYS NODE NODE PIX PIX SYS SYS 0-31,64-95 0 N/A
GPU1 SYS X SYS SYS SYS SYS PIX PIX 32-63,96-127 1 N/A
NIC0 NODE SYS X PIX NODE NODE SYS SYS
NIC1 NODE SYS PIX X NODE NODE SYS SYS
NIC2 PIX SYS NODE NODE X PIX SYS SYS
NIC3 PIX SYS NODE NODE PIX X SYS SYS
NIC4 SYS PIX SYS SYS SYS SYS X PIX
NIC5 SYS PIX SYS SYS SYS SYS PIX X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5

@WelkinNi

I'm encountering another issue: vLLM warns that GPTQ is not fully optimized when running GPTQ models. Additionally, on my machine, the GPTQ model does not seem to be faster than the non-quantized model.

@ShiningMaker

@Rssevenyu Same question here; have you figured out the reason yet?
