Support Deepseek-V2 #4650

Merged: 19 commits, merged Jun 28, 2024

Conversation

@zwd003 (Contributor) commented May 7, 2024

Description:

This PR introduces support for the recently released DeepSeek-V2 model by DeepSeek-AI.

Key Updates:

  • Model Integration: Integrated the DeepSeek-V2 model developed by the DeepSeek-AI team, bringing its advanced natural language processing capabilities to vLLM.

Related Resources:

Todo:

  • Efficient Inference Mode: Implement the efficient inference mode described in the paper.

We look forward to community feedback and suggestions to help us improve and refine the DeepSeek-V2 integration and its inference implementation.

Testing

from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "User: The future of AI is? Assistant:"
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.0, top_p=1, max_tokens=32)

# Create an LLM.
llm = LLM(model="deepseek-ai/DeepSeek-V2-Chat", tensor_parallel_size=8, max_num_seqs=1,
          max_model_len=1024, trust_remote_code=True, enforce_eager=True)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
Prompt: 'User: The future of AI is? Assistant:', Generated text: ' The future of AI, or Artificial Intelligence, is a topic of much speculation and debate. AI has the potential to revolutionize many aspects of our lives, from'

Note: Currently, only the inference method using the Multi-Head Attention (MHA) approach has been implemented, and the efficient inference mode mentioned in the paper has not yet been realized.

@guanjingyu

ERROR 05-08 20:22:08 worker_base.py:145] ValueError: Model architectures ['DeepseekV2ForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'CohereForCausalLM', 'DbrxForCausalLM', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'JAISLMHeadModel', 'LlamaForCausalLM', 'LlavaForConditionalGeneration', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'MiniCPMForCausalLM', 'OlmoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'XverseForCausalLM']

@guanjingyu

It seems the model architecture is not supported in vLLM.

@rkooo567 (Collaborator) commented May 8, 2024

Currently, only the inference method using the Multi-Head Attention (MHA) approach has been implemented, and the efficient inference mode mentioned in the paper has not yet been realized.

What's the reason it is not supported in this PR?

@HappyLynn

Hi, with only MHA, is it possible to reach max_model_len = 128k? In my test, only about 12k works.

@zhyncs (Contributor) commented May 10, 2024

What's the reason it is not supported in this PR?

The internal inference implementation supports MLA. The vLLM implementation here is more about supporting the model quickly and matching the model parameters to the code, so its efficiency for LLM serving is not yet high. I think the current PR could be reviewed and merged quickly, and the community can consider implementing an integrated, optimized version afterwards.

@zhyncs (Contributor) commented May 10, 2024

Hi @zwd003, could you merge the latest main branch and fix the conflicts? Thanks.

@younggee123456

May I ask whether support for MLA is currently being developed?

@zwd003 reopened this May 11, 2024
@zwd003 (Contributor, Author) commented May 11, 2024

Hi @zwd003, could you merge the latest main branch and fix the conflicts? Thanks.

ok

@lyl0404 commented May 13, 2024

Hi @zwd003, this error occurred during deployment. How can I solve it? Thanks!

(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] File "/opt/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] TypeError: fused_moe() got an unexpected keyword argument 'num_expert_group'

@haiasd commented May 13, 2024

Hi @zwd003, this error occurred during deployment. How can I solve it? Thanks!

(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] File "/opt/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] TypeError: fused_moe() got an unexpected keyword argument 'num_expert_group'

I encountered the same error

@haiasd commented May 13, 2024

Hi @zwd003, this error occurred during deployment. How can I solve it? Thanks!

(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] File "/opt/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] TypeError: fused_moe() got an unexpected keyword argument 'num_expert_group'

git checkout 5688e58ca2797a34bd56e75c045d41be6aca1e2b solved this problem

@lyl0404 commented May 13, 2024

Hi @zwd003, this error occurred during deployment. How can I solve it? Thanks!
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] File "/opt/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] TypeError: fused_moe() got an unexpected keyword argument 'num_expert_group'

git checkout 5688e58ca2797a34bd56e75c045d41be6aca1e2b solved this problem

Thanks! :D

@zhangyu68

Hi @zwd003, could you merge the latest main branch and fix the conflicts? Thanks.

ok

Hello, I encountered this error when the QPS was increased to 2.

[' 根据指令"周日晚上",我们将按照步骤进行处理:\n\n1. 选择']
INFO:werkzeug:172.16.178.41 - - [13/May/2024 12:31:52] "POST /get_data HTTP/1.1" 200 -
Processed prompts:   0%|                                                                                                                                                                            | 0/1 [00:00<?, ?it/s](RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] Error executing method execute_model. This might cause deadlock in distributed execution.                                                        | 0/2 [00:00<?, ?it/s]
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] Traceback (most recent call last):
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/huj11@xiaopeng.com/code/vllm/vllm/worker/worker_base.py", line 137, in execute_method
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return executor(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/huj11@xiaopeng.com/code/vllm/vllm/worker/worker.py", line 249, in execute_model
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     output = self.model_runner.execute_model(seq_group_metadata_list,
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/huj11@xiaopeng.com/code/vllm/vllm/worker/model_runner.py", line 787, in execute_model
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     ) = self.prepare_input_tensors(seq_group_metadata_list)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/huj11@xiaopeng.com/code/vllm/vllm/worker/model_runner.py", line 729, in prepare_input_tensors
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     input_tokens = metadata_dict.pop("input_tokens")
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] KeyError: 'input_tokens'
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] Error executing method execute_model. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] Traceback (most recent call last):
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/huj11@xiaopeng.com/code/vllm/vllm/worker/worker_base.py", line 137, in execute_method
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return executor(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/huj11@xiaopeng.com/code/vllm/vllm/worker/worker.py", line 237, in execute_model
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     data = broadcast_tensor_dict(src=0)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/huj11@xiaopeng.com/code/vllm/vllm/distributed/communication_op.py", line 216, in broadcast_tensor_dict
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     torch.distributed.broadcast_object_list(recv_metadata_list,
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 2674, in broadcast_object_list
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     object_list[i] = _tensor_to_object(obj_view, obj_size, group)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 2362, in _tensor_to_object
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return _unpickler(io.BytesIO(buf)).load()
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] _pickle.UnpicklingError: invalid load key, '\xea'.
(RayWorkerWrapper pid=1542773) INFO 05-13 12:26:25 model_runner.py:175] Loading model weights took 56.1087 GB [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Connected all trees [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512 [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Using non-device net plugin version 0 [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO comm 0x55f8f5a608b0 rank 7 nranks 8 cudaDev 7 nvmlDev 7 busId b3000 commId 0x7b5f29ff7a9fb9f5 - Init START [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO NVLS multicast support is not available on dev 7 [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO comm 0x55f8f5a608b0 rank 7 nRanks 8 nNodes 1 localRanks 8 localRank 7 MNNVL 0 [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO 16 coll channels, 0 collnet channels, 0 nvls channels, 16 p2p channels, 16 p2p channels per peer [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO comm 0x55f8f5a608b0 rank 7 nranks 8 cudaDev 7 nvmlDev 7 busId b3000 commId 0x7b5f29ff7a9fb9f5 - Init COMPLETE [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2076947 [7] NCCL INFO Channel 15/1 : 7[7] -> 0[0] via P2P/CUMEM/read [repeated 336x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Connected all rings [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Using network IB [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO bootstrapSplit: comm 0x55f8f5a608b0 parent 0x55f8e5006f90 rank 7 nranks 8 color -934961569 key 7 prev 6 next 0 - DONE [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Setting affinity for GPU 7 to ffffffff,00000000,ffffffff,00000000 [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Trees [0] -1/-1/-1->7->6 [1] -1/-1/-1->7->6 [2] -1/-1/-1->7->6 [3] -1/-1/-1->7->6 [4] -1/-1/-1->7->6 [5] -1/-1/-1->7->6 [6] -1/-1/-1->7->6 [7] -1/-1/-1->7->6 [8] -1/-1/-1->7->6 [9] -1/-1/-1->7->6 [10] -1/-1/-1->7->6 [11] -1/-1/-1->7->6 [12] -1/-1/-1->7->6 [13] -1/-1/-1->7->6 [14] -1/-1/-1->7->6 [15] -1/-1/-1->7->6 [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO P2P Chunksize set to 524288 [repeated 6x across cluster]

@ftgreat (Contributor) commented May 14, 2024

Could you point me to the lines that handle KV compression? Thanks.

@fxgeoffrey

The following error is reported when loading the model:

Cache shape torch.Size([163840, 64]) [repeated 6x across cluster]
INFO 05-14 22:41:26 model_runner.py:166] Loading model weights took 56.1087 GB
/tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint64_array’:
/tmp/tmpw9q1ie7x/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
for (Py_ssize_t i = 0; i < len; i++) {
^
/tmp/tmpw9q1ie7x/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code
/tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint32_array’:
/tmp/tmpw9q1ie7x/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
for (Py_ssize_t i = 0; i < len; i++) {
^
ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last):
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method
ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks
ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run
ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model
ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward
ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward
ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward
ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe
ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts
ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid](
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in
ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run
ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr
ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj
ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver
ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init
ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init
ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build
ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call
ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd)
ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpw9q1ie7x/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpw9q1ie7x', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpw9q1ie7x/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1.
python-BaseException
Traceback (most recent call last):
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 146, in execute_method
raise e
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method
return executor(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks
self.model_runner.profile_run()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run
self.execute_model(seqs, kv_caches)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model
hidden_states = model_executable(**execute_model_kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward
hidden_states = self.model(input_ids, positions, kv_caches,
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward
hidden_states, residual = layer(positions, hidden_states,
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward
hidden_states = self.mlp(hidden_states)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
final_hidden_states = fused_moe(hidden_states,
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe
return fused_experts(hidden_states,
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts
invoke_fused_moe_kernel(hidden_states,
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
fused_moe_kernel[grid](
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run
device = driver.get_current_device()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr
self._initialize_obj()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj
self._obj = self._init_fn()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver
return CudaDriver()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init
self.utils = CudaUtils()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init
so = _build("cuda_utils", src_path, tmpdir)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build
ret = subprocess.check_call(cc_cmd)
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpw9q1ie7x/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpw9q1ie7x', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpw9q1ie7x/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1.
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last):
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid](
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmps4n0c8gr/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmps4n0c8gr', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmps4n0c8gr/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1.
(RayWorkerWrapper pid=66371) INFO 05-14 22:41:25 model_runner.py:166] Loading model weights took 56.1087 GB [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpezsumgls/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpezsumgls', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpezsumgls/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1.
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c: In function ‘list_to_cuuint64_array’:
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
(RayWorkerWrapper pid=65639) for (Py_ssize_t i = 0; i < len; i++) {
(RayWorkerWrapper pid=65639) ^
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c: In function ‘list_to_cuuint32_array’:
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
(RayWorkerWrapper pid=65639) for (Py_ssize_t i = 0; i < len; i++) {
(RayWorkerWrapper pid=65639) ^
(RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c: In function ‘list_to_cuuint64_array’:
(RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c: In function ‘list_to_cuuint32_array’:
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution. [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last): [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context [repeated 18x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) [repeated 18x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid]( [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init [repeated 12x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd) [repeated 6x across cluster]
(RayWorkerWrapper pid=66276) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp4yg1ha_1/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmp4yg1ha_1', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmp4yg1ha_1/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. [repeated 5x across cluster]
(RayWorkerWrapper pid=66276) /tmp/tmp4yg1ha_1/main.c: In function ‘list_to_cuuint32_array’: [repeated 10x across cluster]
(RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode [repeated 12x across cluster]
(RayWorkerWrapper pid=66371) for (Py_ssize_t i = 0; i < len; i++) { [repeated 12x across cluster]
(RayWorkerWrapper pid=66371) ^ [repeated 12x across cluster]
(RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code [repeated 6x across cluster]
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/site-packages/ray/_private/node.py", line 1443, in _kill_process_type
self._kill_process_impl(
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/site-packages/ray/_private/node.py", line 1499, in _kill_process_impl
process.wait(timeout_seconds)
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 1189, in wait
return self._wait(timeout=timeout)
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 1927, in _wait
time.sleep(delay)
KeyboardInterrupt
[rank0]:[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]

Process finished with exit code 1

@ericg108

Any update? Looking forward to it.

vllm/config.py (outdated)

@@ -250,6 +250,9 @@ def get_hidden_size(self) -> int:
        return self.hf_text_config.hidden_size

    def get_head_size(self) -> int:
        if hasattr(self.hf_text_config, "model_type") and self.hf_text_config.model_type=='deepseek_v2':
Collaborator

Can you add the head_dim to the huggingface config instead of hard coding this here?
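
A minimal sketch of what that suggestion could look like (hypothetical, written here as a standalone helper; the code that was actually merged may differ):

def get_head_size(hf_text_config) -> int:
    # Hypothetical sketch: prefer an explicit `head_dim` attribute on the
    # HuggingFace config over hard-coding a value per model type.
    head_dim = getattr(hf_text_config, "head_dim", None)
    if head_dim is not None:
        return head_dim
    # Fall back to the usual hidden_size / num_attention_heads derivation.
    return hf_text_config.hidden_size // hf_text_config.num_attention_heads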

@XintianHan

Does anybody have this error? Looking for help. Not sure what's happening here.

ERROR 07-02 20:01:56 multiproc_worker_utils.py:226] FileNotFoundError: [Errno 2] No such file or directory: '/home/tiger/.triton/cache/553406382df8add69b38e9a944d5bf22/fused_moe_kernel.cubin.tmp.pid_440913_441001'

I finally figured out that one previous commit works without this error. Just try

git clone https://github.com/zwd003/vllm.git
cd vllm
git checkout 28199d88a6b1a20c562bea4ee498874b009c67a5
pip3 install -e .

@mphilippnv commented Jul 18, 2024

Has anybody had luck running the 236B DeepSeek-V2 on vLLM yet? I was able to get the Lite Instruct model to run, but I can't get the full Instruct model to run, even on 8 H100s. I get told there's no available memory for cache blocks. I've been using these params:

--model deepseek-ai/DeepSeek-Coder-V2-Instruct --trust-remote-code --max-seq-len-to-capture 64000 --max-model-len 64000 --device cuda --gpu-memory-utilization 0.95 --tensor-parallel-size 8 --distributed-executor-backend ray --enforce-eager

Example error from logs:

[rank0]: File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]: engine = cls(

    (RayWorkerWrapper pid=12002) ERROR 07-18 11:37:25 worker_base.py:340] ValueError: No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine.
[rank0]: return engine_class(*args, **kwargs)
ERROR 07-18 11:37:25 worker_base.py:340] raise ValueError("No available memory for the cache blocks. "
[rank0]: self._run_workers("initialize_cache",
ERROR 07-18 11:37:25 worker_base.py:340] ValueError: No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine.
(RayWorkerWrapper pid=12002) ERROR 07-18 11:37:25 worker_base.py:340] raise ValueError("No available memory for the cache blocks. "
ERROR 07-18 11:37:25 worker_base.py:340] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 367, in raise_if_cache_size_invalid
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 375, in _initialize_kv_caches
[rank0]: self._initialize_kv_caches()
(RayWorkerWrapper pid=12002) ERROR 07-18 11:37:25 worker_base.py:340] Error executing method initialize_cache. This might cause deadlock in distributed execution.

I understand this is an enormous model, but their docs on Hugging Face say "If you want to utilize DeepSeek-Coder-V2 in BF16 format for inference, 80GB*8 GPUs are required." and I do have 8 80GB GPUs.

@daoxian commented Jul 22, 2024

Has anybody had luck running the 236B DeepSeek-V2 on vLLM yet? I was able to get the Lite Instruct model to run, but I can't get the full Instruct model to run, even on 8 H100s. I get told there's no available memory for cache blocks.

--max-model-len is too large; shrink it to less than 18000 (together with all related args) and retry.

@mphilippnv

mphilippnv commented Jul 22, 2024

Thanks, @daoxian. That seems to have gotten me farther with a new error about kv cache.
ValueError: The model's max seq len (16000) is larger than the maximum number of tokens that can be stored in KV cache (2368). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.

I don't see any options in vllm to specify KV cache size, but maybe I'm just missing something.

UPDATE: I was able to get it to work with a low context of 4096. However, I really need the large-context capabilities of this model. It would be good to know what needs to be done to use something like 64k or 128k context with this huge model. I'm assuming I just need more VRAM, but I even tried 16 H100s split across two nodes and it still doesn't work. I'm guessing that's because pipeline parallelism isn't supported for Deepseek yet.
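
For what it's worth, there is no direct "KV cache size" flag; the cache just gets whatever GPU memory is left after the weights, scaled by --gpu-memory-utilization. The knobs that trade context length against cache room look roughly like this (illustrative values only; --kv-cache-dtype fp8 is only an option if your vLLM build supports it):

python -m vllm.entrypoints.openai.api_server \
    --model deepseek-ai/DeepSeek-Coder-V2-Instruct \
    --trust-remote-code \
    --tensor-parallel-size 8 \
    --gpu-memory-utilization 0.98 \
    --max-model-len 8192 \
    --kv-cache-dtype fp8 \
    --enforce-eager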

xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 24, 2024
Co-authored-by: Philipp Moritz <pcmoritz@gmail.com>
@gabrielgrant

@mphilippnv were you ever able to get past that 4k context limit?

Anyone have a better sense of what changes would need to be implemented to make that possible?

@mphilippnv

@gabrielgrant It's definitely a memory issue. After conversing with my hardware people more, I found out our system only supports pipeline parallelism using MPI. Supposedly the Ray backend doesn't work on our system. Otherwise, with very large models like this, you basically need multi-node deployment. For example, 2 nodes with 8 GPUs each. Then you would flag --tensor-parallel-size 8 --pipeline-parallel-size 2. This would work, I believe.

Additionally, I was able to get it running at about 32k context using --quantization fp8. Neural Magic has also published an FP8-specific model: https://huggingface.co/neuralmagic/DeepSeek-Coder-V2-Instruct-FP8

These models are able to run on my 8 GPU setup and run pretty fast. Regardless, pipeline parallelism is still needed, I think, to get the max context out of it.
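
Roughly, the two-node layout I mean would look like this (a sketch, not something I've verified end-to-end for DeepSeek; the head-node IP is a placeholder):

# on node 0 (head)
ray start --head --port=6379
# on node 1
ray start --address=<node0-ip>:6379
# then, from node 0
python -m vllm.entrypoints.openai.api_server \
    --model deepseek-ai/DeepSeek-Coder-V2-Instruct \
    --trust-remote-code \
    --tensor-parallel-size 8 \
    --pipeline-parallel-size 2 \
    --distributed-executor-backend ray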

@KylinMountain

@mphilippnv Is it still not able to run on 8x H100 with 128k context? Can you share your start command? Thanks.

@mphilippnv

@KylinMountain I'm running the vllm openai docker container v0.5.4. I'm passing these engine args:

--model deepseek-ai/DeepSeek-Coder-V2-Instruct --trust-remote-code --max-seq-len-to-capture 64000 --max-model-len 64000 --device cuda --gpu-memory-utilization 0.95 --tensor-parallel-size 8 --distributed-executor-backend ray --enforce-eager

That runs out of memory saying "there's not enough memory for cache blocks". I've been able to get it to run with these settings:

--model deepseek-ai/DeepSeek-Coder-V2-Instruct --trust-remote-code --max-model-len 30000 --device cuda --tensor-parallel-size 8 --disable-log-stats --quantization fp8 --gpu-memory-utilization 0.95 --block-size 32

The fp8 quantization helps. But notice the context is still 30k. I can't even get 64k running, let alone 120k, unfortunately.

@gabrielgrant

gabrielgrant commented Aug 13, 2024

Ah, cool, I hadn't seen that neuralmagic FP8 version. Very interesting that they claim it has better HumanEval+ performance than the original (bottom of the overview): "It achieves an average score of 88.98 on the HumanEval+ benchmark, whereas the unquantized model achieves 87.63."

Curious if you've had a chance to try any of the more aggressive quantizations by bartowski (https://huggingface.co/bartowski/DeepSeek-Coder-V2-Instruct-GGUF), LoneStriker, or legraphista?

@mphilippnv

@gabrielgrant I have not had a chance to try the more aggressive ones. I don't think vllm supports GGUF yet, even though I know there is an open issue being worked on for it.

@gabrielgrant

@mphilippnv AFAIU it just landed a few days ago! #5191
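
If it works the way GGUF loading does for other models, usage should look something like the sketch below (untested for DeepSeek; the quant filename is a placeholder, and whether the initial GGUF support covers DeepSeek-V2's MoE architecture is a separate question):

python -m vllm.entrypoints.openai.api_server \
    --model ./DeepSeek-Coder-V2-Instruct-Q4_K_M.gguf \
    --tokenizer deepseek-ai/DeepSeek-Coder-V2-Instruct \
    --trust-remote-code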

@Jeffwan
Contributor

Jeffwan commented Aug 17, 2024

@mphilippnv A quick question on the parallelism setting

Otherwise, with very large models like this, you basically need multi-node deployment. For example, 2 nodes with 8 GPU's each. Then you would flag --tensor-parallel-size 8 --pipeline-parallel-size 2.

Does TP+PP still work for MoE models like DeepSeek V2? If so, we can definitely use multi-host inference to support a higher context window size without quantization, right?

@mphilippnv

@Jeffwan I'm not sure. I haven't had a chance to really dive into getting our multi-node pipeline parallelism working. But yeah, if we can use multi-node, then I don't see why I wouldn't be able to get full context size across 16 80GB GPUs.

@zhyncs
Contributor

zhyncs commented Aug 19, 2024

SGLang https://github.com/sgl-project/sglang/ now supports DeepSeek V2 MLA. It should be the fastest among all current open-source implementations. Give it a try! If you have any issues with usage, feel free to provide feedback.

# install
git clone https://github.com/sgl-project/sglang.git
cd sglang
pip install --upgrade pip
pip install -e "python[all]"
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/

# server
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V2 --port 30000 --trust-remote-code --disable-radix-cache --enable-mla --tp=8
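
# example request against the server above (SGLang's native /generate endpoint;
# the field names follow their README and are an assumption, adjust if the API differs)
curl http://localhost:30000/generate \
    -H "Content-Type: application/json" \
    -d '{"text": "The future of AI is", "sampling_params": {"temperature": 0, "max_new_tokens": 32}}'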

@zhyncs zhyncs mentioned this pull request Aug 19, 2024
@KylinMountain

KylinMountain commented Aug 20, 2024

SGLang https://github.com/sgl-project/sglang/ now supports DeepSeek V2 MLA. It should be the fastest among all current open-source implementations. Give it a try! If you have any issues with usage, feel free to provide feedback.

# install
git clone https://github.com/sgl-project/sglang.git
cd sglang
pip install --upgrade pip
pip install -e "python[all]"
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/

# server
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V2 --port 30000 --trust-remote-code --disable-radix-cache --enable-mla --tp=8

@zhyncs Thank you very much. Will give it a try, but I want to know why this needs to disable the radix cache? I will run the 236B DeepSeek on 8x H100.

@zhyncs
Contributor

zhyncs commented Aug 20, 2024

why this needs to disable the radix cache?

@KylinMountain You can enable it. It doesn't matter.

@halexan

halexan commented Aug 28, 2024

Any update for MLA?

@mphilippnv

Ok, so I finally got my Helm chart set up so I can run pipeline parallelism on the large model. I have Ray set up on my pods and was able to serve 405B at full context. So I went to try DeepSeek 2.5 full and ran into this exception. It looks like maybe a Ray-specific exception and not vLLM related, but posting here anyway:

(RayWorkerWrapper pid=1415)           ^^^^^^^^^^^^^^^^^^^^^ [repeated 6x across cluster]
(RayWorkerWrapper pid=1415) Traceback (most recent call last): [repeated 6x across cluster]
(RayWorkerWrapper pid=1415)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [repeated 6x across cluster]

  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 138, in from_engine_args
    self._init_workers_ray(placement_group)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
           ^^^^^^^^^^^^^^^^^^^^^
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'transformers_modules.deepseek-ai.DeepSeek-V2'
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    self._target(*self._args, **self._kwargs)
                  ^^^^^^^^^^^^^^^^
    self._init_workers_ray(placement_group)
    self._run_workers("init_worker", all_kwargs=init_worker_all_kwargs)
ray.exceptions.RaySystemError: System error: No module named 'transformers_modules.deepseek-ai.DeepSeek-V2'
traceback: Traceback (most recent call last):
          ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 47, in __init__
ray.exceptions.RayTaskError(RaySystemError): ray::RayWorkerWrapper.execute_method() (pid=1571, ip=10.60.19.190, actor_id=aca490896fa5568d0d16bc9701000000, repr=<vllm.executor.ray_utils.RayWorkerWrapper object at 0x7f6dfa4be210>)
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/ray_gpu_executor.py", line 424, in _run_workers
    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
    self._run_workers("init_worker", all_kwargs=init_worker_all_kwargs)
                  ^^^^^^^^^^^^^^^^
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 138, in from_engine_args
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 78, in __init__
           ^^^^^^^^^^^^^^^^^^^
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/ray/_private/worker.py", line 871, in get_objects
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
    self.model_executor = executor_class(
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 325, in __init__
  File "/usr/local/lib/python3.12/dist-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)

Here are my vllm args:

--model deepseek-ai/DeepSeek-V2.5 --trust-remote-code --max-model-len 120000 --device cuda --tensor-parallel-size 8 --pipeline-parallel-size 2 --distributed-executor-backend ray --disable-log-stats --gpu-memory-utilization 0.95 --block-size 32 --num-scheduler-steps 10 --enable-chunked-prefill false

@youkaichao
Member

@mphilippnv can you try to see if #6751 helps?

@mphilipp622

@youkaichao this looks exactly like the issue. I guess I will wait for the merge. Hopefully it makes it into the next release. Thanks!

@youkaichao
Member

Can you try it first and report the benefit in #6751? That would help give us confidence to merge it.

@mphilipp622

@youkaichao Sure. Will take me a day or so. Need to update my Dockerfile to install that branch and use it. Will report back on that issue you linked.
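
For reference, one way to pull that PR into an image build is to check out the PR head and install from source (a sketch; vLLM compiles its kernels, so this step takes a while):

git clone https://github.com/vllm-project/vllm.git
cd vllm
git fetch origin pull/6751/head:pr-6751
git checkout pr-6751
pip install -e .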

Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
Co-authored-by: Philipp Moritz <pcmoritz@gmail.com>
Signed-off-by: Alvant <alvasian@yandex.ru>
@SeveredAsif

ERROR 05-08 20:22:08 worker_base.py:145] ValueError: Model architectures ['DeepseekV2ForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'CohereForCausalLM', 'DbrxForCausalLM', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'JAISLMHeadModel', 'LlamaForCausalLM', 'LlavaForConditionalGeneration', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'MiniCPMForCausalLM', 'OlmoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'XverseForCausalLM']

I am facing the same problem, what's the solution?

@youkaichao
Member

@SeveredAsif upgrade your vllm version
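
i.e. something like the following (any release that includes this PR, which was merged on Jun 28, 2024; when in doubt just take the latest):

pip install -U vllm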

@maxcccc

maxcccc commented Nov 26, 2024

Is there anybody who can help solve this issue?

The following error is raised when loading the model:

Cache shape torch.Size([163840, 64]) [repeated 6x across cluster] INFO 05-14 22:41:26 model_runner.py:166] Loading model weights took 56.1087 GB /tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint64_array’: /tmp/tmpw9q1ie7x/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode for (Py_ssize_t i = 0; i < len; i++) { ^ /tmp/tmpw9q1ie7x/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code /tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint32_array’: /tmp/tmpw9q1ie7x/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode for (Py_ssize_t i = 0; i < len; i++) { ^ ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution. ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last): ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run() ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl ERROR 05-14 22:41:31 worker_base.py:145] 
return self._call_impl(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states, ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states, ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid]( ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device() ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj() 
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn() ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver() ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils() ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpw9q1ie7x/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpw9q1ie7x', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpw9q1ie7x/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. 
python-BaseException Traceback (most recent call last): File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 146, in execute_method raise e File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method return executor(*args, **kwargs) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks self.model_runner.profile_run() File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run self.execute_model(seqs, kv_caches) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model hidden_states = model_executable(**execute_model_kwargs) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward hidden_states = self.model(input_ids, positions, kv_caches, File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward hidden_states, residual = layer(positions, hidden_states, File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward hidden_states = self.mlp(hidden_states) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward final_hidden_states = fused_moe(hidden_states, File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe return 
fused_experts(hidden_states, File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts invoke_fused_moe_kernel(hidden_states, File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel fused_moe_kernel[grid]( File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run device = driver.get_current_device() File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr self._initialize_obj() File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj self._obj = self._init_fn() File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver return CudaDriver() File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init self.utils = CudaUtils() File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init so = _build("cuda_utils", src_path, tmpdir) File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build ret = subprocess.check_call(cc_cmd) File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpw9q1ie7x/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpw9q1ie7x', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpw9q1ie7x/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution. 
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last): (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 
worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid]( (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File 
"/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmps4n0c8gr/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmps4n0c8gr', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmps4n0c8gr/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. 
(RayWorkerWrapper pid=66371) INFO 05-14 22:41:25 model_runner.py:166] Loading model weights took 56.1087 GB [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpezsumgls/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpezsumgls', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpezsumgls/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c: In function ‘list_to_cuuint64_array’: (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode (RayWorkerWrapper pid=65639) for (Py_ssize_t i = 0; i < len; i++) { (RayWorkerWrapper pid=65639) ^ (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c: In function ‘list_to_cuuint32_array’: (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode (RayWorkerWrapper pid=65639) for (Py_ssize_t i = 0; i < len; i++) { (RayWorkerWrapper pid=65639) ^ (RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c: In function ‘list_to_cuuint64_array’: (RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c: In function ‘list_to_cuuint32_array’: (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution. [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last): [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context [repeated 18x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) [repeated 18x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File 
"/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid]( [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File 
"/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init [repeated 12x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd) [repeated 6x across cluster] (RayWorkerWrapper pid=66276) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp4yg1ha_1/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmp4yg1ha_1', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmp4yg1ha_1/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. 
[repeated 5x across cluster] (RayWorkerWrapper pid=66276) /tmp/tmp4yg1ha_1/main.c: In function ‘list_to_cuuint32_array’: [repeated 10x across cluster] (RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode [repeated 12x across cluster] (RayWorkerWrapper pid=66371) for (Py_ssize_t i = 0; i < len; i++) { [repeated 12x across cluster] (RayWorkerWrapper pid=66371) ^ [repeated 12x across cluster] (RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code [repeated 6x across cluster] Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/site-packages/ray/_private/node.py", line 1443, in _kill_process_type self._kill_process_impl( File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/site-packages/ray/_private/node.py", line 1499, in _kill_process_impl process.wait(timeout_seconds) File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 1189, in wait return self._wait(timeout=timeout) File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 1927, in _wait time.sleep(delay) KeyboardInterrupt [rank0]:[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]

Process finished with exit code 1

(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last): (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 
worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid]( (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File 
"/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils() (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd) (RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmps4n0c8gr/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmps4n0c8gr', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmps4n0c8gr/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. 
(RayWorkerWrapper pid=66371) INFO 05-14 22:41:25 model_runner.py:166] Loading model weights took 56.1087 GB [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpezsumgls/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpezsumgls', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpezsumgls/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c: In function ‘list_to_cuuint64_array’: (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode (RayWorkerWrapper pid=65639) for (Py_ssize_t i = 0; i < len; i++) { (RayWorkerWrapper pid=65639) ^ (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c: In function ‘list_to_cuuint32_array’: (RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode (RayWorkerWrapper pid=65639) for (Py_ssize_t i = 0; i < len; i++) { (RayWorkerWrapper pid=65639) ^ (RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c: In function ‘list_to_cuuint64_array’: (RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c: In function ‘list_to_cuuint32_array’: (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution. [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last): [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context [repeated 18x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) [repeated 18x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File 
"/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward [repeated 24x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid]( [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File 
"/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init [repeated 12x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils() [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd) [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call [repeated 6x across cluster] (RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd) [repeated 6x across cluster] (RayWorkerWrapper pid=66276) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp4yg1ha_1/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmp4yg1ha_1', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmp4yg1ha_1/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. 
[repeated 5x across cluster] (RayWorkerWrapper pid=66276) /tmp/tmp4yg1ha_1/main.c: In function ‘list_to_cuuint32_array’: [repeated 10x across cluster] (RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode [repeated 12x across cluster] (RayWorkerWrapper pid=66371) for (Py_ssize_t i = 0; i < len; i++) { [repeated 12x across cluster] (RayWorkerWrapper pid=66371) ^ [repeated 12x across cluster] (RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code [repeated 6x across cluster] Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/site-packages/ray/_private/node.py", line 1443, in _kill_process_type self._kill_process_impl( File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/site-packages/ray/_private/node.py", line 1499, in _kill_process_impl process.wait(timeout_seconds) File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 1189, in wait return self._wait(timeout=timeout) File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 1927, in _wait time.sleep(delay) KeyboardInterrupt [rank0]:[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]

Process finished with exit code 1

@maxcccc
Copy link

maxcccc commented Nov 27, 2024

Is there anybody who can help solve this issue?

The following error is reported when loading the model (the rest of the traceback is identical to the log above):

Cache shape torch.Size([163840, 64]) [repeated 6x across cluster]
INFO 05-14 22:41:26 model_runner.py:166] Loading model weights took 56.1087 GB
/tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint64_array’:
/tmp/tmpw9q1ie7x/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
for (Py_ssize_t i = 0; i < len; i++) {
^
/tmp/tmpw9q1ie7x/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code
/tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint32_array’:
/tmp/tmpw9q1ie7x/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
for (Py_ssize_t i = 0; i < len; i++) {
^
ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
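The compiler note in the log already points at the likely root cause: the gcc that Triton's runtime build invokes defaults to a pre-C99 mode, so the generated cuda_utils source (which uses `for (Py_ssize_t i = 0; ...)` declarations) fails to compile, and the build command shown in the log does not pass `-std=c99` itself. As a rough standalone check before retrying vLLM, something along the lines of the sketch below can be run on the failing machine; the helper name `default_gcc_accepts_c99_for_loops` is purely illustrative and not part of vLLM or Triton.

```python
# Minimal standalone check (not part of vLLM or Triton): compile a tiny C file
# with the system gcc's *default* standard, the same way Triton's runtime build
# invokes gcc (no -std flag), to see whether the default mode predates C99.
import os
import subprocess
import tempfile
import textwrap


def default_gcc_accepts_c99_for_loops(cc: str = "gcc") -> bool:
    """Return True if `cc`, in its default mode, accepts C99 for-loop declarations."""
    src = textwrap.dedent("""\
        int main(void) {
            int total = 0;
            for (int i = 0; i < 4; i++) {  /* rejected by a gnu89/gnu90 default mode */
                total += i;
            }
            return total;
        }
    """)
    with tempfile.TemporaryDirectory() as tmpdir:
        c_path = os.path.join(tmpdir, "c99_check.c")
        with open(c_path, "w") as f:
            f.write(src)
        # No -std flag on purpose: we want the compiler's default standard.
        result = subprocess.run(
            [cc, c_path, "-o", os.path.join(tmpdir, "c99_check")],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            print(result.stderr)
        return result.returncode == 0


if __name__ == "__main__":
    print("default gcc mode accepts C99 for-loop declarations:",
          default_gcc_accepts_c99_for_loops())
```

If this check fails, it reproduces the same "‘for’ loop initial declarations are only allowed in C99 mode" diagnostic as above, and upgrading to a gcc release whose default C standard is newer than C90 (gcc 5 or later) typically lets the Triton cuda_utils build, and hence vLLM's fused MoE kernel launch, succeed.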
Process finished with exit code 1

The following error is reported while loading the model:
Cache shape torch.Size([163840, 64]) [repeated 6x across cluster]
INFO 05-14 22:41:26 model_runner.py:166] Loading model weights took 56.1087 GB
/tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint64_array’:
/tmp/tmpw9q1ie7x/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
   for (Py_ssize_t i = 0; i < len; i++) {
   ^
/tmp/tmpw9q1ie7x/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code
/tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint32_array’:
/tmp/tmpw9q1ie7x/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last):
  File ".../vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks
    self.model_runner.profile_run()
  File ".../vllm/vllm/worker/model_runner.py", line 873, in profile_run
    self.execute_model(seqs, kv_caches)
  File ".../vllm/vllm/worker/model_runner.py", line 792, in execute_model
    hidden_states = model_executable(**execute_model_kwargs)
  File ".../vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
  File ".../vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward
    hidden_states, residual = layer(positions, hidden_states,
  File ".../vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward
    hidden_states = self.mlp(hidden_states)
  File ".../vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
    final_hidden_states = fused_moe(hidden_states,
  File ".../vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe
    return fused_experts(hidden_states,
  File ".../vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts
    invoke_fused_moe_kernel(hidden_states,
  File ".../vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
    fused_moe_kernel[grid](
  File ".../triton/runtime/jit.py", line 167, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
  File ".../triton/runtime/jit.py", line 363, in run
    device = driver.get_current_device()
  File ".../triton/runtime/driver.py", line 209, in __getattr__
    self._initialize_obj()
  File ".../triton/runtime/driver.py", line 206, in _initialize_obj
    self._obj = self._init_fn()
  File ".../triton/runtime/driver.py", line 239, in initialize_driver
    return CudaDriver()
  File ".../triton/runtime/driver.py", line 102, in __init__
    self.utils = CudaUtils()
  File ".../triton/runtime/driver.py", line 49, in __init__
    so = _build("cuda_utils", src_path, tmpdir)
  File ".../triton/common/build.py", line 106, in _build
    ret = subprocess.check_call(cc_cmd)
  File ".../subprocess.py", line 373, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpw9q1ie7x/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpw9q1ie7x', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpw9q1ie7x/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1.
The RayWorkerWrapper processes on the other ranks fail with the same C99 compile error while Triton builds its cuda_utils extension [repeated 6x across cluster].
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File ".../ray/_private/node.py", line 1443, in _kill_process_type
    self._kill_process_impl(
  File ".../ray/_private/node.py", line 1499, in _kill_process_impl
    process.wait(timeout_seconds)
  File ".../subprocess.py", line 1189, in wait
    return self._wait(timeout=timeout)
  File ".../subprocess.py", line 1927, in _wait
    time.sleep(delay)
KeyboardInterrupt
[rank0]:[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
Process finished with exit code 1
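The failure is in the host toolchain rather than in this PR: gcc 4.8 defaults to the C90/gnu90 standard, so it rejects the C99-style `for` loop declarations in the cuda_utils.c source that Triton generates and compiles at runtime. A quick way to check whether a node's default compiler will hit this is sketched below; the compiler-selection logic only approximates Triton's (an assumption about its behavior), and the file names are made up for illustration.

import os
import shutil
import subprocess
import tempfile

# Tiny C program using a C99 for-loop declaration -- the construct that the
# cuda_utils.c generated by Triton relies on and that gcc 4.8 rejects by default.
SRC = "int main(void) { for (int i = 0; i < 3; i++) {} return 0; }\n"

# Roughly mirror how Triton picks a compiler (assumption): honor $CC if set,
# otherwise fall back to gcc/cc found on PATH.
cc = os.environ.get("CC") or shutil.which("gcc") or shutil.which("cc")
if cc is None:
    raise RuntimeError("no C compiler found on PATH")

with tempfile.TemporaryDirectory() as tmpdir:
    src_path = os.path.join(tmpdir, "c99_check.c")
    with open(src_path, "w") as f:
        f.write(SRC)
    ret = subprocess.call([cc, src_path, "-O3", "-o", os.path.join(tmpdir, "c99_check")])

print(cc, "OK" if ret == 0 else "rejects C99 for-loop declarations -- upgrade gcc")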

This issue has been solved by upgrading the gcc toolchain from 4.8.5 to devtoolset-7 (GCC 7.x).
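For anyone hitting the same failure, a minimal sketch of the workaround, assuming a CentOS 7 host where devtoolset-7 was installed via Software Collections (the package names and the /opt/rh path below are assumptions about a standard SCL layout), and assuming the installed Triton honors the CC environment variable when it compiles cuda_utils -- verify against your triton/common/build.py:

import os
import subprocess

# Assumed SCL install, e.g.:
#   sudo yum install -y centos-release-scl
#   sudo yum install -y devtoolset-7-gcc devtoolset-7-gcc-c++
DEVTOOLSET_GCC = "/opt/rh/devtoolset-7/root/usr/bin/gcc"  # assumed path

# Sanity check: gcc 5+ defaults to a C standard that accepts C99 for-loop
# declarations, so require a major version of at least 5.
major = int(subprocess.check_output([DEVTOOLSET_GCC, "-dumpversion"], text=True).split(".")[0])
assert major >= 5, f"gcc at {DEVTOOLSET_GCC} is still too old: {major}"

# Export CC before vllm/triton are imported so the JIT build of cuda_utils
# uses the newer compiler. For multi-GPU runs that spawn Ray workers, export
# CC in the shell instead (e.g. `scl enable devtoolset-7 -- python run.py`)
# so the worker processes inherit it as well.
os.environ["CC"] = DEVTOOLSET_GCC

Running the whole job from an `scl enable devtoolset-7 bash` shell, so that `gcc` on PATH resolves to the newer toolchain, should achieve the same result without touching Python.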

Labels: new model (Requests to new models)
Development

Successfully merging this pull request may close these issues: DeepSeekCoderV2