
Commit 329f914

fix failed CI on CUDA spawn issue

torch.cuda.is_available() does not work reliably in the multiprocessing (spawn) case, so we switch to is_cpu() to check the device.

Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>

1 parent bd02fad

1 file changed: +3 −2 lines

tests/conftest.py (3 additions, 2 deletions)

```diff
@@ -16,6 +16,7 @@
 from vllm.inputs import TextPrompt
 from vllm.logger import init_logger
 from vllm.sequence import MultiModalData, SampleLogprobs
+from vllm.utils import is_cpu

 logger = init_logger(__name__)

@@ -55,7 +56,7 @@ def cleanup():
     with contextlib.suppress(AssertionError):
         torch.distributed.destroy_process_group()
     gc.collect()
-    if torch.cuda.is_available():
+    if not is_cpu():
         torch.cuda.empty_cache()


@@ -144,7 +145,7 @@ def example_long_prompts() -> List[str]:
 class HfRunner:

     def wrap_device(self, input: any):
-        if torch.cuda.is_available():
+        if not is_cpu():
             return input.cuda()
         else:
             return input.cpu()
```
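The motivation for the patch is that torch.cuda.is_available() initializes a CUDA context in the calling process, which can fail or misbehave when test workers are created via multiprocessing. A device check that never touches the CUDA runtime avoids this. The sketch below illustrates the pattern with a hypothetical VLLM_TARGET_DEVICE environment variable; it is not vLLM's actual is_cpu() implementation, just a minimal stand-in for a check driven by build/config state rather than runtime CUDA probing.

```python
import os

def is_cpu() -> bool:
    # Hedged sketch: decide the target device from configuration
    # (a hypothetical VLLM_TARGET_DEVICE env var) instead of calling
    # torch.cuda.is_available(), which would initialize a CUDA context
    # and can break in spawned/forked worker processes.
    return os.environ.get("VLLM_TARGET_DEVICE", "cuda") == "cpu"

# Usage: select the backend once, then branch on is_cpu() everywhere,
# mirroring the `if not is_cpu(): ...` checks in the patch above.
os.environ["VLLM_TARGET_DEVICE"] = "cpu"
print(is_cpu())  # True
```

The design point is that the check is a pure read of process configuration, so it is safe to call from any worker regardless of how that worker was started.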

0 commit comments
