minor: update flashinfer nightly #2490

Merged
merged 1 commit into main from zhyncs/minor on Dec 16, 2024
Conversation

zhyncs (Member) commented Dec 16, 2024

Motivation

Modifications

Checklist

  • Format your code according to the Contributor Guide.
  • Add unit tests as outlined in the Contributor Guide.
  • Update documentation as needed, including docstrings or example tutorials.

@@ -10,7 +10,7 @@ pip install --upgrade pip
 pip install -e "python[all]" --find-links https://flashinfer.ai/whl/cu121/torch2.4/flashinfer/

 # Force reinstall flashinfer
-pip install flashinfer -i ${FLASHINFER_REPO} --force-reinstall
+pip install flashinfer --find-links ${FLASHINFER_REPO} --force-reinstall --no-deps
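
The switch from -i to --find-links matters here: -i treats ${FLASHINFER_REPO} as a PEP 503 package index, whereas --find-links treats it as a flat page of wheel links, and --no-deps keeps pip from reinstalling flashinfer's dependencies over the existing environment. A minimal sketch of the resulting step, using the stable wheel page from this script as a stand-in value (the actual FLASHINFER_REPO comes from the CI environment):

# The FLASHINFER_REPO value below is illustrative only; CI supplies the real one
FLASHINFER_REPO="https://flashinfer.ai/whl/cu121/torch2.4/flashinfer/"
pip install flashinfer --find-links ${FLASHINFER_REPO} --force-reinstall --no-deps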
zhyncs (Member Author) commented on the diff:
There are two scenarios: a stable release and a nightly release. The stable scenario hits the cache because the script installs the stable version by default, so it is very fast. The nightly scenario only triggers when a user manually selects a nightly release during PR testing; it is slow only on the first installation, since later runs also hit the nightly cache and are likewise very fast.
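
A rough sketch of the branching described above; the variable name and structure are assumptions for illustration, not the actual CI script:

# Sketch: default to the cached stable wheel; only PR testing with nightly
# manually selected takes the slower first-time nightly path.
if [ "${FLASHINFER_BUILD:-stable}" = "nightly" ]; then
    # Cached after the first nightly install, so later runs are fast too
    pip install flashinfer --find-links ${FLASHINFER_REPO} --force-reinstall --no-deps
else
    # Default: stable release served from the cache, very fast
    pip install flashinfer --find-links https://flashinfer.ai/whl/cu121/torch2.4/flashinfer/
fi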

zhyncs merged commit 7154b4b into main on Dec 16, 2024
3 of 15 checks passed
zhyncs deleted the zhyncs/minor branch on December 16, 2024 at 15:02
zhyncs (Member Author) commented Dec 16, 2024

https://github.com/flashinfer-ai/flashinfer/tree/zhyncs/main
https://github.com/flashinfer-ai/flashinfer-nightly/releases/tag/0.1.6%2B51236c9

2024-12-16 14:55:59,006 - INFO - flashinfer.jit: Loading JIT ops: batch_decode_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_128_posenc_0_use_swa_False_use_logits_cap_False
/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. 
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
[rank0]: Traceback (most recent call last):
[rank0]:   File "/actions-runner/_work/sglang/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 207, in __init__
[rank0]:     self.capture()
[rank0]:   File "/actions-runner/_work/sglang/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line [26](https://github.com/sgl-project/sglang/actions/runs/12355218604/job/34478328251#step:4:27)8, in capture
[rank0]:     ) = self.capture_one_batch_size(bs, forward)
[rank0]:   File "/actions-runner/_work/sglang/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 297, in capture_one_batch_size
[rank0]:     self.model_runner.attn_backend.init_forward_metadata_capture_cuda_graph(
[rank0]:   File "/actions-runner/_work/sglang/sglang/python/sglang/srt/layers/attention/flashinfer_backend.py", line 194, in init_forward_metadata_capture_cuda_graph
[rank0]:     self.indices_updater_decode.update(
[rank0]:   File "/actions-runner/_work/sglang/sglang/python/sglang/srt/layers/attention/flashinfer_backend.py", line 378, in update_single_wrapper
[rank0]:     self.call_begin_forward(
[rank0]:   File "/actions-runner/_work/sglang/sglang/python/sglang/srt/layers/attention/flashinfer_backend.py", line 478, in call_begin_forward
[rank0]:     wrapper.begin_forward(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/flashinfer/decode.py", line 788, in plan
[rank0]:     self._cached_module = get_batch_decode_module(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/flashinfer/decode.py", line 148, in get_batch_decode_module
[rank0]:     mod = gen_batch_decode_module(*args)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/flashinfer/jit/attention.py", line 173, in gen_batch_decode_module
[rank0]:     return load_cuda_ops(uri, source_paths)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/flashinfer/jit/core.py", line 110, in load_cuda_ops
[rank0]:     module = torch_cpp_ext.load(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1[31](https://github.com/sgl-project/sglang/actions/runs/12355218604/job/34478328251#step:4:32)4, in load
[rank0]:     return _jit_compile(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1721, in _jit_compile
[rank0]:     _write_ninja_file_and_build_library(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1803, in _write_ninja_file_and_build_library
[rank0]:     verify_ninja_availability()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1852, in verify_ninja_availability
[rank0]:     raise RuntimeError("Ninja is required to load C++ extensions")
[rank0]: RuntimeError: Ninja is required to load C++ extensions

[rank0]: During handling of the above exception, another exception occurred:

[rank0]: Traceback (most recent call last):
[rank0]:   File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]:     return _run_code(code, main_globals, None,
[rank0]:   File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
