[CI] change spell checker from codespell to typos #18711
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀 […]
Overall this looks reasonable to me. Can you remove the codespell config from pyproject.toml?
@mgoin can you verify that the changes to the kernels are OK?
Please revert the changes to cudaDevAttrMaxSharedMemoryPerBlockOptin and the transformers test
I don't know if I like this. I think it will be annoying and error-prone if developers need to define functional exceptions like this during their work
csrc/cuda_utils_kernels.cu
Outdated
-  attribute = cudaDevAttrMaxSharedMemoryPerBlockOptin;
+  attribute = cudaDevAttrMaxSharedMemoryPerBlockOption;
I don't think this is valid. I think Optin is right here
It seems cudaDevAttrMaxSharedMemoryPerBlockOptin is predefined in the CUDA SDK. Will do.
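For context: cudaDevAttrMaxSharedMemoryPerBlockOptin is indeed the spelling defined by the CUDA runtime headers ("Optin" is intentional, short for "opt-in"), so a spell checker must not rewrite it. Below is a minimal standalone sketch, separate from the vLLM kernels in this diff (my_kernel is a hypothetical placeholder), showing how the attribute is queried and why the opt-in naming exists:

#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel that uses dynamically sized shared memory.
__global__ void my_kernel() {
  extern __shared__ char smem[];
  smem[threadIdx.x] = 0;
}

int main() {
  int dev = 0;
  int max_optin = 0;
  // Query the opt-in limit; note the "Optin" spelling from the CUDA headers.
  cudaDeviceGetAttribute(&max_optin,
                         cudaDevAttrMaxSharedMemoryPerBlockOptin, dev);
  printf("Opt-in max shared memory per block: %d bytes\n", max_optin);

  // A kernel must explicitly opt in before it can be launched with more
  // dynamic shared memory than the default per-block limit (hence "opt-in").
  cudaFuncSetAttribute(my_kernel,
                       cudaFuncAttributeMaxDynamicSharedMemorySize, max_optin);
  my_kernel<<<1, 32, max_optin>>>();
  return cudaDeviceSynchronize() == cudaSuccess ? 0 : 1;
}

This is the same query pattern used in the vLLM snippets flagged in this review, which is why the automated rename to "Option" would break the build.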
csrc/cutlass_extensions/common.hpp
Outdated
@@ -18,7 +18,7 @@
inline int get_cuda_max_shared_memory_per_block_opt_in(int const device) {
  int max_shared_mem_per_block_opt_in = 0;
  cudaDeviceGetAttribute(&max_shared_mem_per_block_opt_in,
                         cudaDevAttrMaxSharedMemoryPerBlockOptin, device);
Ditto
Done
csrc/moe/marlin_moe_wna16/ops.cu
Outdated
@@ -584,7 +584,7 @@ void marlin_mm(const void* A, const void* B, void* C, void* C_tmp, void* s,
  int max_shared_mem = 0;
  cudaDeviceGetAttribute(&max_shared_mem,
                         cudaDevAttrMaxSharedMemoryPerBlockOptin, dev);
Ditto
Done
csrc/moe/moe_align_sum_kernels.cu
Outdated
@@ -302,7 +302,7 @@ void moe_align_block_size(torch::Tensor topk_ids, int64_t num_experts,
  int device_max_shared_mem;
  auto dev = topk_ids.get_device();
  cudaDeviceGetAttribute(&device_max_shared_mem,
                         cudaDevAttrMaxSharedMemoryPerBlockOptin, dev);
Ditto
Done
@DarkLight1337 Done.
Yes, this is true. Some trade-offs need to be made if we decide to apply this PR.
This pull request has merge conflicts that must be resolved before it can be merged.
tests/lora/test_transforms_model.py
Outdated
Can you fix this? Maybe we can disable renaming files entirely
Yes. Will do.
@mgoin File name spell checking has been disabled by setting check-filename to false in the typos.yaml file.
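The repository's actual typos configuration isn't reproduced in this thread, so the following is only an illustrative sketch in typos' native TOML form (the typos documentation provides a check-filename option and an extend-identifiers table; every value below is an assumption, not the PR's real settings):

# Hypothetical typos config sketch (typos natively reads _typos.toml); the
# PR's real file is referred to above as typos.yaml and is not shown here.
[default]
# Skip spell checking of file names, so files are never flagged for renaming.
check-filename = false

[default.extend-identifiers]
# Pin identifiers that typos would otherwise "correct" (Optin -> Option).
cudaDevAttrMaxSharedMemoryPerBlockOptin = "cudaDevAttrMaxSharedMemoryPerBlockOptin"

This kind of allowlist is the "functional exception" trade-off discussed above: each false positive must be pinned once, after which CI stays quiet for that identifier.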
Signed-off-by: Andy Xie <andy.xning@gmail.com>
It seems that most of the special escapes needed for spell checking are for CUDA and NVML identifiers; the other changes in this PR are clean and clear enough. There are two options for trading off this spell-check change against future development overhead:
So, I prefer option 2. /cc @mgoin
This pull request has merge conflicts that must be resolved before it can be merged.
Currently, codespell cannot find all possible typos. According to this comparison, typos has better performance and correctness. typos also supports both pre-commit and GitHub Actions; a sketch of the pre-commit side follows.
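As a hedged illustration only (the exact hook and version pinned by this PR are not shown in this thread), typos publishes a pre-commit hook in the crate-ci/typos repository; a minimal .pre-commit-config.yaml entry might look like:

# Hypothetical .pre-commit-config.yaml entry; the rev below is a placeholder,
# not the version this PR pins.
repos:
  - repo: https://github.com/crate-ci/typos
    rev: v1.0.0  # placeholder; pin an actual release tag
    hooks:
      - id: typos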