
[Misc] Clean up useless code for LLM initialize #1373


Merged: 1 commit merged into vllm-project:main on Jun 25, 2025

Conversation

@wangxiyuan (Collaborator) commented on Jun 23, 2025

This PR cleans up unused code in the LLM setup path to make it clearer:

  1. Remove unused self.xxx properties.
  2. Replace set_random_seed with seed_everything (see the sketch below).
  3. Remove set_custom_all_reduce; it is only used for CUDA.

This is purely a code cleanup; no code logic changes.
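
For item 2, here is a minimal sketch of what a seed_everything-style helper typically covers, seeding every framework's RNG in one call instead of relying on a narrower set_random_seed; the body below is illustrative, not the project's actual implementation:

    import random

    import numpy as np
    import torch

    def seed_everything(seed: int) -> None:
        # Illustrative sketch: seed every RNG the worker touches in one
        # place, instead of scattering per-framework seeding calls.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(seed)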

codecov bot commented Jun 23, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 27.25%. Comparing base (c30ddb8) to head (4e0967a).
Report is 22 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1373      +/-   ##
==========================================
- Coverage   27.39%   27.25%   -0.15%     
==========================================
  Files          56       56              
  Lines        6191     6220      +29     
==========================================
- Hits         1696     1695       -1     
- Misses       4495     4525      +30     
Flag        Coverage Δ
unittests   27.25% <ø> (-0.15%) ⬇️

Flags with carried forward coverage won't be shown.


@wangxiyuan force-pushed the clean_up_init branch 3 times, most recently from 6bd5c62 to e446e63 (June 24, 2025 01:42)
Comment on lines 1790 to 1793

    if self.drafter:
        logger.info("Loading drafter model...")
        if self.use_aux_hidden_state_outputs:
            self.drafter.load_model(self.model)
        else:
            self.drafter.load_model()
        if self.use_aux_hidden_state_outputs:
            self.model.set_aux_hidden_state_layers(
                self.model.get_eagle3_aux_hidden_state_layers())
@Yikun (Collaborator) commented on Jun 24, 2025:
            if self.drafter:
                logger.info("Loading drafter model...")
                if self.use_aux_hidden_state_outputs:
                    self.drafter.load_model(self.model)
                    self.model.set_aux_hidden_state_layers(
                        self.model.get_eagle3_aux_hidden_state_layers())
                else:
                    self.drafter.load_model()

also cc @yuancaoyaoHW
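
This restructuring also means set_aux_hidden_state_layers is called exactly once, right after the drafter that consumes the aux hidden states is loaded, rather than being guarded by a second use_aux_hidden_state_outputs check.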

@@ -1851,7 +1808,7 @@ def load_model(self) -> None:
     def _get_torchair_lazy_compiled_model(self, batch_size: int):
         if batch_size < 0 or batch_size > self.max_num_reqs:
             raise ValueError(
-                f"Bad graph batch size:{batch_size}! max_num_reqs:{self.max_num_reqs}"
+                f"Bad graph batch size:{batch_size}! max_num_seqs:{self.max_num_reqs}"
Suggested change
-                f"Bad graph batch size:{batch_size}! max_num_seqs:{self.max_num_reqs}"
+                f"Bad graph batch size:{batch_size}! max_num_reqs:{self.max_num_reqs}"

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
@Yikun (Collaborator) left a comment

Just a refactor, LGTM.

@wangxiyuan wangxiyuan merged commit ca884ef into vllm-project:main Jun 25, 2025
24 checks passed
zkryakgul added a commit to zkryakgul/vllm-ascend that referenced this pull request Jun 25, 2025
[Misc] Clean up useless code for LLM initialize (vllm-project#1373)