
Conversation

KWang1998
Collaborator

Description

Update the code to support multi-threaded weight loading for Llama 4.
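
For reviewers, the general idea behind this kind of change is to dispatch per-shard checkpoint reads onto a thread pool instead of loading them serially. Below is a minimal sketch of that pattern, not the PR's actual code; the helpers load_weight_file and load_weights_multithreaded are hypothetical names, and np.load on a .npz archive stands in for the real deserialization:

    import concurrent.futures
    from typing import Dict, List

    import numpy as np


    def load_weight_file(path: str) -> Dict[str, np.ndarray]:
        # Stand-in for the real per-shard deserialization; np.load on a
        # .npz archive is used only to keep the sketch self-contained.
        with np.load(path) as shard:
            return {name: shard[name] for name in shard.files}


    def load_weights_multithreaded(
        weight_files: List[str], max_workers: int = 8
    ) -> Dict[str, np.ndarray]:
        # Read checkpoint shards concurrently and merge them into one dict.
        merged: Dict[str, np.ndarray] = {}
        with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
            for shard in pool.map(load_weight_file, weight_files):
                merged.update(shard)
        return merged

Threads rather than processes are a reasonable fit for this kind of loading because the work is dominated by file I/O, which releases the GIL.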

Tests

Please describe how you tested this change, and include any instructions and/or
commands to reproduce.

Checklist

Before submitting this PR, please make sure:

  • I have performed a self-review of my code.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have made or will make corresponding changes to any relevant documentation.

KWang1998 requested a review from jrplatin on August 12, 2025 at 01:28

@jrplatin
Collaborator

Can you include the time reduction / testing you performed (e.g. running MMLU / CI)?

@KWang1998
Collaborator Author

Can you include the time reduction / testing you performed (e.g. running MMLU / CI)?

I tested locally with:

    NEW_MODEL_DESIGN=True TPU_BACKEND_TYPE=jax python examples/offline_inference.py --task=generate --model=meta-llama/Llama-4-Scout-17B-16E-Instruct --max-model-len=1024 --tensor-parallel-size 8 --max-num-batched-tokens 1024 --max-num-seqs=1 --hf-config=meta-llama/Llama-4-Scout-17B-16E-Instruct --hf_overrides '{"architectures": ["Llama4ForCausalLM"]}'

With a single thread, the time was 66.87 seconds; with multiple threads, it was 84.07 seconds.
