Auto GPU detection + Updated run args generation for 5.0 #433

Draft: wants to merge 21 commits into base: dev

Conversation

anandhu-eng
Contributor

No description provided.

@anandhu-eng anandhu-eng requested a review from a team as a code owner May 20, 2025 07:10
Contributor

github-actions bot commented May 20, 2025

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅

@@ -518,12 +519,22 @@ def preprocess(i):
     if dla_inference_streams:
         run_config += f" --dla_inference_streams={dla_inference_streams}"

-    gpu_batch_size = env.get('MLC_MLPERF_NVIDIA_HARNESS_GPU_BATCH_SIZE')
+    gpu_batch_size = state.get('batch_size', env.get(
Collaborator

Guard this for v5.0?

Contributor Author
Changed in commit 796c363.

For 5.0, the correct format is --dla/gpu_batch_size=model:batch_size. Since users could still provide the batch size manually, I have kept MLC_MLPERF_NVIDIA_HARNESS_GPU_BATCH_SIZE as an alternative source, falling back to it when state has no batch size.
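To illustrate the lookup order described above, here is a minimal sketch of how the auto-detected batch size in state could take precedence over the manual env var, and how the v5.0 model:batch_size flag format might be assembled. The helper name build_gpu_batch_size_arg and the dict shape of state['batch_size'] are assumptions for illustration; only the state/env keys come from the diff.

```python
def build_gpu_batch_size_arg(state, env):
    """Sketch: build a --gpu_batch_size flag in the v5.0 model:batch_size format."""
    # Prefer the auto-detected value in `state`; fall back to the
    # user-supplied env var, matching the lookup order in the PR diff.
    gpu_batch_size = state.get(
        'batch_size',
        env.get('MLC_MLPERF_NVIDIA_HARNESS_GPU_BATCH_SIZE'))
    if not gpu_batch_size:
        return ""
    if isinstance(gpu_batch_size, dict):
        # Assumed shape: {"resnet50": 64} -> "resnet50:64"
        formatted = ",".join(f"{m}:{b}" for m, b in gpu_batch_size.items())
    else:
        # Manual env-var values are plain numbers/strings.
        formatted = str(gpu_batch_size)
    return f" --gpu_batch_size={formatted}"


print(build_gpu_batch_size_arg({'batch_size': {'resnet50': 64}}, {}))
# -> " --gpu_batch_size=resnet50:64"
```

With an empty state, the same call falls back to the env var, e.g. build_gpu_batch_size_arg({}, {'MLC_MLPERF_NVIDIA_HARNESS_GPU_BATCH_SIZE': 32}) yields " --gpu_batch_size=32".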

@anandhu-eng anandhu-eng marked this pull request as draft May 20, 2025 11:59