
[doc] Doc and example to run via config (#11) #22

Merged
arekay-nv merged 4 commits into main from arekay/add_example_docs
Nov 12, 2025

Conversation

arekay-nv (Collaborator) commented Nov 10, 2025

Adds an example doc on how to run benchmarks for HF models via the command line, using config files as well as pre-defined templates.
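For context, a config-driven run of this kind is typically described by a small YAML file. The fragment below is purely illustrative; the field names are assumptions of mine, not the schema defined by this PR's templates:

```yaml
# Illustrative sketch only; see the templates added in this PR for the real schema.
model: meta-llama/Llama-3.1-8B-Instruct   # HF model to benchmark
mode: offline                             # offline vs. online benchmarking
dataset:
  name: cnn_dailymail                     # dataset used by the new templates
  split: validation
endpoint:
  url: http://localhost:8000/v1           # OpenAI-compatible server (e.g. vLLM)
```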

Resolve #11

Type of change

  • Bug fix
  • New feature
  • Documentation update
  • Refactor/cleanup

Related issues

Testing

  • Tests added/updated
  • All tests pass locally
  • Manual testing completed

Checklist

  • Code follows project style
  • Pre-commit hooks pass
  • Documentation updated (if needed)

Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
@arekay-nv arekay-nv requested a review from a team as a code owner November 10, 2025 21:37

github-actions bot commented Nov 10, 2025

MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅

@github-actions github-actions bot requested a review from nvzhihanj November 10, 2025 21:37
@gemini-code-assist

Summary of Changes

Hello @arekay-nv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's documentation by introducing a new guide for benchmarking Hugging Face models. It provides clear, step-by-step instructions for setting up various inference servers and demonstrates how to leverage configuration files for efficient and reproducible benchmarking workflows, complete with new example templates.

Highlights

  • New Benchmarking Documentation: A new README.md has been added under examples/02_ServerBenchmarking to provide comprehensive instructions on benchmarking Hugging Face models using inference endpoints.
  • Server Launch Instructions: The documentation includes detailed steps for launching vLLM and SGLang inference servers using Docker, as well as an example for vLLM with Enroot.
  • Config File Benchmarking: Guidance is now provided on how to perform benchmarking using configuration files, including instructions for preparing datasets like CNN/DailyMail.
  • New Configuration Templates: Two new YAML configuration templates have been introduced: one for offline benchmarking of Llama-3.1-8B-Instruct and another for online benchmarking of Llama-2-70b-chat-hf, both utilizing the CNN/DailyMail dataset.
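As a rough illustration of the dataset-preparation step mentioned above, here is a minimal sketch of shaping CNN/DailyMail-style article texts into summarization prompts for a benchmark run. The prompt template and record fields are assumptions for illustration, not the schema the example configs actually use:

```python
import json

# Hypothetical sketch: turn raw article strings into JSON-serializable
# benchmark request records. A real run would pull articles from the
# CNN/DailyMail dataset rather than the stand-in data below.

PROMPT_TEMPLATE = "Summarize the following article:\n\n{article}\n\nSummary:"

def build_requests(articles, max_chars=4000):
    """Build one prompt record per article, truncating long inputs."""
    return [
        {"id": idx, "prompt": PROMPT_TEMPLATE.format(article=text[:max_chars])}
        for idx, text in enumerate(articles)
    ]

# Stand-in data, not real dataset content.
sample = ["(CNN) -- A short stand-in article body."]
print(json.dumps(build_requests(sample)[0]))
```

Each record could then be written out as JSONL and replayed against the serving endpoint, which is the general shape of an offline benchmark flow.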


gemini-code-assist bot left a comment


Code Review

This pull request adds valuable documentation and example configurations for running benchmarks. The documentation is generally clear, but I've found a few typos and a command that would fail due to a typo. The new YAML configuration files are great examples, but they contain some inconsistencies and incorrect values for benchmark durations that could be misleading. I've provided specific suggestions to address these points and improve the quality of the documentation and examples.

nvzhihanj changed the title from "[doc] Doc and example to run via config" to "[doc] Doc and example to run via config (#11)" on Nov 12, 2025
@arekay-nv arekay-nv merged commit 9942926 into main Nov 12, 2025
4 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Nov 12, 2025
@arekay-nv arekay-nv deleted the arekay/add_example_docs branch November 12, 2025 23:51


Development

Successfully merging this pull request may close these issues.

Documentation: Add example of how to ramp up vLLM/SGLang endpoints for benchmarking
