[doc] Doc and example to run via config (#11) #22
Conversation
Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
Gemini Code Assist: Summary of Changes
Hello @arekay-nv, I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the project's documentation by introducing a new guide for benchmarking Hugging Face models. It provides clear, step-by-step instructions for setting up various inference servers and demonstrates how to leverage configuration files for efficient and reproducible benchmarking workflows, complete with new example templates.
Code Review
This pull request adds valuable documentation and example configurations for running benchmarks. The documentation is generally clear, but it contains a few typos, including one in a command that would cause it to fail. The new YAML configuration files are good examples, but they contain some inconsistencies and incorrect benchmark-duration values that could be misleading. I've provided specific suggestions to address these points and improve the quality of the documentation and examples.
Review comments (resolved, now outdated) on:
src/inference_endpoint/config/templates/offline_llama3_8b_cnn.yaml
src/inference_endpoint/config/templates/online_llama2_70b_cnn.yaml (two comments)
Adds an example doc on how to run benchmarks for HF models via the command line, using config files as well as pre-defined templates.
Resolves #11
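To give a feel for the workflow the new doc describes, below is a minimal sketch of what one of the pre-defined templates might contain and how it could be passed on the command line. The field names, values, and CLI entry point shown here are assumptions for illustration only, not the repository's actual schema; the templates under src/inference_endpoint/config/templates/ are the authoritative reference.

```yaml
# Hypothetical sketch in the spirit of offline_llama3_8b_cnn.yaml.
# All keys below are illustrative assumptions, not the real template schema.
#
# Hypothetical invocation (the actual entry point and flag may differ):
#   python -m inference_endpoint.run \
#     --config src/inference_endpoint/config/templates/offline_llama3_8b_cnn.yaml
model:
  name: meta-llama/Meta-Llama-3-8B-Instruct   # Hugging Face model ID to benchmark
dataset:
  name: cnn_dailymail                          # summarization dataset suggested by the template name
benchmark:
  scenario: offline                            # offline vs. online serving scenario
  min_duration_s: 600                          # benchmark duration; the review above flagged
                                               # incorrect duration values in the real templates
endpoint:
  url: http://localhost:8000/v1                # inference server endpoint to target
```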
Type of change
Related issues
Testing
Checklist