
Adds template for custom path saving results #755


Merged
NathanHB merged 10 commits into main on May 21, 2025

Conversation

NathanHB (Member) commented:

No description provided.

NathanHB linked an issue on May 20, 2025 that may be closed by this pull request
@HuggingFaceDocBuilderDev (Collaborator) commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

NathanHB requested a review from Copilot on May 20, 2025 at 15:28
NathanHB self-assigned this on May 20, 2025
@Copilot (Contributor) left a comment:


Pull Request Overview

Adds a results_path_template argument across all CLI entrypoints and the evaluation tracker to allow custom result save paths.

  • Introduced results_path_template in each main_*.py function and forwarded it to EvaluationTracker.
  • Extended LightEvalLoggingArgs and EvaluationTracker to accept and store the template.
  • Implemented conditional logic in EvaluationTracker.save_results to format and use the template (a sketch of this logic follows below).
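
A minimal sketch of how that conditional save-path logic might look, with hypothetical helper and argument names (the real implementation lives in src/lighteval/logging/evaluation_tracker.py and may differ):

```python
from pathlib import Path

# Hedged sketch, not the exact PR code: pick the results directory either from
# a user-supplied template or from an assumed default layout.
def resolve_results_dir(output_dir: str, model_name: str, results_path_template: str | None) -> Path:
    # "org/model" -> ("org", "model"); a bare model name gets an empty org.
    org, _, model = model_name.rpartition("/")
    if results_path_template is not None:
        # e.g. "{output_dir}/{org}_{model}" -> "results/test-org_test-model"
        return Path(results_path_template.format(output_dir=output_dir, org=org, model=model))
    # Assumed default layout: <output_dir>/results/<model_name>
    return Path(output_dir) / "results" / model_name
```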

Reviewed Changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| src/lighteval/main_vllm.py | Added results_path_template CLI option and passed to tracker |
| src/lighteval/main_sglang.py | Added results_path_template CLI option and passed to tracker |
| src/lighteval/main_nanotron.py | Propagated results_path_template from config to tracker |
| src/lighteval/main_endpoint.py | Added results_path_template to inference_endpoint and tgi |
| src/lighteval/main_custom.py | Added results_path_template CLI option and passed to tracker |
| src/lighteval/main_accelerate.py | Added results_path_template CLI option and passed to tracker |
| src/lighteval/logging/evaluation_tracker.py | Extended tracker signature, stored template, added custom save logic |
| src/lighteval/config/lighteval_config.py | Extended LightEvalLoggingArgs to include results_path_template |
Comments suppressed due to low confidence (2)

src/lighteval/main_custom.py:77

  • [nitpick] The constant HELP_PANNEL_NAME_2 appears misspelled; it should match HELP_PANEL_NAME_2 used elsewhere.
rich_help_panel=HELP_PANNEL_NAME_2,

src/lighteval/logging/evaluation_tracker.py:266

  • [nitpick] The new custom template logic in save_results isn’t covered by existing tests. Consider adding unit tests to verify that results_path_template is applied correctly and that paths are created as expected (a sketch of such a test follows below).
def save_results(self, date_id: str, results_dict: dict):
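
Following up on that suggestion, here is a minimal pytest sketch of such a test. The EvaluationTracker constructor arguments, the general_config_logger.model_name attribute, and the results_*.json file-name pattern are assumptions for illustration and may not match the actual API:

```python
from pathlib import Path

from lighteval.logging.evaluation_tracker import EvaluationTracker


def test_results_path_template(tmp_path: Path):
    # Assumed constructor usage; adjust to the real EvaluationTracker signature.
    tracker = EvaluationTracker(
        output_dir=str(tmp_path),
        results_path_template="{output_dir}/{org}_{model}",
    )
    # Assumed attribute used by save_results to derive org/model.
    tracker.general_config_logger.model_name = "test-org/test-model"

    tracker.save_results(date_id="2025-05-21T00-00-00", results_dict={"results": {}})

    # With the template above, results should land in <tmp_path>/test-org_test-model.
    expected_dir = tmp_path / "test-org_test-model"
    assert expected_dir.is_dir()
    assert any(expected_dir.glob("results_*.json"))  # assumed file-name pattern
```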

NathanHB and others added 4 commits May 20, 2025 15:32
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
NathanHB added the feature/enhancement label on May 20, 2025
NathanHB requested a review from clefourrier on May 20, 2025 at 16:10
Comment on lines 269 to 274
```python
# Split an "org/model" identifier: a bare model name gets an empty org.
if len(org_model_split) < 2:
    org = ""
    model = org_model_split[0]
else:
    org = org_model_split[0]
    model = org_model_split[1]
```
A repository Member commented:

could we save the org/model name in a more efficient way than this?
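
For illustration only (not necessarily how the PR resolved this), and assuming the original string is available as model_name, the same split can be written more compactly with str.rpartition:

```python
# "org/model" -> ("org", "model"); a bare "model" -> ("", "model").
org, _, model = model_name.rpartition("/")
```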

NathanHB requested a review from Copilot on May 21, 2025 at 08:31
@Copilot (Contributor) left a comment:

Pull Request Overview

This PR adds support for using a custom template to determine where evaluation results are saved. The changes include adding a new parameter "results_path_template" across multiple main modules and updating the EvaluationTracker to honor this template for saving results; the associated tests and documentation have been updated accordingly.

  • Added a new test for the custom results template.
  • Extended CLI options in several main modules to accept "results_path_template".
  • Updated EvaluationTracker logic and documentation to reflect the new functionality.

Reviewed Changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| tests/logging/test_evaluation_tracker.py | Added a new test to verify saving results using a custom path template. |
| src/lighteval/main_vllm.py, main_sglang.py, main_nanotron.py, main_endpoint.py, main_custom.py, main_accelerate.py | Extended CLI parameters to accept the new results_path_template option. |
| src/lighteval/logging/evaluation_tracker.py | Modified save_results to support custom path templates, with fallback to the default. |
| src/lighteval/config/lighteval_config.py | Added results_path_template to the logging configuration. |
| docs/source/saving-and-reading-results.mdx | Updated documentation to describe the new custom path option. |
Comments suppressed due to low confidence (1)

src/lighteval/main_custom.py:71

  • There is a typo in 'HELP_PANNEL_NAME_2'; consider renaming it to 'HELP_PANEL_NAME_2' for consistency with other modules.
str, Option(help="Output directory for evaluation results.", rich_help_panel=HELP_PANNEL_NAME_2)

From the updated documentation:

If you want results to be saved in a custom path, you can set the `results-path-template` option.
This lets you provide a string template for the path. The template needs to contain the following
variables: `output_dir`, `model_name`, `org`. For example:
`{output_dir}/{org}_{model_name}`. The template will be used to build the path to the results file.
@Copilot (Preview) commented on May 21, 2025:

The documentation example uses '{model_name}' while the code expects '{model}'; consider updating the docs to match the code's variable naming.
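
To make the mismatch concrete, a small hypothetical demonstration: if the tracker formats the template with the keys output_dir, org, and model (as this comment suggests), a template written with `{model_name}` fails at format time.

```python
template_from_docs = "{output_dir}/{org}_{model_name}"  # as written in the docs example
template_expected = "{output_dir}/{org}_{model}"        # key name the code reportedly expects

values = {"output_dir": "results", "org": "test-org", "model": "test-model"}

print(template_expected.format(**values))  # results/test-org_test-model
try:
    template_from_docs.format(**values)
except KeyError as err:
    print(f"docs-style template fails, missing key: {err}")  # 'model_name'
```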


NathanHB merged commit b6816a8 into main on May 21, 2025
5 checks passed
hynky1999 pushed a commit that referenced this pull request May 22, 2025
Labels: feature/enhancement (New feature/request)
Successfully merging this pull request may close these issues.

[FT] Custom details and results saving path
3 participants