
[Feat] Q3VL: Exclude calibration dataset from testing#271

Open
Victor49152 wants to merge 3 commits into mlcommons:main from Victor49152:feat/q3vl_calibration_dataset

Conversation

Collaborator

@Victor49152 Victor49152 commented Apr 7, 2026

What does this PR do?

Exclude Calibration dataset from perf/acc testing as required for v6.1 round.

Type of change

  • New feature

Related issues

Checklist

  • Code follows project style
  • Pre-commit hooks pass

@Victor49152 Victor49152 requested a review from a team April 7, 2026 21:48
@github-actions

github-actions bot commented Apr 7, 2026

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a list of calibration sample indices to be excluded during the Shopify product catalogue dataset generation. The review feedback suggests using a set for the index list to optimize lookup performance and recommends using unique identifiers instead of absolute indices to ensure robustness across different dataset splits.

Comment on lines +173 to +174
if i in CALIBRATION_SAMPLE_INDEX:
continue
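The reviewer's first suggestion (using a set rather than a list for the exclusion indices) can be sketched as below. This is an illustrative sketch, not the PR's actual code; the constant name `CALIBRATION_SAMPLE_INDEX` is taken from the snippet, while `filter_samples` and the sample values are assumptions:

```python
# A frozenset gives O(1) membership checks for the exclusion test,
# versus O(n) for a list. Indices here are illustrative only.
CALIBRATION_SAMPLE_INDEX = frozenset({3, 17, 42})

def filter_samples(samples):
    """Yield (index, sample) pairs, skipping calibration indices."""
    for i, sample in enumerate(samples):
        if i in CALIBRATION_SAMPLE_INDEX:  # constant-time lookup
            continue
        yield i, sample

kept = list(filter_samples([f"s{n}" for n in range(50)]))
```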


medium

Filtering by absolute indices is fragile because it depends on the specific dataset split and ordering. If generate is called with a different split (e.g., ['test'] instead of the default ['train', 'test']), the index i will no longer correspond to the same samples, and the calibration set may not be correctly excluded. Consider using a unique identifier for filtering if available, or adding a check to ensure the exclusion only applies to the expected split.
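The reviewer's alternative (filtering by a stable unique identifier instead of a positional index) could look like the sketch below. The `id` field, `CALIBRATION_SAMPLE_IDS` constant, and sample records are all hypothetical, assuming each dataset record carries some unique key:

```python
# Hypothetical sketch: exclude by a stable identifier so the filter
# survives a different split or ordering of the dataset.
CALIBRATION_SAMPLE_IDS = frozenset({"prod-003", "prod-017"})

def generate(samples):
    """Yield samples whose identifier is not in the calibration set."""
    for sample in samples:
        if sample["id"] in CALIBRATION_SAMPLE_IDS:
            continue
        yield sample

data = [{"id": f"prod-{n:03d}"} for n in range(20)]
kept = list(generate(data))
```

Unlike index-based filtering, this keeps excluding the same 20 calibration samples even if the caller changes the `split` argument.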

Collaborator Author


The dataset is fixed for this task, and the predefined dataset will always be pulled in the same way; these splits are not really configurable from any configs. So this should be OK for now.

Collaborator


Who is the caller of ShopifyProductCatalogue.generate()?

Looking at the code alone, split is a parameter of ShopifyProductCatalogue.generate(), which means it could change if the caller chooses to pass an argument for split specifically. Taking this into consideration, a design that might make more sense is to have calibration_sample_indices as a parameter of ShopifyProductCatalogue.generate() as well, with CALIBRATION_SAMPLE_INDEX (or DEFAULT_CALIBRATION_SAMPLE_INDEX) as its default value. When defining the v6.1 round benchmark settings, you could then state in the readme or the yaml that neither split nor calibration_sample_indices should be set to anything other than None or the default values.
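The proposed design could be sketched as follows. The class and parameter names mirror the discussion, but the body, the constant values, and the sample data are assumptions for illustration:

```python
# Hypothetical sketch: calibration indices as a generate() parameter
# defaulting to a module-level constant, per the review suggestion.
DEFAULT_CALIBRATION_SAMPLE_INDEX = frozenset({1, 4})

class ShopifyProductCatalogue:
    def generate(self, samples, split=("train", "test"),
                 calibration_sample_indices=DEFAULT_CALIBRATION_SAMPLE_INDEX):
        # Benchmark settings would document that split and
        # calibration_sample_indices must stay at their defaults.
        return [s for i, s in enumerate(samples)
                if i not in calibration_sample_indices]

out = ShopifyProductCatalogue().generate(list(range(6)))
```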

Collaborator Author


The funny part is that, at least in the current setup, when you call a predefined dataset, the individual kwargs of this subclass are not configurable by the user. That means unless the user hacks the source code, those kwargs for generate() always keep their default values. That's why I didn't bother.

Your idea is absolutely a better design in general, though.

@Victor49152 Victor49152 force-pushed the feat/q3vl_calibration_dataset branch from fef312e to 7026abb Compare April 7, 2026 22:01
@Victor49152 Victor49152 self-assigned this Apr 7, 2026
@Victor49152 Victor49152 added the type: feature New feature or capability label Apr 7, 2026
@arekay-nv
Collaborator

@Victor49152 can you add some context on why this is needed? Is the calibration dataset determined by the working group? We might want to rethink the design as this could be useful for other datasets as well.

@nvzhihanj
Collaborator

> @Victor49152 can you add some context on why this is needed? Is the calibration dataset determined by the working group? We might want to rethink the design as this could be useful for other datasets as well.

Usually the convention is that we don't need to exclude the calibration dataset from the inference dataset; we simply use ~10% of it to generate the quantization dataset.

@Victor49152
Collaborator Author

> @Victor49152 can you add some context on why this is needed? Is the calibration dataset determined by the working group? We might want to rethink the design as this could be useful for other datasets as well.

Based on our experience, model accuracy is not very sensitive to the choice of calibration samples; besides, it's only 20/48289, which is a very small portion.

However, we received an email from Anton Lokhmotov two weeks ago stating that it's a convention to set the calibration dataset aside from the validation dataset for fairness. Since it's better to follow the same convention for the VLM, I'm trying to address this ask.


Labels

type: feature New feature or capability


4 participants