Conversation

@pmeier (Collaborator) commented Oct 12, 2022

Follow-up to and partial revert of #6741.

Although it is ok to have "random" values in the inputs definition, we don't want actual random values inside our tests. The former communicates that the exact values don't really matter, while avoiding the latter ensures our tests aren't flaky and that CI behavior can be reproduced locally.

Thus, this PR reverts the seed changes of #6741 and adds a fixture that sets the RNG seed before every test. If #6750 is resolved, we can probably also put back the "random" sizes for the gaussian_blur_* kernels.
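For illustration, a minimal sketch of what such an autouse fixture could look like (the name fix_rng_seed is referenced later in this thread; the exact seed value and fixture body here are assumptions, not the PR's actual code):

```python
# conftest.py (sketch)
import random

import pytest
import torch


@pytest.fixture(autouse=True)
def fix_rng_seed():
    # Re-seed the RNGs before every test so that randomly generated
    # tensors are identical across runs and reproducible on CI.
    torch.manual_seed(0)
    random.seed(0)
```

Because the fixture is marked autouse, it runs before every test in the suite without each test having to request it.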

@datumbox (Contributor) commented Oct 12, 2022

I don't mind reverting my changes.

The reason I didn't follow this approach is that the KERNEL_INFOS dictionary is populated long before the tests actually run, and thus before the seed is set. As a result, the random calls that determine the sizes yield different results every time. Consequently, the actual data tensors will differ from run to run, so setting the seed has no effect. Am I missing something?
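For illustration, a simplified, hypothetical sketch of the timing problem (the real KERNEL_INFOS in the test suite is structured differently):

```python
import torch

# This module-level code runs when pytest imports the test module,
# i.e. during collection and before any autouse fixture sets a seed.
KERNEL_INFOS = {
    "gaussian_blur_image_tensor": {
        # Drawn at import time: without a prior manual_seed, this
        # size is different on every run of the test suite.
        "size": torch.randint(5, 10, (2,)).tolist(),
    },
}


def test_gaussian_blur():
    # By the time a seed-setting fixture runs for this test, the
    # random size above has already been drawn.
    assert "gaussian_blur_image_tensor" in KERNEL_INFOS
```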

@pmeier (Collaborator, Author) commented Oct 12, 2022

> As a result, the random calls that determine the sizes yield different results every time.

True. And I need to look into how we can fix that. This is why I put back your changes to the sample inputs definition in 7d25068.

> Consequently, the actual data tensors will differ from run to run, so setting the seed has no effect.

Yes, they will be different in shape. This is not desired, and I will look into it as explained above. However, your PR also does not fix this: the seed is only set inside the test, i.e. at the same point in time that the fixture in this PR sets it.

@datumbox (Contributor) commented Oct 12, 2022

> However, your PR also does not fix this: the seed is only set inside the test, i.e. at the same point in time that the fixture in this PR sets it.

My original PR circumvents the problem by avoiding random calls for the specific problematic kernel during the KERNEL_INFOS initialisation. So right now, on the main branch, the random data is deterministic (I've checked).

I'm not sure I see how this PR fixes the issue as you say. Does fix_rng_seed() execute before KERNEL_INFOS is populated?

Edit: Never mind, I just realised you don't revert all the changes; you keep the non-randomness change in the KERNEL_INFOS generation. LGTM.
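For illustration, the kind of change being kept, i.e. hard-coding the sample input sizes instead of drawing them at import time (a hypothetical before/after, not the exact diff from the PR):

```python
import torch

# Before: drawn at import time, so different on every run.
size = torch.randint(5, 10, (2,)).tolist()

# After: hard-coded, so the sample inputs are identical across runs
# regardless of when the RNG seed is set.
size = [7, 9]
```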

@pmeier pmeier merged commit 7d36d26 into pytorch:main Oct 12, 2022
@pmeier pmeier deleted the seed-transform-tests branch October 12, 2022 10:03
facebook-github-bot pushed a commit that referenced this pull request Oct 17, 2022
Summary:
* Revert "Add seeds on Kernel Info and reduce randomness for Gaussian Blur (#6741)"

This reverts commit 6e72f2f.

* add fixture to fix the RNG seed

* re-add changes to gaussian_blur_* sample input shapes

Reviewed By: NicolasHug

Differential Revision: D40427476

fbshipit-source-id: b0aae476915f2d4e58146edf8e9339338bd78ad5

Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>