
Fix replacement option in loss and gradnorm downsampling #529

Merged: 1 commit into main from fix/MaxiBoether/norepllossgradnorm on Jun 19, 2024

Conversation

MaxiBoether (Contributor):

There was an unused replacement option that defaulted to True. Because of it, the same key could be chosen multiple times, which crashed the storage, since its current implementation cannot deal with duplicate keys. This should not have happened in the first place.
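For context, here is a minimal sketch (not the project's actual code) of why the default mattered, assuming the downsampler draws sample indices with torch.multinomial: with replacement=True, the same index, and therefore the same key, can be drawn more than once.

import torch

torch.manual_seed(0)
scores = torch.rand(10)  # stand-in for per-sample losses or gradient norms

# replacement=True may draw the same index twice -> duplicate keys downstream
with_replacement = torch.multinomial(scores, num_samples=8, replacement=True)

# replacement=False always yields distinct indices
without_replacement = torch.multinomial(scores, num_samples=8, replacement=False)

print(with_replacement.tolist())     # may contain repeated indices
print(without_replacement.tolist())  # eight distinct indices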

MaxiBoether requested a review from XianzheMa on June 19, 2024, 20:00

codecov bot commented Jun 19, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.43%. Comparing base (c52a3d6) to head (5f82d0d).

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #529      +/-   ##
==========================================
- Coverage   82.43%   82.43%   -0.01%     
==========================================
  Files         214      214              
  Lines       10039    10038       -1     
==========================================
- Hits         8276     8275       -1     
  Misses       1763     1763              


@@ -50,8 +50,6 @@ def __init__(
         assert "downsampling_ratio" in params_from_selector
         self.downsampling_ratio = params_from_selector["downsampling_ratio"]
 
-        self.replacement = params_from_selector.get("replacement", True)
-
XianzheMa (Collaborator) commented on the diff:

sorry, I noticed this issue a month ago and then forgot to correct it :( otherwise everything has gone smoothly

XianzheMa (Collaborator):

let's just merge this; I don't think it will cause any problems

XianzheMa merged commit e6541d3 into main on Jun 19, 2024
20 checks passed
XianzheMa deleted the fix/MaxiBoether/norepllossgradnorm branch on June 19, 2024, 20:26

MaxiBoether added a commit that referenced this pull request Jun 19, 2024
When debugging #529, I realized this is a silent problem (we might access
memory of a vector that is not allocated), so we should just explicitly
fail.
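
As a hedged illustration of the fail-fast behavior this commit describes (the actual storage component is not shown here, and all names below are hypothetical), the idea is to reject duplicate keys up front rather than let a later lookup silently misbehave:

def fetch_samples(keys: list[int], store: dict[int, bytes]) -> list[bytes]:
    # Hypothetical sketch, not the project's API: fail loudly on duplicates
    # instead of risking out-of-bounds access further down the stack.
    if len(set(keys)) != len(keys):
        raise ValueError("storage received duplicate keys, which it cannot handle")
    return [store[key] for key in keys]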