Fix optimizer beta type error for PyTorch 2.x+ #640
Summary
- Fixes `ValueError: betas must be either both floats or both Tensors` when using PyTorch 2.x+
- Changes `0**` to `0.0**` in the beta calculations for both the generator and discriminator optimizers in `setup_optimizers()`
- `0**net_g_reg_ratio` (and `0**net_d_reg_ratio`) produces an integer `0`, but PyTorch 2.x Adam requires betas to be floats; using `0.0**` ensures a float result

Details
In `gfpgan/models/gfpgan_model.py`, the `setup_optimizers()` method computes the optimizer betas as:
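```python
# generator optimizer (line 164); the discriminator optimizer (line 179)
# uses net_d_reg_ratio in the same way
betas=(0**net_g_reg_ratio, 0.99**net_g_reg_ratio)
```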
Since `0` is an `int` and `net_g_reg_ratio` is also an `int` (value `1`), `0**1` evaluates to the `int` `0`, while `0.99**1` evaluates to the `float` `0.99`. This produces a mixed-type tuple `(int, float)`, which PyTorch 2.x rejects with:
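```
ValueError: betas must be either both floats or both Tensors
```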
The fix changes the literal from `0` to `0.0` so that the exponentiation always produces a float, for both the generator optimizer (line 164) and the discriminator optimizer (line 179).

Fixes #638
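A sketch of the change at each call site, reconstructed from the description above (the discriminator site uses `net_d_reg_ratio`; the surrounding call context may differ):

```diff
- betas=(0**net_g_reg_ratio, 0.99**net_g_reg_ratio)
+ betas=(0.0**net_g_reg_ratio, 0.99**net_g_reg_ratio)
```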
Test plan
- Verified `0.0**x` returns `float` for any numeric `x`
- No `ValueError` at optimizer construction under PyTorch 2.x
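A minimal check along these lines reproduces the type difference and verifies that Adam accepts the fixed betas (the parameter, learning rate, and `net_g_reg_ratio` value here are illustrative, not taken from the PR):

```python
import torch

net_g_reg_ratio = 1  # illustrative; GFPGAN derives this from its training config

# Old expression: exponentiating the int literal 0 yields an int,
# so the tuple mixes int and float.
old_betas = (0**net_g_reg_ratio, 0.99**net_g_reg_ratio)
print([type(b).__name__ for b in old_betas])  # ['int', 'float']

# Fixed expression: both entries are floats.
new_betas = (0.0**net_g_reg_ratio, 0.99**net_g_reg_ratio)
print([type(b).__name__ for b in new_betas])  # ['float', 'float']

# Under PyTorch 2.x the fixed tuple constructs without a ValueError.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=2e-3, betas=new_betas)
```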