Conversation

@Mr-Neutr0n
Summary

  • Fix ValueError: betas must be either both floats or both Tensors when using PyTorch 2.x+
  • Change 0** to 0.0** in beta calculations for both the generator and discriminator optimizers in setup_optimizers()
  • The expression 0**net_g_reg_ratio (and 0**net_d_reg_ratio) produces an integer 0, but PyTorch 2.x Adam requires the betas to be both floats (or both Tensors). Using 0.0** ensures a float result.

Details

In gfpgan/models/gfpgan_model.py, the setup_optimizers() method computes optimizer betas as:

betas = (0**net_g_reg_ratio, 0.99**net_g_reg_ratio)

Since 0 is an int and net_g_reg_ratio is also an int (it is set to 1), 0**1 evaluates to the int 0, while 0.99**1 evaluates to the float 0.99. The resulting betas tuple mixes an int and a float, which PyTorch 2.x rejects with:

ValueError: betas must be either both floats or both Tensors
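A minimal reproduction of the mismatch outside the training code (the ratio value of 1 mirrors net_g_reg_ratio; this assumes a PyTorch version recent enough to enforce the both-floats-or-both-Tensors check):

import torch

params = [torch.nn.Parameter(torch.zeros(1))]
ratio = 1
# int 0 mixed with float 0.99 -> ValueError on PyTorch 2.x+
torch.optim.Adam(params, lr=1e-3, betas=(0**ratio, 0.99**ratio))
# with betas=(0.0**ratio, 0.99**ratio) the optimizer constructs without error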

The fix changes the literal from 0 to 0.0 so that the exponentiation always produces a float, for both the generator optimizer (line 164) and the discriminator optimizer (line 179).
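A sketch of the corrected lines (paraphrased from setup_optimizers(); the surrounding code and the discriminator ratio value are assumed to mirror the generator side, not quoted verbatim from the diff):

# generator optimizer (around line 164)
net_g_reg_ratio = 1
betas = (0.0**net_g_reg_ratio, 0.99**net_g_reg_ratio)  # was 0**...; both entries are now floats

# discriminator optimizer (around line 179), same fix
net_d_reg_ratio = 1
betas = (0.0**net_d_reg_ratio, 0.99**net_d_reg_ratio)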

Fixes #638

Test plan

  • Verify the change is minimal and correct: 0.0**x returns a float for any non-negative exponent x
  • Training with PyTorch 2.x+ no longer raises ValueError at optimizer construction (see the check below)
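A standalone check along these lines (a hypothetical script, not part of the repository) covers both bullets:

import torch

for ratio in (1, 1.0, 0.8):  # int and float exponents
    betas = (0.0**ratio, 0.99**ratio)
    assert all(isinstance(b, float) for b in betas)
    # must not raise ValueError at construction on PyTorch 2.x+
    torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=1e-3, betas=betas)
print('betas are floats and Adam accepts them')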

Use 0.0 instead of 0 in beta calculations for both generator and
discriminator optimizers. The expression 0**ratio returns an integer,
which causes a ValueError in PyTorch 2.x+ where Adam requires betas
to be floats. Using 0.0**ratio ensures the result is always a float.

Fixes TencentARC#638
