Hi! I noticed from `dtype.R` that `cdouble` is the same as `cfloat64`.

In PyTorch, `cdouble` is the same as `torch.complex128` (two `torch.float64` values). Are the names really supposed to differ from PyTorch's? It confused me a little when I saw it, because I didn't know whether `cfloat64` represented two 32-bit values or 64 bits per component. So I checked PyTorch's documentation, which made it clear (second screenshot).

Also, I don't know if it is intentional, but I think this torch package is missing the `torch.complex32` (`chalf`) implementation.
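For reference, PyTorch's own naming can be checked directly in Python (a minimal sketch, assuming PyTorch is installed), confirming that `cdouble` means complex128, i.e. two 64-bit components rather than two 32-bit halves:

```python
import torch

# torch.cdouble is PyTorch's alias for torch.complex128.
assert torch.cdouble == torch.complex128

# A complex128 element is 16 bytes: two float64 components.
t = torch.tensor(1 + 2j, dtype=torch.cdouble)
assert t.element_size() == 16
assert t.real.dtype == torch.float64

# Likewise, torch.cfloat is the alias for complex64 (two float32 components).
assert torch.cfloat == torch.complex64
```

So in PyTorch the short alias encodes the component type (`cdouble` = complex of doubles), while the long name encodes the total width (`complex128`), which is why a name like `cfloat64` is ambiguous at first glance.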