Hi @stijnh, I assigned this to you to check if this is a bug or a feature :)
In practice, if you only want to use the `TunablePrecision` types to test the performance of various float types, and are not interested in measuring the loss of accuracy with the `AccuracyObserver`, then verification no longer works.
Simple test: take the `accuracy.py` example in `examples/cuda` and remove the observers, and you get this error:

```
TypeError: Element 3 of the expected results list is not of the same dtype as the kernel output: float64 != float32.
```
If the observers are passed to `tune_kernel`, then everything works.
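As a rough illustration (this is not Kernel Tuner's actual verification code, just a sketch of the dtype check it appears to perform), the mismatch can be reproduced with plain NumPy: the reference answer is computed in `float64`, while a reduced-precision kernel produces `float32` output, so a strict dtype comparison between the two fails.

```python
import numpy as np

# Reference answer kept at full precision (as in the host-side computation).
expected = np.zeros(4, dtype=np.float64)

# Kernel output at a tunable, reduced precision.
output = np.zeros(4, dtype=np.float32)

def dtypes_match(expected, output):
    """Strict dtype check, similar in spirit to the one that raises the TypeError."""
    return expected.dtype == output.dtype

print(dtypes_match(expected, output))  # False: float64 != float32
```

When the `AccuracyObserver` is in use, it presumably takes care of casting or comparing at matching precision, which would explain why the error only appears once the observers are removed.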