🚀 Feature
Some metrics consume a large amount of GPU memory in certain situations (deterministic mode, many classes, caching), even though their state is not stored as a list (currently a requirement for the `compute_on_cpu` option).
Enabling the `compute_on_cpu` option for all metrics would make this library more usable when you have many metrics that you would like to compute.
Alternatives
There is no real alternative here other than not using torchmetrics, skipping the metrics you want, or spending more money on compute (GPUs with more memory).
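To illustrate the behaviour being requested, here is a minimal sketch of a confusion-matrix accumulator whose `update()` accepts (possibly GPU-resident) tensors but keeps its `num_classes x num_classes` state on the CPU, so the state never occupies GPU memory. The `CPUConfusionMatrix` class is hypothetical and not part of torchmetrics; it only demonstrates the idea of unconditional CPU offloading for non-list metric state:

```python
import torch


class CPUConfusionMatrix:
    """Toy metric that always accumulates its state on the CPU.

    This is a hypothetical sketch of the requested behaviour, not a
    torchmetrics API: inputs may live on any device, but the state
    tensor is allocated and updated on the CPU only.
    """

    def __init__(self, num_classes: int):
        self.num_classes = num_classes
        # State lives on the CPU regardless of where inputs come from.
        self.mat = torch.zeros(num_classes, num_classes, dtype=torch.long)

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Move inputs off the accelerator before touching the state.
        preds, target = preds.cpu(), target.cpu()
        # Flatten (target, pred) pairs into a single bin index per sample.
        idx = target * self.num_classes + preds
        self.mat += torch.bincount(
            idx, minlength=self.num_classes**2
        ).reshape(self.num_classes, self.num_classes)

    def compute(self) -> torch.Tensor:
        return self.mat


cm = CPUConfusionMatrix(num_classes=3)
cm.update(torch.tensor([0, 1, 2, 2]), torch.tensor([0, 1, 1, 2]))
print(cm.compute())
```

The trade-off is a device-to-host copy on every `update()` call, which is exactly the cost `compute_on_cpu` already accepts for list-state metrics.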
Hi @spott, thanks for raising this issue.
Could you clarify what metrics you are using that are taking up a large amount of memory?
I am not against this proposal, but I would like to better understand which metrics are causing these problems for you.