When using the module-based interface, PL leaves the aggregation to TM, since many metrics are in fact non-trivial to aggregate properly. TM, however, is designed to also work independently of PL, so it only updates its states when you tell it to and computes results from those internal states. It does not know about the dataloader concept at all.
What PL does there internally is cache the metric object to log, and for multiple dataloaders it would still cache the same object (since you don't have different objects per loader). The metric's internal state would therefore be global across all of the loaders, since it is the same object.
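The behavior described above can be sketched in plain Python. This is not the actual torchmetrics API, just a toy metric class with `update`/`compute`/`reset` methods, to illustrate why one shared metric object accumulates state across every dataloader: `update` mutates internal state in place and never sees which loader the batch came from.

```python
class MeanMetric:
    """Toy stand-in for a torchmetrics-style metric with internal state."""

    def __init__(self):
        self.reset()

    def update(self, value):
        # State is mutated in place; the metric never sees a dataloader id.
        self.total += value
        self.count += 1

    def compute(self):
        return self.total / self.count

    def reset(self):
        self.total = 0.0
        self.count = 0


metric = MeanMetric()  # one object, logged for every dataloader

loader_a = [1.0, 2.0, 3.0]
loader_b = [10.0, 20.0, 30.0]

for batch in loader_a:
    metric.update(batch)
for batch in loader_b:
    metric.update(batch)

# The result mixes both loaders: (1 + 2 + 3 + 10 + 20 + 30) / 6 = 11.0
print(metric.compute())
```

To get per-loader results you would need a separate metric object (and thus separate state) per dataloader, which is exactly what the shared-object caching above prevents.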
How are you planning to make it work with multiple dataloaders? The states are reset on epoch end, and epoch end is triggered only once, after all the dataloaders are processed. Can you share more details?
As discussed on Slack with @justusschock, it would be nice to make explicit the behavior of torchmetrics when used with multiple dataloaders.
From @justusschock:
cc @Borda @rohitgr7 @carmocca @edward-io @ananthsub @kamil-kaczmarek @Raalsky @Blaizzy