This option, added in #2198 to create equivalence with sklearn, also supports passing through a nan if I am not mistaken, based on the implementation of torchmetrics.utilities.compute._safe_divide.
I wanted to clarify that nan can indeed be passed through, ask whether any code changes are required to support this officially, and suggest this could be added to the docs, which currently state zero_division [float] – Should be 0 or 1., e.g. here
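For context, here is a minimal sketch of the division pattern being referenced. This is an illustrative reimplementation, not the actual torchmetrics source, and the NaN pass-through behaviour it shows is my reading of why nan appears to work:

```python
import torch


def safe_divide_sketch(num: torch.Tensor, denom: torch.Tensor, zero_division: float = 0.0) -> torch.Tensor:
    """Divide num by denom, filling positions where denom == 0 with ``zero_division``.

    Illustrative only: if ``zero_division`` is ``float("nan")``, those positions
    simply become NaN, so a NaN value passes straight through to the metric result.
    """
    num = num.float()
    denom = denom.float()
    fill = torch.full_like(num, zero_division)
    return torch.where(denom == 0, fill, num / denom)


tp = torch.tensor([3.0, 0.0])
support = torch.tensor([4.0, 0.0])  # second class has no support, so its score is undefined
print(safe_divide_sketch(tp, support, zero_division=float("nan")))  # first entry 0.75, second NaN
```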
Hi @robmarkcole, thanks for raising this issue and sorry for the slow reply from my side.
So currently nan is not a supported value; only 0 and 1 are supported. To my understanding, the nan option in sklearn means that such values are ignored when calculating the average. I can try to look into what it will take to support nan as a value in torchmetrics.
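For reference, a small sketch of the sklearn behaviour being described, assuming scikit-learn >= 1.3 (where zero_division=np.nan was introduced): classes whose score is undefined are excluded from the macro average rather than contributing a 0 or 1.

```python
import numpy as np
from sklearn.metrics import precision_score

y_true = [0, 0, 1, 1]  # class 2 never appears and is never predicted
y_pred = [0, 0, 1, 0]

# With zero_division=np.nan, the undefined class is dropped from the macro mean.
print(precision_score(y_true, y_pred, labels=[0, 1, 2], average="macro", zero_division=np.nan))

# With zero_division=0.0, the undefined class contributes a 0 instead, pulling the mean down.
print(precision_score(y_true, y_pred, labels=[0, 1, 2], average="macro", zero_division=0.0))
```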
I'm using v1.4.0post0 and can definitely pass NaN via zero_division to the metrics affected by #2198. They work as expected (I think) when average is either micro or none, but return NaN in the case of macro (bug), so I'm computing the macro value manually from the classwise results. Other metrics such as accuracy and specificity also accept zero_division because of inheritance, but complain when it is set to NaN.
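A minimal sketch of that manual macro workaround, assuming a metric touched by #2198 (MulticlassPrecision here) returns per-class NaNs with average=None and zero_division=float("nan") as described above; the data is made up for illustration:

```python
import torch
from torchmetrics.classification import MulticlassPrecision

preds = torch.tensor([0, 0, 1, 1])
target = torch.tensor([0, 1, 1, 1])  # class 2 never appears in preds or target

# Per-class scores, letting the undefined class come out as NaN
# (assumed to work per the discussion above).
per_class = MulticlassPrecision(num_classes=3, average=None, zero_division=float("nan"))(preds, target)

# Manual macro average that ignores the NaN entries instead of propagating them.
macro = torch.nanmean(per_class)
print(per_class, macro)
```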
I think the ideal situation would be for absent and ignored classes to be assigned a NaN score or dropped, but this may not be straightforward to implement because a class might be absent in one sample of a given batch but not in another, so we probably can't rely on batchwise class supports and have to work sample by sample. You can get StatScores to output samplewise NaNs for both absent and ignored classes, but reducing this to a batchwise metric is a bit too much for me right now.