Add Support for SQ and RQ in Panoptic Quality #2380

Closed · ChristophReich1996 opened this issue Feb 13, 2024 · 5 comments · Fixed by #2381

@ChristophReich1996 (Contributor)
🚀 Feature

First of all, thanks for implementing the Panoptic Quality (PQ) metric. However, the current implementation only computes PQ itself; the Segmentation Quality (SQ) and Recognition Quality (RQ) are not computed. It would be amazing to have an option to also compute these metrics within the PQ implementation.

Motivation

Almost all papers (e.g., U2Seg) report PQ alongside SQ and RQ, so extending the PQ implementation to also compute SQ and RQ would be amazing.

Alternatives

We could also implement SQ and RQ as separate metrics; however, since I'm not aware of any work reporting SQ or RQ without PQ, this would be somewhat inconvenient. It would also add compute overhead.

ChristophReich1996 added the enhancement (New feature or request) label on Feb 13, 2024

Hi! Thanks for your contribution, great first issue!

@SkafteNicki (Member)

Hi @ChristophReich1996, thanks for raising this issue.
Do you have a reference for how SQ and RQ are defined?

@ChristophReich1996 (Contributor, Author)

Hi @SkafteNicki, we could just reuse the already computed IoU sum, TP, FN, and FP to also compute both SQ and RQ. We can just follow the equations from the original paper (Kirillov et al., "Panoptic Segmentation", CVPR 2019):

$$\mathrm{PQ} = \frac{\sum_{(p,g) \in \mathit{TP}} \mathrm{IoU}(p,g)}{|\mathit{TP}| + \tfrac{1}{2}|\mathit{FP}| + \tfrac{1}{2}|\mathit{FN}|} = \underbrace{\frac{\sum_{(p,g) \in \mathit{TP}} \mathrm{IoU}(p,g)}{|\mathit{TP}|}}_{\text{segmentation quality (SQ)}} \times \underbrace{\frac{|\mathit{TP}|}{|\mathit{TP}| + \tfrac{1}{2}|\mathit{FP}| + \tfrac{1}{2}|\mathit{FN}|}}_{\text{recognition quality (RQ)}}$$

We would just need to change this function:

```python
import torch
from torch import Tensor


def _panoptic_quality_compute(
    iou_sum: Tensor,
    true_positives: Tensor,
    false_positives: Tensor,
    false_negatives: Tensor,
) -> Tensor:
    """Compute the final panoptic quality from interim values.

    Args:
        iou_sum: the IoU sum from the update step
        true_positives: the TP value from the update step
        false_positives: the FP value from the update step
        false_negatives: the FN value from the update step

    Returns:
        Panoptic quality as a tensor containing a single scalar.

    """
    # per category calculation
    denominator = (true_positives + 0.5 * false_positives + 0.5 * false_negatives).double()
    panoptic_quality = torch.where(denominator > 0.0, iou_sum / denominator, 0.0)
    # Reduce across categories. TODO: is it useful to have the option of returning per class metrics?
    return torch.mean(panoptic_quality[denominator > 0])
```
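For illustration, here is a minimal sketch of how that function could be extended to also return SQ and RQ; the function name and return convention are my suggestion, not an existing torchmetrics API:

```python
from typing import Tuple

import torch
from torch import Tensor


def _panoptic_quality_compute_all(
    iou_sum: Tensor,
    true_positives: Tensor,
    false_positives: Tensor,
    false_negatives: Tensor,
) -> Tuple[Tensor, Tensor, Tensor]:
    """Sketch: compute PQ, SQ, and RQ from the same interim values."""
    # RQ denominator per category: |TP| + 0.5 * |FP| + 0.5 * |FN|
    denominator = (true_positives + 0.5 * false_positives + 0.5 * false_negatives).double()
    # SQ: mean IoU over matched (TP) segments, per category
    sq = torch.where(true_positives > 0, iou_sum / true_positives.double(), 0.0)
    # RQ: F1-like detection term, per category
    rq = torch.where(denominator > 0.0, true_positives.double() / denominator, 0.0)
    # PQ factorizes as SQ * RQ (equals iou_sum / denominator wherever TP > 0)
    pq = sq * rq
    # Average over categories that actually occur, mirroring the current reduction
    valid = denominator > 0
    return pq[valid].mean(), sq[valid].mean(), rq[valid].mean()
```

Averaging SQ over all categories with a nonzero denominator mirrors the existing reduction; other conventions exist (e.g., averaging SQ only over categories with at least one TP), so this is just one choice.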

Additionally, as was already discussed elsewhere, I think it would be beneficial to compute PQ, SQ, and RQ on a per-class level as well as the global average; see the sketch below.
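As a sketch of what that per-class option could look like, the final mean could simply be made optional (the `average` flag and helper name here are hypothetical):

```python
from typing import Tuple

from torch import Tensor


def _reduce_scores(
    pq: Tensor,
    sq: Tensor,
    rq: Tensor,
    denominator: Tensor,
    average: bool = True,
) -> Tuple[Tensor, Tensor, Tensor]:
    """Sketch: reduce per-category PQ/SQ/RQ to global means, or keep them per class."""
    valid = denominator > 0  # only categories that actually occur in the data
    if average:
        # Current behaviour: one scalar per metric
        return pq[valid].mean(), sq[valid].mean(), rq[valid].mean()
    # Per-class tensors, one entry per known category
    return pq, sq, rq
```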

@SkafteNicki (Member)

@ChristophReich1996 thanks for the info. Yeah, that should be pretty easy to add/modify in the current codebase. The only thing to consider is how we should handle backwards compatibility. I agree that it does not make sense to separate the individual scores out into different metrics.
Also, returning the scores at the per-class level should likewise be easy to add to the metric while we are refactoring it.

@ChristophReich1996 (Contributor, Author)

@SkafteNicki Yeah, backward compatibility could be somewhat cumbersome, but I guess having a flag could preserve it. Let me quickly draft a pull request.
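For illustration, a backward-compatible opt-in on the user-facing metric might look like this; the flag name is hypothetical here, and the actual signature would be whatever the PR ends up with:

```python
from torchmetrics.detection import PanopticQuality

# Default: unchanged behaviour, a single PQ scalar (backward compatible)
metric = PanopticQuality(things={0, 1}, stuffs={6, 7})

# Hypothetical opt-in flag that switches the return to (PQ, SQ, RQ)
metric = PanopticQuality(things={0, 1}, stuffs={6, 7}, return_sq_and_rq=True)
```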
