
Unpredictable class order when panoptic_quality(..., return_per_class=True) #2547

Closed
i-aki-y opened this issue May 19, 2024 · 0 comments · Fixed by #2548
Labels
bug / fix Something isn't working help wanted Extra attention is needed v1.4.x


i-aki-y commented May 19, 2024

🐛 Bug

PanopticQuality can return per-class scores when the return_per_class=True argument is given,
but the order of the output classes is unpredictable.

To Reproduce

import torch
from torchmetrics.functional.detection import panoptic_quality

def pq_demo(cats):
    a = [cats[0], 0]
    b = [cats[1], 0]
    c = [cats[2], 0]

    targs = torch.tensor([[a, a, b, b, c, c]])
    preds = torch.tensor([[a, a, b, b, b, c]])
    score = panoptic_quality(preds, targs, things=[], stuffs=cats, return_per_class=True)
    print(score)

pq_demo([0, 2, 1])
pq_demo([0, 3, 2])
pq_demo([0, 10, 2])

# OUTPUT:
# tensor([[1.0000, 0.0000, 0.6667]], dtype=torch.float64)
# tensor([[1.0000, 0.0000, 0.6667]], dtype=torch.float64)
# tensor([[1.0000, 0.6667, 0.0000]], dtype=torch.float64)

The three examples should return identical scores, but the class order changes between calls.

(Note: I am not sure whether this reproduces in other environments; see the environment details below.)

Internally, the class order is determined by cat_id_to_continuous_id, whose values are built here:

def _get_category_id_to_continuous_id(things: Set[int], stuffs: Set[int]) -> Dict[int, int]:
    """Convert original IDs to continuous IDs.

    Args:
        things: All unique IDs for things classes.
        stuffs: All unique IDs for stuff classes.

    Returns:
        A mapping from the original category IDs to continuous IDs (i.e., 0, 1, 2, ...).

    """
    # things metrics are stored with a continuous id in [0, len(things)[,
    thing_id_to_continuous_id = {thing_id: idx for idx, thing_id in enumerate(things)}
    # stuff metrics are stored with a continuous id in [len(things), len(things) + len(stuffs)[
    stuff_id_to_continuous_id = {stuff_id: idx + len(things) for idx, stuff_id in enumerate(stuffs)}
    cat_id_to_continuous_id = {}
    cat_id_to_continuous_id.update(thing_id_to_continuous_id)
    cat_id_to_continuous_id.update(stuff_id_to_continuous_id)
    return cat_id_to_continuous_id

There, things and stuffs are iterated with enumerate(things) and enumerate(stuffs), but both are instances of Set, which is an "unordered collection". The iteration order is therefore unspecified, so cat_id_to_continuous_id ends up with an unpredictable order as well.
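The problem can be seen with plain Python sets, independent of torchmetrics (a minimal illustration; the exact orders printed depend on the interpreter and version, which is precisely the point):

```python
# Two sets with identical elements compare equal, but the language spec does
# not fix their iteration order, so a mapping built via enumerate() over a
# set is not reproducible by contract.
stuffs_a = set([0, 2, 10])
stuffs_b = set([0, 10, 2])

assert stuffs_a == stuffs_b      # same elements ...
print(list(stuffs_a))            # ... but iteration order is implementation-defined
print(list(stuffs_b))

# The continuous-ID mapping inherits whatever order the set happens to yield:
mapping = {cat_id: idx for idx, cat_id in enumerate(stuffs_a)}
print(mapping)
```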

Expected behavior

The output classes should be ordered things first, then stuffs, and numerically sorted within each group.
E.g. with things=[4, 1] and stuffs=[3, 2], the output classes would be ordered [1, 4, 2, 3].

I guessed this expected behavior from the implementation, because I could not find a documented description of the output class order.

Environment

  • TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source): build the master branch
  • Python & PyTorch Version (e.g., 1.0): '1.5.0dev'
  • Any other relevant information such as OS (e.g., Linux): Linux (Ubuntu 22.04)

Additional context

@i-aki-y i-aki-y added bug / fix Something isn't working help wanted Extra attention is needed labels May 19, 2024
@Borda Borda added the v1.4.x label May 23, 2024