
Update torchmetrics requirement from <1.2,>=0.7 to >=0.7,<1.3 in /_requirements #275

Merged

Conversation

dependabot[bot] (Contributor) commented on behalf of github on Oct 1, 2023

Updates the requirements on torchmetrics to permit the latest version.
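
Concretely, the version pin changes along the following lines; the exact file under _requirements is not shown in this view, so this is a sketch based on the PR title:

    # before
    torchmetrics >=0.7, <1.2
    # after
    torchmetrics >=0.7, <1.3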

Release notes

Sourced from torchmetrics's releases.

Clustering metrics

Torchmetrics v1.2 is out now! The latest release includes 11 new metrics within a new subdomain: Clustering. In this blog post, we briefly explain what clustering is, why it’s a useful measure, and walk through the newly added metrics with code samples.

Clustering - what is it?

Clustering is an unsupervised learning technique. The term unsupervised here refers to the fact that we do not have ground-truth targets as we do in classification. The primary goal of clustering is to discover hidden patterns or structures within data without prior knowledge about the meaning or importance of particular features. Clustering is thus a form of data exploration, in contrast to supervised learning, where the goal is “just” to predict which class a data point belongs to.

The key goal of clustering algorithms is to split data into clusters/sets where data points from the same cluster are more similar to each other than to any points from the remaining clusters. Some of the most common and widely used clustering algorithms are K-Means, hierarchical clustering, and Gaussian Mixture Models (GMM).

An objective quality evaluation/measure is required regardless of the clustering algorithm or internal optimization criterion used. In general, we can divide all clustering metrics into two categories: extrinsic metrics and intrinsic metrics.

Extrinsic metrics

Extrinsic metrics are characterized by requiring some ground-truth labeling, even though they evaluate an unsupervised method. This may seem counter-intuitive at first since, by the definition of clustering, we do not use such labels. However, most clustering algorithms are still developed on datasets with labels available, and these metrics take advantage of that fact.
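
As a minimal sketch, one of the new extrinsic metrics, such as AdjustedRandScore, compares predicted cluster assignments against ground-truth labels (the tensors below are made-up toy values):

    import torch
    from torchmetrics.clustering import AdjustedRandScore

    preds = torch.tensor([0, 0, 1, 1, 2, 2])   # cluster ids produced by the algorithm
    target = torch.tensor([0, 0, 1, 1, 1, 2])  # ground-truth labels
    metric = AdjustedRandScore()
    print(metric(preds, target))               # 1.0 would mean perfect agreement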

Intrinsic metrics

In contrast, intrinsic metrics do not need any ground-truth information. These metrics estimate intra-cluster consistency (the cohesion of all points assigned to a single cluster) against the separation from other clusters. This is often done by comparing distances in the embedding space.
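
For example, an intrinsic metric such as CalinskiHarabaszScore needs only the data points and the cluster assignments; a minimal sketch with made-up toy data:

    import torch
    from torchmetrics.clustering import CalinskiHarabaszScore

    data = torch.randn(20, 3)       # embeddings, shape (n_samples, n_features)
    labels = torch.arange(20) % 3   # cluster assignments; no ground truth needed
    metric = CalinskiHarabaszScore()
    print(metric(data, labels))     # higher scores indicate better-separated clusters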

Update to Mean Average Precision

MeanAveragePrecision, the most widely used metric for object detection in computer vision, now supports two new arguments: average and backend.

  • The average argument controls how the metric is averaged over multiple classes. Following the core definition, the default is macro averaging, where the metric is calculated for each class separately and the results are then averaged. This remains the default in Torchmetrics, but we now also support average="micro", under which every object is essentially treated as belonging to the same class, so the returned value is calculated over all objects at once.

  • The second argument, backend, indicates which computational backend is used for the internal computations. Since MeanAveragePrecision is not a simple metric to compute, and we value the correctness of our metrics, we rely on a third-party library for the internal computations. By default this is the official pycocotools, which users need to have installed, but with the new argument we will also support other backends. Both arguments appear in the sketch after this list.
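
A minimal sketch exercising both new arguments; the boxes, scores, and labels are made-up toy values, and backend="pycocotools" names the default backend mentioned above:

    import torch
    from torchmetrics.detection import MeanAveragePrecision

    preds = [{
        "boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),  # xyxy format (the default)
        "scores": torch.tensor([0.9]),
        "labels": torch.tensor([0]),
    }]
    target = [{
        "boxes": torch.tensor([[12.0, 10.0, 48.0, 52.0]]),
        "labels": torch.tensor([0]),
    }]
    # requires pycocotools to be installed (the default backend)
    metric = MeanAveragePrecision(average="micro", backend="pycocotools")
    print(metric(preds, target)["map"])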

[1.2.0] - 2023-09-22

Added

  • Added metrics to the clustering package:
    • MutualInformationScore (#2008)
    • RandScore (#2025)
    • NormalizedMutualInfoScore (#2029)
    • AdjustedRandScore (#2032)
    • CalinskiHarabaszScore (#2036)
    • DunnIndex (#2049)
    • HomogeneityScore (#2053)
    • CompletenessScore (#2053)
    • VMeasureScore (#2053)
    • FowlkesMallowsIndex (#2066)
    • AdjustedMutualInfoScore (#2058)
    • DaviesBouldinScore (#2071)
  • Added backend argument to MeanAveragePrecision (#2034)

Full Changelog: Lightning-AI/torchmetrics@v1.1.0...v1.2.0

... (truncated)

Changelog

Sourced from torchmetrics's changelog.

[1.1.2] - 2023-09-11

Fixed

  • Fixed tie breaking in ndcg metric (#2031)
  • Fixed bug in BootStrapper when very few samples were evaluated that could lead to a crash (#2052)
  • Fixed bug when creating multiple plots that led to not all plots being shown (#2060)
  • Fixed performance issues in RecallAtFixedPrecision for large batch sizes (#2042)
  • Fixed bug related to MetricCollection used with custom metrics that have prefix/postfix attributes (#2070)

[1.1.1] - 2023-08-29

Added

  • Added average argument to MeanAveragePrecision (#2018)

Fixed

  • Fixed bug in PearsonCorrCoef when updated on single samples at a time (#2019)
  • Fixed support for pixel-wise MSE (#2017)
  • Fixed bug in MetricCollection when used with multiple metrics that return dicts with same keys (#2027)
  • Fixed bug in detection intersection metrics when class_metrics=True resulting in wrong values (#1924)
  • Fixed missing attributes higher_is_better, is_differentiable for some metrics (#2028)

[1.1.0] - 2023-08-22

Added

... (truncated)


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

dependabot[bot] (Contributor, Author) commented on behalf of github on Oct 1, 2023

Dependabot tried to add @Lightning-AI/core-lightning as a reviewer to this PR, but received the following error from GitHub:

POST https://api.github.com/repos/Lightning-AI/tutorials/pulls/275/requested_reviewers: 422 - Reviews may only be requested from collaborators. One or more of the teams you specified is not a collaborator of the Lightning-AI/tutorials repository. // See: https://docs.github.com/rest/pulls/review-requests#request-reviewers-for-a-pull-request

codecov bot commented on Oct 1, 2023

Codecov Report

Merging #275 (332c4dd) into main (4c9a890) will not change coverage.
The diff coverage is n/a.

Additional details and impacted files
@@         Coverage Diff         @@
##           main   #275   +/-   ##
===================================
  Coverage    73%    73%           
===================================
  Files         2      2           
  Lines       382    382           
===================================
  Hits        280    280           
  Misses      102    102           

dependabot[bot] force-pushed the dependabot-pip-_requirements-torchmetrics-gte-0.7-and-lt-1.3 branch from 923a4c8 to e240362 on October 1, 2023 10:01
Borda enabled auto-merge (squash) on October 1, 2023 10:01
Borda (Member) commented on Oct 3, 2023

@dependabot rebase

Updates the requirements on [torchmetrics](https://github.com/Lightning-AI/torchmetrics) to permit the latest version.
- [Release notes](https://github.com/Lightning-AI/torchmetrics/releases)
- [Changelog](https://github.com/Lightning-AI/torchmetrics/blob/master/CHANGELOG.md)
- [Commits](Lightning-AI/torchmetrics@v0.7.0...v1.2.0)

---
updated-dependencies:
- dependency-name: torchmetrics
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] force-pushed the dependabot-pip-_requirements-torchmetrics-gte-0.7-and-lt-1.3 branch from e240362 to 332c4dd on October 3, 2023 22:59
Borda disabled auto-merge on October 3, 2023 23:09
Borda merged commit 9ca6253 into main on Oct 3, 2023
14 of 15 checks passed
Borda deleted the dependabot-pip-_requirements-torchmetrics-gte-0.7-and-lt-1.3 branch on October 3, 2023 23:09