Signed-off-by: jpcbertoldo <24547377+jpcbertoldo@users.noreply.github.com>
Check out this pull request on ReviewNB. See visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB.
samet-akcay reviewed Oct 24, 2024
    _validate.has_at_least_one_normal_image(masks)

    image_classes = images_classes_from_masks(masks)
    anomaly_maps_normal_images = anomaly_maps[image_classes == 0]
Suggested change:
-    anomaly_maps_normal_images = anomaly_maps[image_classes == 0]
+    normal_anomaly_maps = anomaly_maps[image_classes == 0]
    def _binary_search_threshold_at_fpr_target(
        anomaly_maps_normals: torch.Tensor,
Suggested change:
-        anomaly_maps_normals: torch.Tensor,
+        normal_anomaly_maps: torch.Tensor,
djdameln suggested changes Oct 25, 2024

djdameln left a comment:
Thanks, this is a nice optimization. Just out of curiosity, do you have some numbers on the amount of speed-up achieved by this change?
        fpr_target: float | torch.Tensor,
        maximum_iterations: int = 300,
    ) -> float:
        """Binary search of threshold that achieves the given shared FPR level.
It would be good to add a more detailed description to this docstring, explaining why we need to apply the binary search and how it is performed. This would be useful for future reference.
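For future reference, here is a hedged sketch of the kind of explanation (and loop) such a docstring could carry. The parameter name follows the rename suggested above; the body is illustrative, not the PR's exact implementation:

```python
import torch


def _binary_search_threshold_at_fpr_target(
    normal_anomaly_maps: torch.Tensor,
    fpr_target: float,
    maximum_iterations: int = 300,
) -> float:
    """Binary search for the threshold that achieves the target FPR.

    The FPR on normal images is a monotonically non-increasing function of
    the threshold: raising the threshold can only turn (false) positive
    pixels into negatives. Bisection therefore converges to the threshold
    whose FPR matches ``fpr_target`` without scanning a dense grid of
    candidate thresholds.
    """
    lower = normal_anomaly_maps.min().item()
    upper = normal_anomaly_maps.max().item()
    threshold = (lower + upper) / 2
    for _ in range(maximum_iterations):
        threshold = (lower + upper) / 2
        # FPR = fraction of normal pixels scored strictly above the threshold.
        fpr = (normal_anomaly_maps > threshold).float().mean().item()
        if fpr > fpr_target:
            lower = threshold  # FPR too high -> need a higher threshold
        elif fpr < fpr_target:
            upper = threshold  # FPR too low -> need a lower threshold
        else:
            break
    return threshold
```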
Looks like one of the pimo notebook tests is failing.
📝 Description
Without numba, AUPIMO became annoyingly slow for the full-resolution test, so this idea should speed it up by removing unnecessary computation.

This parameter is what mostly makes it so inefficient (`num_thresholds = 300_000`):

https://github.com/openvinotoolkit/anomalib/blob/1465b05fd9ff5c20bfe6df661187e6866e04cec7/src/anomalib/metrics/pimo/functional.py#L118

It has to be this big to make sure there are enough points for the AUC integration within the integration range. The current implementation thresholds the anomaly score maps from their min to their max value, which is inefficient because we only need thresholds in a much smaller range.

Strategy to improve it (see the sketch after this list):

- use binary search to find the thresholds corresponding to the FPR integration bounds
- compute the binary classification curves within those bounds
- decrease the number of thresholds from `300_000` to `300`
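A minimal sketch of the strategy, with illustrative names and FPR integration bounds. The PR's bisection finds the two bound thresholds; a quantile shortcut stands in for it here, valid because the threshold at FPR = f is the (1 - f) quantile of the normal-pixel scores:

```python
import torch


def thresholds_within_fpr_bounds(
    normal_anomaly_maps: torch.Tensor,
    fpr_bounds: tuple[float, float] = (1e-5, 1e-4),  # illustrative bounds
    num_thresholds: int = 300,
) -> torch.Tensor:
    """Illustrative: build a small threshold grid covering only the FPR
    integration range, instead of spanning min..max of the scores.

    FPR on normal pixels decreases as the threshold increases, so the
    upper FPR bound maps to the lower threshold and vice versa.
    """
    fpr_lower, fpr_upper = fpr_bounds
    scores = normal_anomaly_maps.flatten()
    th_lower = torch.quantile(scores, 1.0 - fpr_upper)  # high FPR -> low threshold
    th_upper = torch.quantile(scores, 1.0 - fpr_lower)  # low FPR -> high threshold
    return torch.linspace(th_lower.item(), th_upper.item(), num_thresholds)
```

Restricting the grid to `[th_lower, th_upper]` is what lets `num_thresholds` drop from `300_000` to `300` while keeping enough points inside the AUC integration range.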
✨ Changes
Select what type of change your PR is:
✅ Checklist
Before you submit your pull request, please make sure you have completed the following steps:
For more information about code review checklists, see the Code Review Checklist.