[ENH] Refactor _loss_mapping #1967
base: main
Conversation
Converting this to draft PR. Looking for feedback |
Thanks and welcome to pytorch-forecasting @ParamThakkar123!
I have a few questions/suggestions:
- Do not use get_test_train_params for model_pkg, as the fixtures come from get_base_test_params.
- I suspect that somewhere the compatible_loss list is returned empty, and this is the reason some models are looking for get_test_train_params (which should not happen for v1).
- Also, can you please update the description of the PR to add the following info:
  - The EXACT changes you are doing to the files
  - Why you are doing it
  - Expected behavior of the test framework after these changes.
This would make reviewing easier for the reviewers.
If you have any problems/doubts, please feel free to ping us on discord. You can also join the weekly community meet, which happens every Friday at 13:00 UTC in the meetups channel on discord. |
Hello @phoeenniixx, thanks for the amazing feedback. And yeah, there are some metrics for which no compatible losses are found. So should I skip them in the test loop? |
Or what kind of changes exactly should I be doing? |
A fallback? |
Ahead of a review - you are reformatting a larger number of files.
Could you kindly do that in a separate pull request, to avoid a large scope?
Yes. Will do that |
So, here we use
These were collected before, so it should be possible to collect them with the new changes, so I think we should not skip them. Rather, please check if there are some bugs. To check if all the losses are being collected correctly, please run tests from
Ideally, both of these tests should pass and all the losses should be collected correctly. |
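For context on the mapping being refactored: the legacy module pairs prediction/target type combinations with lists of compatible losses, which the test framework then iterates over. Below is a minimal, self-contained sketch of that idea; the dictionary keys and loss names are illustrative assumptions inferred from the diff further down, not the actual registry contents.

```python
# Illustrative sketch only; the real LOSSES_BY_PRED_AND_Y_TYPE in
# pytorch_forecasting has different keys and entries.
LOSSES_BY_PRED_AND_Y_TYPE = {
    ("point", "numeric"): ["MAE", "RMSE", "SMAPE"],
    ("quantile", "numeric"): ["QuantileLoss"],
    ("distribution", "numeric"): ["NormalDistributionLoss"],
}


def get_compatible_losses(pred_types, y_types):
    """Collect all losses compatible with any (pred_type, y_type) combination."""
    compatible_losses = []
    for pred_type in pred_types:
        for y_type in y_types:
            key = (pred_type, y_type)
            if key in LOSSES_BY_PRED_AND_Y_TYPE:
                compatible_losses.extend(LOSSES_BY_PRED_AND_Y_TYPE[key])
    return compatible_losses


# An empty result here is the situation described in the review comment above.
print(get_compatible_losses(["point"], ["numeric"]))          # ['MAE', 'RMSE', 'SMAPE']
print(get_compatible_losses(["quantile"], ["categorical"]))   # []
```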
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #1967 +/- ##
=======================================
Coverage ? 87.06%
=======================================
Files ? 158
Lines ? 9408
Branches ? 0
=======================================
Hits ? 8191
Misses ? 1217
Partials ? 0
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
|
@phoeenniixx @fkiraly All changes implemented. |
pytorch_forecasting/metrics/_quantile_pkg/_quantile_loss_pkg.py (outdated review thread, resolved)
Thanks! |
Sure @phoeenniixx, noted. Making those changes in a minute |
Actually I meant |
Got it. Really sorry for that |
No worries, you are doing great! |
@phoeenniixx changes done. Can you please review again? 😅 |
compatible_losses.extend(LOSSES_BY_PRED_AND_Y_TYPE[key])
return compatible_losses

def check_loss_output_shape(pkg, y_pred, y_true):
question: is this function used anywhere? If not, did you forget to replace something with it?
Yes, actually I added that function just for checking the output shapes; it's not used anywhere and I think we can remove it
why did you add it? Or was it added by AI for no reason?
Yeah, it was actually added while I was using Copilot; it added it for checking the shapes of tensors, but it wasn't used anywhere later
Nice! Looks almost done!
Change requests:
- use all_objects instead of manually populated registries for retrieval, see above
- I think LOSS_SPECIFIC_PARAMS can now be entirely replaced by method calls - so I would suggest, look where it is imported and do the replacement.
- can you explain if, and if yes where, the following methods are being called: check_loss_output_shape, and _get_test_dataloaders_from, in metrics?
Design questions:
- is clip_target implied by an intrinsic property of the loss, e.g., is it implied by the range of the target? Or is it purely a test parameter? If it is a property of the loss, then we should think how to encode it by tags. If purely a test parameter, then we should move it.
- can you elaborate how the data loaders are now being constructed? There are now private test methods in the metrics, as well as in the models. This feels a bit complicated, and how do they interact?
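To illustrate the first change request above: registry-based retrieval would replace hand-maintained lists with a tag-filtered lookup. The snippet below is a hedged sketch using skbase's all_objects; the package scan target and the filter tag name/value are assumptions, and pytorch-forecasting's own registry entry point may be named or filtered differently.

```python
# Hedged sketch: retrieve metric packages via an all_objects-style registry
# lookup rather than a manually populated list. The import exists in skbase;
# the filter tag below is an assumption about how metric packages might be
# tagged, not the confirmed pytorch-forecasting convention.
from skbase.lookup import all_objects

metric_pkgs = all_objects(
    package_name="pytorch_forecasting",         # scan this package for registered objects
    filter_tags={"object_type": "metric_pkg"},  # assumed tag identifying metric packages
    return_names=False,
)

for pkg in metric_pkgs:
    print(pkg.__name__)
```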
The check_loss_output_shape function is not being used anywhere - it was just added for testing, and I am removing it in the latest push. The _get_test_dataloaders_from method is used in all the distribution, quantile and loss packages. |
can you say where precisely the methods are used, please? |
Written here: |
@fkiraly After I replaced LOSS_SPECIFIC_PARAMS with loss_params, there are errors from data_loader_kwargs. I made changes to those, and ideally data_loader_kwargs should be passed to dl_default_kwargs, but they are not being passed. Is it because _get_test_dataloaders_from is accessed differently? |
For distribution, point and quantile losses I am accessing _get_test_dataloaders_from from the base object, while for other packages it's completely defined there as a class method. Does that require a refactor too @fkiraly? |
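As a point of reference for the data_loader_kwargs discussion above, here is a minimal sketch of the kind of merge one would expect between per-loss overrides and the default dataloader kwargs; the default values and the loss_params structure are assumptions for illustration, not the actual pytorch-forecasting defaults.

```python
# Sketch (assumption, not the actual pytorch-forecasting implementation):
# per-loss data_loader_kwargs should override dl_default_kwargs before the
# test dataloaders are built.
def merge_dataloader_kwargs(loss_params: dict) -> dict:
    dl_default_kwargs = {
        "target": "target",          # assumed default target column
        "max_encoder_length": 4,     # assumed small test-sized windows
        "max_prediction_length": 3,
    }
    # test-specific overrides win over the defaults
    dl_default_kwargs.update(loss_params.get("data_loader_kwargs", {}))
    return dl_default_kwargs


# Example: a loss whose tests need a different target column
print(merge_dataloader_kwargs({"data_loader_kwargs": {"target": "volume"}}))
# {'target': 'volume', 'max_encoder_length': 4, 'max_prediction_length': 3}
```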
@fkiraly should I keep |
I think we need to discuss this, I am a bit lost about the structure of the calls here. Do you have time to join the meeting about pytorch-forecasting on Sep 26, 13 UTC, on the sktime discord? |
How about 28th of September @fkiraly ? |
works for me, but all the ptf developers are in the meeting on the 26th - may I recommend to get in touch on discord for scheduling? |
Reference Issues/PRs
Fixes #1957
What does this implement/fix? Explain your changes.
This refactor moves legacy loss compatibility mappings and tensor shape checks to metric tags in the respective _pkg classes.
Test-specific parameters and dataloader configurations are now provided by a standardized _get_test_dataloaders_from method in each metric package.
All test logic for selecting compatible losses and checking output shapes is now tag-driven, improving maintainability and extensibility.
Legacy mapping code and if/else logic have been removed from the test suite.
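To make the tag-driven selection concrete, below is a minimal, self-contained sketch of the idea; the tag names, the example pkg class, and the helper function are illustrative assumptions, not the actual pytorch-forecasting tag schema.

```python
# Illustrative sketch of tag-driven loss selection; tag names are assumptions.
class _ExampleQuantileLossPkg:
    """Stand-in for a metric package class carrying capability tags."""

    _tags = {
        "info:pred_type": "quantile",  # type of prediction the loss produces
        "info:y_type": "numeric",      # type of target the loss supports
    }

    @classmethod
    def get_class_tag(cls, name, default=None):
        return cls._tags.get(name, default)


def select_compatible_losses(pkgs, pred_type, y_type):
    """Keep only packages whose tags match the requested prediction/target types."""
    return [
        pkg
        for pkg in pkgs
        if pkg.get_class_tag("info:pred_type") == pred_type
        and pkg.get_class_tag("info:y_type") == y_type
    ]


print(select_compatible_losses([_ExampleQuantileLossPkg], "quantile", "numeric"))
```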
What should a reviewer concentrate their feedback on?
Did you add any tests for the change?
Any other comments?
PR checklist
Code quality hooks set up with pre-commit install. To run hooks independent of commit, execute pre-commit run --all-files.