Add option to pass callable assertion failure message generator #5607
base: main
Conversation
Thanks for this suggestion @naught101 - can you give an example use case for this within the xarray codebase? Or are you just making a general suggestion of something we might find useful?
(if we do this we should make it kwarg-only)
@TomNicholas My particular use case is that I have datasets that are large enough that I can't see the full diff, so I might miss major changes. I'm wanting to pass in something like … I know I could set the tolerances higher, but the changes are not numerical errors, and I want to see them before updating the test data that they are compared against. Entirely possible that there are better ways to do this, of course :)
Whether or not we merge this, I would be a +1 on improving the output of the canonical function, such that we can see differences easily. I would also imagine we can iterate on that output without worrying about backward compat, given it is generated for people to read. As an example, printing the result of the exact statement above for large arrays, maybe also with dimension values, would be great!
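A compact summary of the differences, of the sort described above, could look like the following sketch (`diff_summary` is a hypothetical helper for illustration, not part of xarray):

```python
import numpy as np

def diff_summary(a, b, atol=1e-8):
    """Hypothetical helper: a compact, human-readable report of where two
    large arrays differ, instead of an unreadably long full diff."""
    diff = np.abs(a - b)
    mask = diff > atol
    n = int(mask.sum())
    if n == 0:
        return "arrays match"
    # report count, location of the worst mismatch, and average magnitude
    idx = tuple(int(i) for i in np.unravel_index(diff.argmax(), a.shape))
    return (f"{n}/{a.size} elements differ; "
            f"max abs diff {diff.max():.3g} at index {idx}; "
            f"mean abs diff {diff[mask].mean():.3g}")

a = np.zeros((1000, 1000))
b = a.copy()
b[3, 7] = 0.5
print(diff_summary(a, b))
# → 1/1000000 elements differ; max abs diff 0.5 at index (3, 7); mean abs diff 0.5
```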
I think the more standard way to handle this is to add an argument for supplying an auxiliary message.
@shoyer That would either not work, or be needlessly expensive, I think. The message generation might be expensive (e.g. if I want a sum or mean of the differences). With a callback it only happens if it is needed; with a pre-computed message it would be computed every time. Correct me if I'm wrong.
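The laziness argument can be sketched as follows (toy code, not xarray's implementation; `assert_allclose_lazy` and `expensive_message` are made-up names). The callback is only invoked on the failing path, so an expensive summary costs nothing when the test passes:

```python
import numpy as np

calls = {"n": 0}

def expensive_message(a, b):
    # stands in for an expensive summary, e.g. a mean of the differences
    calls["n"] += 1
    return f"mean abs diff: {np.abs(a - b).mean():.3f}"

def assert_allclose_lazy(a, b, *, fail_func=None):
    """Toy assertion: build the failure message only when the check fails."""
    if np.allclose(a, b):
        return  # passing path: fail_func is never called
    raise AssertionError(fail_func(a, b) if fail_func else "arrays differ")

a = np.ones(4)
assert_allclose_lazy(a, a, fail_func=expensive_message)  # passes, no callback
print(calls["n"])  # → 0
try:
    assert_allclose_lazy(a, a + 0.5, fail_func=expensive_message)
except AssertionError as e:
    print(e)  # → mean abs diff: 0.500
print(calls["n"])  # → 1
```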
Unit Test Results: 6 files, 6 suites, 53m 43s ⏱️. For more details on these failures, see this check. Results for commit fe39e36.
@naught101 to what extent do you think your wants are idiosyncratic vs. better overall? To the extent this is for things that would make the function better for all, one option is to implement them, e.g. add … Possibly that reduces the opportunity to experiment, though? People could probably monkey-patch the function in their own environments if they want. And I think we can also be fairly liberal about merging changes, given the output is read by people.
I'm mostly thinking of the precedent from the standard library's unittest library and numpy.testing. It does indeed add a bit of expense to add a custom message, but I guess it's generally acceptable for testing? If not, I would kind of expect to add support for callables in those libraries first.
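For reference, `numpy.testing.assert_allclose` already accepts a precomputed `err_msg` string, which is the eager variant discussed above: any summary embedded in the message is built up front, whether or not the assertion fails.

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([1.0, 2.1])
try:
    # err_msg is a plain string: the max-diff summary below is computed
    # eagerly, even in runs where the assertion would have passed
    np.testing.assert_allclose(a, b,
                               err_msg=f"max abs diff: {np.abs(a - b).max():.2f}")
except AssertionError as e:
    print("max abs diff:" in str(e))  # → True
```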
It is nice to be able to write custom assertion error messages on failure sometimes. This allows that with the array comparison assertions, by allowing a
fail_func(a, b)
callable to be passed in to each assertion function. Not tested yet, but I'm happy to add tests if this is something that would be appreciated.
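Under the proposal, existing assertions would grow a kwarg-only `fail_func`. A rough sketch of the intended behaviour, written here as a wrapper around `numpy.testing.assert_allclose` (the wrapper is illustrative only, not part of this PR's diff):

```python
import functools
import numpy as np

def with_fail_func(assert_fn):
    """Hypothetical wrapper: bolt a kwarg-only fail_func onto an existing
    assertion without changing the cost of the passing path."""
    @functools.wraps(assert_fn)
    def wrapper(a, b, *args, fail_func=None, **kwargs):
        try:
            assert_fn(a, b, *args, **kwargs)
        except AssertionError:
            if fail_func is not None:
                # replace the stock message with the caller's summary
                raise AssertionError(fail_func(a, b)) from None
            raise
    return wrapper

assert_allclose = with_fail_func(np.testing.assert_allclose)

try:
    assert_allclose(np.zeros(3), np.ones(3),
                    fail_func=lambda x, y: f"sum of diffs: {np.abs(x - y).sum():.1f}")
except AssertionError as e:
    print(e)  # → sum of diffs: 3.0
```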
- `pre-commit run --all-files`
- `whats-new.rst`
- `api.rst`