Improved CF decoding #6812
Conversation
Note - I also have not run the "Running the performance test suite" code in https://xarray.pydata.org/en/stable/contributing.html - I assume changing from …

I'm reading more in https://github.com/pydata/xarray/blob/2a5686c6fe855502523e495e43bd381d14191c7b/xarray/coding/variables.py and I'm confused about some logic in `xarray/coding/variables.py`, lines 271 to 272 (at 2a5686c).

I think this is happening, based on inspecting with the debugger. Furthermore, the fix I implemented in this pull request (`if not has_offset: return np.float64`) should not run, but does run because of this issue.
The line above this removes `add_offset` from `attrs` (if it exists), so `"add_offset" in attrs` should always be False. It was moved into `encoding`, so let's check for it there.
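For illustration, here is a minimal sketch of that move, using a plain-dict stand-in for xarray's `pop_to` helper (the example values are hypothetical):

```python
attrs = {"scale_factor": 0.01, "add_offset": 5.0}
encoding = {}

# pop_to(attrs, encoding, "add_offset") removes the key from attrs and
# stores it in encoding; a dict-only stand-in looks like this:
value = attrs.pop("add_offset", None)
if value is not None:
    encoding["add_offset"] = value

print("add_offset" in attrs)     # False - already moved out of attrs
print("add_offset" in encoding)  # True  - so this is the right check
```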
Modified `_choose_float_dtype`:
+ Returns float32 if inputs are float16 or float32
+ Returns float64 if inputs are int
xarray/coding/variables.py (outdated):

        if not has_offset:
            return np.float32
        return np.float64
I think the code matches the comments. It would be clearer if written as

    if has_offset:
        return np.float64
    else:
        return np.float32

Without your edits, if there is an offset the condition does not trigger and we return `np.float64` later.
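For context, the pre-PR function reads roughly as follows (quoted from memory of `xarray/coding/variables.py` around the referenced commit; check the commit for the exact text):

```python
def _choose_float_dtype(dtype, has_offset):
    """Return a float dtype that can losslessly represent `dtype` values."""
    # Keep float32 as-is. Upcast half-precision to single-precision,
    # because float16 is "intended for storage but not computation".
    if dtype.itemsize <= 4 and np.issubdtype(dtype, np.floating):
        return np.float32
    # float32 can exactly represent all integers up to 24 bits
    if dtype.itemsize <= 2 and np.issubdtype(dtype, np.integer):
        # A scale factor is entirely safe (vanishing into the mantissa),
        # but a large integer offset could lead to loss of precision.
        # Sensitivity analysis can be tricky, so we just use a float64
        # if there's any offset at all - better unoptimised than wrong!
        if not has_offset:
            return np.float32
    # For all other types and circumstances, we just use float64.
    # (safe because eg. complex numbers are not supported in NetCDF)
    return np.float64
```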
Thanks for reviewing this pull request. FYI my original comment (later edited) said:
Also, before this is merged, I'd like to suggest a larger change, and possibly discuss architecture here a bit (if appropriate). Specifically, I'd like to change the _choose_float_dtype function, and the two calls to it, to pass in the dtype of scale_factor and add_offset, in addition to the data dtype. This function should then return the dtype of the highest precision of three.
Currently, _choose_float_dtype returns float32 if the data is float16 or float32, even if the scale_factor dtype is float64.
Based on your comment, I think my original intuition - that this function needs a large rewrite - is correct. I'll look into this and submit additional commits to this PR.
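To make that proposal concrete, a minimal sketch of what the new signature could look like (illustrative only, not the final implementation):

```python
import numpy as np

def _choose_float_dtype(data_dtype, scale_factor=None, add_offset=None):
    """Sketch: return the highest-precision float dtype among the
    data, scale_factor, and add_offset dtypes."""
    dtypes = [np.dtype(data_dtype)]
    for attr in (scale_factor, add_offset):
        if attr is not None:
            dtypes.append(np.asarray(attr).dtype)
    # NumPy promotion picks the widest type; seeding with float32
    # guarantees the result is at least a single-precision float.
    return np.result_type(np.float32, *dtypes)
```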
Thanks for looking into this!
xarray/coding/variables.py (outdated):

    @@ -269,7 +269,7 @@ def decode(self, variable, name=None):
             if "scale_factor" in attrs or "add_offset" in attrs:
                 scale_factor = pop_to(attrs, encoding, "scale_factor", name=name)
                 add_offset = pop_to(attrs, encoding, "add_offset", name=name)
    -            dtype = _choose_float_dtype(data.dtype, "add_offset" in attrs)
    +            dtype = _choose_float_dtype(data.dtype, "add_offset" in encoding)
I suspect this fixed one issue, but the original issue still remains, because we still aren't looking at the dtype of `scale_factor` and `add_offset` as recommended by the conventions.
Note - I think the conventions referred to above are https://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/build/ch08.html, specifically:
If the scale_factor and add_offset attributes are of the same data type as the associated variable, the unpacked data is assumed to be of the same data type as the packed data. However, if the scale_factor and add_offset attributes are of a different data type from the variable (containing the packed data) then the unpacked data should match the type of these attributes, which must both be of type float or both be of type double. An additional restriction in this case is that the variable containing the packed data must be of type byte, short or int. It is not advised to unpack an int into a float as there is a potential precision loss.
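Read literally, the dtype selection in that passage amounts to something like the following sketch (my reading of the spec, not xarray's implementation; the function name is hypothetical):

```python
import numpy as np

def cf_unpacked_dtype(packed_dtype, scale_factor=None, add_offset=None):
    """Sketch of the CF 1.7 packing rule quoted above."""
    attr_dtypes = {
        np.asarray(attr).dtype
        for attr in (scale_factor, add_offset)
        if attr is not None
    }
    if not attr_dtypes or attr_dtypes == {np.dtype(packed_dtype)}:
        # Attributes absent, or same type as the variable: the unpacked
        # data keeps the packed data's type.
        return np.dtype(packed_dtype)
    # Otherwise both attributes must be float or both double, the packed
    # variable must be byte/short/int, and the unpacked data takes the
    # attributes' type.
    (attr_dtype,) = attr_dtypes  # the spec requires both to match
    return attr_dtype
```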
Split encoding and decoding for now.
    -        if not has_offset:
    +        if has_offset:
                 return np.float64
    +        else:
    +            return np.float32
             # For all other types and circumstances, we just use float64.
             # (safe because eg. complex numbers are not supported in NetCDF)
             return np.float64

Reverted to original algorithm as per suggestion from dcherian.
    def _choose_float_dtype_decoding(dtype, scale_factor, add_offset):
I chose to focus on decoding per the CF specification, so I split the function. Furthermore, decoding makes heavy use of `np.find_common_type` to select the correct datatype for the final product.
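A minimal sketch of what such a decoding helper could look like (illustrative, not the exact PR code; note that `np.find_common_type` is deprecated since NumPy 1.25, where `np.result_type(*dtypes)` is the equivalent call):

```python
import numpy as np

def _choose_float_dtype_decoding(dtype, scale_factor, add_offset):
    """Sketch: pick the common dtype of the packed data and the
    scale_factor/add_offset attributes for the unpacked result."""
    dtypes = [np.dtype(dtype)]
    for attr in (scale_factor, add_offset):
        if attr is not None:
            dtypes.append(np.asarray(attr).dtype)
    # np.find_common_type applies NumPy promotion rules, e.g.
    # (int16, float64) -> float64 and (float32, float64) -> float64.
    return np.find_common_type(dtypes, [])
```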
    @@ -224,7 +224,7 @@ def _scale_offset_decoding(data, scale_factor, add_offset, dtype):
         return data


    -def _choose_float_dtype(dtype, has_offset):
    +def _choose_float_dtype_encoding(dtype, has_offset):
Encoding per the CF spec is fairly specific: the packed variable is supposed to be type byte/short/int, not float. Most of the tests encode with a `scale_factor` or `add_offset` that require the packed data to be type float. Rather than trying to solve all this, I have just split the encode and decode dtype functions.
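As a concrete illustration of the constraint being described (a hypothetical helper, not part of the PR):

```python
import numpy as np

# Per CF, a packed variable should be byte, short, or int.
CF_PACKED_DTYPES = {np.dtype(np.int8), np.dtype(np.int16), np.dtype(np.int32)}

def is_cf_compliant_packed_dtype(dtype):
    return np.dtype(dtype) in CF_PACKED_DTYPES

print(is_cf_compliant_packed_dtype(np.int16))    # True
print(is_cf_compliant_packed_dtype(np.float32))  # False - yet many tests pack to float
```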
if "add_offset" in encoding: | ||
data -= pop_to(encoding, attrs, "add_offset", name=name) | ||
if "scale_factor" in encoding: | ||
data /= pop_to(encoding, attrs, "scale_factor", name=name) |
The data type may change when adding or scaling, hence changing from `data /= ...` to `data = data / ...`.
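A small illustration of why the in-place form is a problem (assuming integer packed data):

```python
import numpy as np

data = np.array([10, 20, 30], dtype=np.int16)

# In-place division must write the float result back into the int16
# array, so NumPy raises under its 'same_kind' casting rule:
# data /= np.float64(10)  # -> UFuncTypeError

# Rebinding instead creates a new array with the promoted dtype:
data = data / np.float64(10)
print(data.dtype)  # float64
```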
    @pytest.mark.parametrize("scale_factor", (10, [10]))
I'm not sure that `scale_factor` or `add_offset` can be an array type per the CF spec, so I changed this test.
These kinds of things tend to happen though. Since we have tested for it, we should just keep it around.
Sorry for dropping this @mankoff. How can we move forward here?
Hi @dcherian - I dropped this because I went down a rabbit hole that seemed very, very deep. Xarray has 10s (100s?) of tests that touch this decoding function and that make assumptions I believe are incorrect after a careful reading of the CF spec. I believe the path forward will take some conversation before coding, so perhaps this should be moved to an issue rather than a pull request?

A big decision is whether the decode option strictly follows CF guidelines. If so, then a lot of tests need to be changed (for example, to follow the simple rule of …). Enforcing this would probably break …

Furthermore, the CF conventions are themselves not very clear, and possibly ambiguous. I started a conversation on this here: cf-convention/cf-conventions#374, but that is also unresolved at the moment. The CF convention mentions …
A bit more detail about the existing tests that don't match the CF spec. Per the spec, … (see xarray/tests/test_coding.py, lines 112 to 113 at 13c52b2).

There is 1 test in … In addition, the expected …

I am concerned that this is a significant change and I'm not sure what the process is for making this change. I would like to have some idea, even if not a guarantee, that it would be welcomed and accepted before doing all the work. I note that another recent large PR trying to fix CF decoding has also stalled, and I'm not sure why (see #2751).
I think our general position is to be flexible on what we can read because there are many slightly non-compliant files out there.
Some of these might just be for convenience, and some might be checking that we are flexible in what we can read. The following test should be preserved so we can read those files (#4631): …

Do we not enforce that …
I think the way to move forward would be to figure out the smallest change that would fix (or even improve) #2304 and move on. We have a 30-minute bi-weekly meeting (#4001) that you're welcome to attend and raise specific questions at. The next one is Oct 26 at 9.30am Mountain Time.
We should figure out how to express some of this understanding as tests (some xfailed). That way it's easy to check when something gets fixed, and to prevent regressions.
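For example, one of the expectations above could be captured as an xfailed test along these lines (a sketch only; the test name and the exact expected behaviour are my assumptions):

```python
import numpy as np
import pytest
import xarray as xr

@pytest.mark.xfail(reason="float64 scale_factor is currently decoded to float32")
def test_decode_honours_scale_factor_dtype():
    # CF 1.7: if scale_factor differs in type from the packed variable,
    # the unpacked data should take the attribute's type (float64 here).
    var = xr.Variable(
        ("x",), np.array([1, 2, 3], dtype=np.int16),
        attrs={"scale_factor": np.float64(0.1)},
    )
    decoded = xr.decode_cf(xr.Dataset({"v": var}))["v"]
    assert decoded.dtype == np.float64
```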
The comments above this line state, "so we just use a float64", but then it returns `np.float32`. I assume the comments are correct. Changing this also fixes a bug I ran into.

Note that currently, `_choose_float_dtype` returns `float32` if the data is float16 or float32, even if the `scale_factor` dtype is float64.