
fix decode for scale/offset list #4802

Merged (3 commits) on Jan 15, 2021

2 changes: 2 additions & 0 deletions doc/whats-new.rst
@@ -75,6 +75,8 @@ Bug fixes
 - Add ``missing_dims`` parameter to transpose (:issue:`4647`, :pull:`4767`). By `Daniel Mesejo <https://github.com/mesejo>`_.
 - Resolve intervals before appending other metadata to labels when plotting (:issue:`4322`, :pull:`4794`).
   By `Justus Magin <https://github.com/keewis>`_.
+- Fix regression when decoding a variable with a ``scale_factor`` and ``add_offset`` given
+  as a list of length one (:issue:`4631`) by `Mathias Hauser <https://github.com/mathause>`_.
 - Expand user directory paths (e.g. ``~/``) in :py:func:`open_mfdataset` and
   :py:meth:`Dataset.to_zarr` (:issue:`4783`, :pull:`4795`).
   By `Julien Seguinot <https://github.com/juseg>`_.
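
The regression described in the new changelog entry can be reproduced when the packing attributes arrive as length-one lists instead of scalars. A hypothetical minimal example (dataset and values invented for illustration, not code from this PR):

import numpy as np
import xarray as xr

# mimic a file whose packing attributes were written as length-one lists
# rather than scalars (the situation reported in #4631)
packed = xr.Dataset(
    {
        "t": (
            "x",
            np.array([0, 1, 2], dtype="int16"),
            {"scale_factor": [0.01], "add_offset": [273.15]},
        )
    }
)

# before this fix, decoding raised AttributeError because a plain Python list
# has no .item() method; with the fix the data are unpacked as expected
decoded = xr.decode_cf(packed)
print(decoded["t"].values)  # approximately [273.15, 273.16, 273.17]
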
4 changes: 2 additions & 2 deletions xarray/coding/variables.py
@@ -270,9 +270,9 @@ def decode(self, variable, name=None):
 add_offset = pop_to(attrs, encoding, "add_offset", name=name)
 dtype = _choose_float_dtype(data.dtype, "add_offset" in attrs)
 if np.ndim(scale_factor) > 0:
-    scale_factor = scale_factor.item()
+    scale_factor = np.asarray(scale_factor).item()
 if np.ndim(add_offset) > 0:
-    add_offset = add_offset.item()
+    add_offset = np.asarray(add_offset).item()
 transform = partial(
     _scale_offset_decoding,
     scale_factor=scale_factor,
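
A quick illustration of what the one-line change above handles (the value is made up; only the numpy behaviour matters). For inputs that are already numpy arrays, np.asarray is a no-op, so the previous behaviour is unchanged:

import numpy as np

scale_factor = [10]    # attribute decoded as a length-one Python list
np.ndim(scale_factor)  # -> 1, so the branch above is taken

# old code: a plain list has no .item() method
# scale_factor.item()  # AttributeError: 'list' object has no attribute 'item'

# fixed code: coerce to an array first, then extract the Python scalar
np.asarray(scale_factor).item()  # -> 10
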
14 changes: 13 additions & 1 deletion xarray/tests/test_coding.py
@@ -8,7 +8,7 @@
 from xarray.coding import variables
 from xarray.conventions import decode_cf_variable, encode_cf_variable

-from . import assert_equal, assert_identical, requires_dask
+from . import assert_allclose, assert_equal, assert_identical, requires_dask

 with suppress(ImportError):
     import dask.array as da
@@ -105,3 +105,15 @@ def test_scaling_converts_to_float32(dtype):
     roundtripped = coder.decode(encoded)
     assert_identical(original, roundtripped)
     assert roundtripped.dtype == np.float32
+
+
+@pytest.mark.parametrize("scale_factor", (10, [10]))
+@pytest.mark.parametrize("add_offset", (0.1, [0.1]))
+def test_scaling_offset_as_list(scale_factor, add_offset):
+    # test for #4631
+    encoding = dict(scale_factor=scale_factor, add_offset=add_offset)
+    original = xr.Variable(("x",), np.arange(10.0), encoding=encoding)
+    coder = variables.CFScaleOffsetCoder()
+    encoded = coder.encode(original)
+    roundtripped = coder.decode(encoded)
+    assert_allclose(original, roundtripped)
Contributor
Unless I'm missing something, this test just checks that a round trip is successful. It doesn't check that encoding with scale_factor or add_offset specified as a list (a) is equivalent to encoding with a scalar, or (b) is correct. An implementation that simply ignored lists for scale_factor or add_offset (other than retaining the encoding dictionary) would pass this test.

Contributor
Good point!
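
A stronger check along the lines of the review comment above might compare the list-valued path against the scalar-valued path and against the raw data. This is only a sketch (the test name and structure are hypothetical, not part of the merged PR):

import numpy as np
import xarray as xr
from xarray.coding import variables


def test_list_encoding_matches_scalar_encoding():
    data = np.arange(10.0)
    as_list = xr.Variable(
        ("x",), data, encoding=dict(scale_factor=[10], add_offset=[0.1])
    )
    as_scalar = xr.Variable(
        ("x",), data, encoding=dict(scale_factor=10, add_offset=0.1)
    )

    coder = variables.CFScaleOffsetCoder()
    encoded_list = coder.encode(as_list)
    encoded_scalar = coder.encode(as_scalar)

    # (a) list-valued attributes must encode exactly like their scalar
    #     equivalents, which catches an implementation that silently
    #     ignores the lists during encoding
    np.testing.assert_allclose(encoded_list.values, encoded_scalar.values)

    # (b) decoding must actually undo the scaling, not merely roundtrip
    np.testing.assert_allclose(coder.decode(encoded_list).values, data)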