More correct deprecation warning for lock argument (#5256)
* Add scipy to the list of backends that support `lock`

* Add `lock` deprecation to zarr and pydap

* Add what's new entry

* Fix merge.

* Fix "option not passed" test

* Update doc/whats-new.rst

Co-authored-by: Maximilian Roos <5635139+max-sixty@users.noreply.github.com>
alexamici and max-sixty authored May 4, 2021
1 parent 4aef8f9 commit f455e00
Showing 4 changed files with 27 additions and 2 deletions.
5 changes: 5 additions & 0 deletions doc/whats-new.rst
@@ -133,6 +133,11 @@ Deprecations
   :py:func:`xarray.open_mfdataset` when `combine='by_coords'` is specified.
   Fixes (:issue:`5230`), via (:pull:`5231`, :pull:`5255`).
   By `Tom Nicholas <https://github.com/TomNicholas>`_.
+- The `lock` keyword argument to :py:func:`open_dataset` and :py:func:`open_dataarray` is now
+  a backend specific option. It will give a warning if passed to a backend that doesn't support it
+  instead of being silently ignored. From the next version it will raise an error.
+  This is part of the refactor to support external backends (:issue:`5073`).
+  By `Tom Nicholas <https://github.com/TomNicholas>`_ and `Alessandro Amici <https://github.com/alexamici>`_.
 
 
 Bug fixes
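The per-backend deprecation pattern this commit adds can be sketched standalone with the stdlib `warnings` module. Below is a toy stand-in, not xarray's real entry-point signature; the function name and return value are illustrative only, but the warning block mirrors the one added to the pydap and zarr backends:

```python
import warnings


def open_dataset(filename_or_obj, *, lock=None):
    """Toy stand-in for a backend ``open_dataset`` (hypothetical, not xarray's API)."""
    # TODO remove after v0.19 -- same pattern as the pydap and zarr backends
    if lock is not None:
        warnings.warn(
            "The kwarg 'lock' has been deprecated for this backend, and is now "
            "ignored. In the future passing lock will raise an error.",
            DeprecationWarning,
        )
    return {"source": filename_or_obj}


# Passing lock now triggers a visible warning instead of being silently ignored.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    open_dataset("file.nc", lock=object())

assert len(caught) == 1
assert issubclass(caught[0].category, DeprecationWarning)
```

Emitting `DeprecationWarning` first, rather than raising immediately, gives downstream code one release cycle to stop passing the argument.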
4 changes: 2 additions & 2 deletions xarray/backends/api.py
@@ -449,7 +449,7 @@ def open_dataset(
         relevant when using dask or another form of parallelism. By default,
         appropriate locks are chosen to safely read and write files with the
         currently active dask scheduler. Supported by "netcdf4", "h5netcdf",
-        "pynio", "pseudonetcdf", "cfgrib".
+        "scipy", "pynio", "pseudonetcdf", "cfgrib".
         See engine open function for kwargs accepted by each specific engine.
@@ -633,7 +633,7 @@ def open_dataarray(
         relevant when using dask or another form of parallelism. By default,
         appropriate locks are chosen to safely read and write files with the
         currently active dask scheduler. Supported by "netcdf4", "h5netcdf",
-        "pynio", "pseudonetcdf", "cfgrib".
+        "scipy", "pynio", "pseudonetcdf", "cfgrib".
         See engine open function for kwargs accepted by each specific engine.
11 changes: 11 additions & 0 deletions xarray/backends/pydap_.py
@@ -1,3 +1,5 @@
+import warnings
+
 import numpy as np
 
 from ..core import indexing
@@ -122,7 +124,16 @@ def open_dataset(
         use_cftime=None,
         decode_timedelta=None,
         session=None,
+        lock=None,
     ):
+        # TODO remove after v0.19
+        if lock is not None:
+            warnings.warn(
+                "The kwarg 'lock' has been deprecated for this backend, and is now "
+                "ignored. In the future passing lock will raise an error.",
+                DeprecationWarning,
+            )
+
         store = PydapDataStore.open(
             filename_or_obj,
             session=session,
9 changes: 9 additions & 0 deletions xarray/backends/zarr.py
@@ -1,5 +1,6 @@
 import os
 import pathlib
+import warnings
 from distutils.version import LooseVersion
 
 import numpy as np
@@ -721,7 +722,15 @@ def open_dataset(
         consolidate_on_close=False,
         chunk_store=None,
         storage_options=None,
+        lock=None,
     ):
+        # TODO remove after v0.19
+        if lock is not None:
+            warnings.warn(
+                "The kwarg 'lock' has been deprecated for this backend, and is now "
+                "ignored. In the future passing lock will raise an error.",
+                DeprecationWarning,
+            )
 
         filename_or_obj = _normalize_path(filename_or_obj)
         store = ZarrStore.open_group(
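Until the keyword is removed entirely, callers who want the stricter post-v0.19 behaviour today can escalate the warning to an error with a warnings filter. This sketch reuses a toy backend mirroring the pattern added in this commit (the function and its return value are illustrative, not xarray's API):

```python
import warnings


def open_dataset(filename_or_obj, *, lock=None):
    # Toy backend mirroring the deprecation pattern in this commit (illustrative).
    if lock is not None:
        warnings.warn(
            "The kwarg 'lock' has been deprecated for this backend, and is now "
            "ignored. In the future passing lock will raise an error.",
            DeprecationWarning,
        )
    return {"source": filename_or_obj}


raised = False
with warnings.catch_warnings():
    # Promote the DeprecationWarning to an exception, emulating the
    # planned behaviour once the deprecation period ends.
    warnings.simplefilter("error", DeprecationWarning)
    try:
        open_dataset("store.zarr", lock=object())
    except DeprecationWarning:
        raised = True

assert raised
```

Running test suites with such a filter is a common way to catch deprecated usage before it becomes a hard error.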
