Commit

link maintenance (pydata#5182)
* remove private methods

* create an API page for DataArray.str

* fix a few more links

* remove the API page for DataArray.str again

* pin sphinx to a version lower than 4.0

this helps make the transition to sphinx>=4.0 (to be released soon) smoother

* use the correct role for curve_fit

* fix the link to normalize_chunks

* fix more links and move BackendEntrypoint to the advanced API [skip-ci]

* add an API page for set_close

* explicitly document DataArray.str [skip-ci]

* more docstring fixes [skip-ci]
keewis authored Apr 19, 2021
1 parent e0358e5 commit 5b2257e
Showing 10 changed files with 46 additions and 40 deletions.
2 changes: 1 addition & 1 deletion ci/requirements/doc.yml
Original file line number Diff line number Diff line change
@@ -31,7 +31,7 @@ dependencies:
- sphinx-book-theme >= 0.0.38
- sphinx-copybutton
- sphinx-panels
- sphinx>=3.3
- sphinx<4
- zarr>=2.4
- pip:
- sphinxext-rediraffe
7 changes: 3 additions & 4 deletions doc/api-hidden.rst
@@ -826,10 +826,9 @@
backends.DummyFileManager.acquire_context
backends.DummyFileManager.close

backends.common.BackendArray
backends.common.BackendEntrypoint
backends.common.BackendEntrypoint.guess_can_open
backends.common.BackendEntrypoint.open_dataset
backends.BackendArray
backends.BackendEntrypoint.guess_can_open
backends.BackendEntrypoint.open_dataset

core.indexing.IndexingSupport
core.indexing.explicit_indexing_adapter
15 changes: 9 additions & 6 deletions doc/api.rst
@@ -420,16 +420,16 @@ Computation
String manipulation
-------------------

.. autosummary::
:toctree: generated/
:template: autosummary/accessor.rst

DataArray.str

.. autosummary::
:toctree: generated/
:template: autosummary/accessor_method.rst

DataArray.str._apply
DataArray.str._padder
DataArray.str._partitioner
DataArray.str._re_compile
DataArray.str._splitter
DataArray.str._stringify
DataArray.str.capitalize
DataArray.str.casefold
DataArray.str.cat
@@ -896,6 +896,9 @@ Advanced API
as_variable
register_dataset_accessor
register_dataarray_accessor
Dataset.set_close
backends.BackendArray
backends.BackendEntrypoint

These backends provide a low-level interface for lazily loading data from
external file-formats or protocols, and can be manually invoked to create
10 changes: 5 additions & 5 deletions doc/internals/how-to-add-new-backend.rst
@@ -6,7 +6,7 @@ How to add a new backend
Adding a new backend for read support to Xarray does not require
integrating any code into Xarray; all you need to do is:

- Create a class that inherits from Xarray :py:class:`~xarray.backends.common.BackendEntrypoint`
- Create a class that inherits from Xarray :py:class:`~xarray.backends.BackendEntrypoint`
and implements the method ``open_dataset``; see :ref:`RST backend_entrypoint`

- Declare this class as an external plugin in your ``setup.py``, see :ref:`RST backend_registration`
@@ -161,8 +161,8 @@ guess_can_open
``guess_can_open`` is used to identify the proper engine to open your data
file automatically in case the engine is not specified explicitly. If you are
not interested in supporting this feature, you can skip this step since
:py:class:`~xarray.backends.common.BackendEntrypoint` already provides a
default :py:meth:`~xarray.backend.common.BackendEntrypoint.guess_can_open`
:py:class:`~xarray.backends.BackendEntrypoint` already provides a
default :py:meth:`~xarray.backends.BackendEntrypoint.guess_can_open`
that always returns ``False``.
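To picture what overriding the default looks like, here is a minimal, dependency-free sketch. ``MyBackendEntrypoint`` and the ``.my`` extension are hypothetical; a real backend would inherit from :py:class:`~xarray.backends.BackendEntrypoint` rather than a plain class, and would also be registered under the entry-point group described earlier.

```python
import os


class MyBackendEntrypoint:
    # In real code this would inherit from xarray.backends.BackendEntrypoint;
    # a plain class keeps the sketch self-contained.
    def guess_can_open(self, filename_or_obj):
        # Match by file extension. Anything that is not path-like
        # (file objects, stores, ...) raises TypeError in splitext,
        # in which case we decline, mirroring the default `False`.
        try:
            _, ext = os.path.splitext(filename_or_obj)
        except TypeError:
            return False
        return ext in {".my"}
```

With this in place, ``MyBackendEntrypoint().guess_can_open("data.my")`` returns ``True`` while any other extension, or a non-path object, yields ``False``.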

Backend ``guess_can_open`` takes as input the ``filename_or_obj`` parameter of
@@ -299,7 +299,7 @@ Where:
- :py:class:`~xarray.core.indexing.LazilyIndexedArray` is a class
provided by Xarray that manages the lazy loading.
- ``MyBackendArray`` shall be implemented by the backend and shall inherit
from :py:class:`~xarray.backends.common.BackendArray`.
from :py:class:`~xarray.backends.BackendArray`.

BackendArray subclassing
^^^^^^^^^^^^^^^^^^^^^^^^
@@ -455,5 +455,5 @@ In the first case Xarray uses the chunk sizes specified in
``preferred_chunks``.
In the second case Xarray accommodates the ideal chunk sizes, preserving the
"preferred_chunks" where possible. The ideal chunk size is computed using
:py:func:`dask.core.normalize_chunks`, setting
:py:func:`dask.array.core.normalize_chunks`, setting
``previous_chunks = preferred_chunks``.
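To illustrate the chunk normalization being linked here, the following is a simplified stand-in covering only the uniform-integer-chunk case; the real :py:func:`dask.array.core.normalize_chunks` additionally handles ``"auto"`` chunks, byte limits, and the ``previous_chunks`` hint mentioned above.

```python
def normalize_chunks(chunk_size, shape):
    """Simplified stand-in for dask.array.core.normalize_chunks,
    covering only a single uniform integer chunk size."""
    normalized = []
    for dim in shape:
        n_full, remainder = divmod(dim, chunk_size)
        chunks = (chunk_size,) * n_full
        if remainder:
            chunks += (remainder,)  # last chunk holds the leftover elements
        normalized.append(chunks)
    return tuple(normalized)
```

For example, ``normalize_chunks(100, (250, 100))`` returns ``((100, 100, 50), (100,))``: one tuple of chunk sizes per dimension, which is the shape of the value dask expects.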
1 change: 1 addition & 0 deletions doc/internals/zarr-encoding-spec.rst
@@ -1,3 +1,4 @@
.. currentmodule:: xarray

.. _zarr_encoding:

2 changes: 1 addition & 1 deletion doc/user-guide/computation.rst
@@ -451,7 +451,7 @@ Fitting arbitrary functions
===========================

Xarray objects also provide an interface for fitting more complex functions using
:py:meth:`scipy.optimize.curve_fit`. :py:meth:`~xarray.DataArray.curvefit` accepts
:py:func:`scipy.optimize.curve_fit`. :py:meth:`~xarray.DataArray.curvefit` accepts
user-defined functions and can fit along multiple coordinates.
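A standalone sketch of the underlying :py:func:`scipy.optimize.curve_fit` call may help here; the ``exponential`` model and the synthetic, noise-free data are illustrative only, chosen so the fit recovers the generating parameters exactly.

```python
import numpy as np
from scipy.optimize import curve_fit


def exponential(x, a, b):
    # model f(x) = a * exp(b * x); a and b are the fittable parameters
    return a * np.exp(b * x)


x = np.linspace(0, 1, 20)
y = exponential(x, 3.0, 0.5)  # synthetic data from known parameters

# p0 gives initial guesses for (a, b), analogous to curvefit's `p0` dict
popt, pcov = curve_fit(exponential, x, y, p0=[1.0, 1.0])
```

``popt`` comes back close to ``[3.0, 0.5]``; :py:meth:`~xarray.DataArray.curvefit` performs this same fit per coordinate while handling broadcasting and labels.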

For example, we can fit a relationship between two ``DataArray`` objects, maintaining
2 changes: 2 additions & 0 deletions doc/user-guide/weather-climate.rst
@@ -1,3 +1,5 @@
.. currentmodule:: xarray

.. _weather-climate:

Weather and climate data
15 changes: 7 additions & 8 deletions doc/whats-new.rst
@@ -37,7 +37,7 @@ New Features
- Many of the arguments for the :py:attr:`DataArray.str` methods now support
providing an array-like input. In this case, the array provided to the
arguments is broadcast against the original array and applied elementwise.
- :py:attr:`DataArray.str` now supports `+`, `*`, and `%` operators. These
- :py:attr:`DataArray.str` now supports ``+``, ``*``, and ``%`` operators. These
behave the same as they do for :py:class:`str`, except that they follow
array broadcasting rules.
- A large number of new :py:attr:`DataArray.str` methods were implemented,
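As a quick illustration of the operator support described in that entry (assuming an xarray version that includes this feature, i.e. the release this changelog covers):

```python
import xarray as xr

da = xr.DataArray(["a", "b"])

suffixed = da.str + "_x"  # elementwise concatenation, like str + str
repeated = da.str * 2     # elementwise repetition, like str * int
```

``suffixed`` holds ``["a_x", "b_x"]`` and ``repeated`` holds ``["aa", "bb"]``, following the usual array broadcasting rules for the right-hand operand.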
@@ -212,10 +212,10 @@ New Features
By `Justus Magin <https://github.com/keewis>`_.
- Allow installing from git archives (:pull:`4897`).
By `Justus Magin <https://github.com/keewis>`_.
- :py:class:`DataArrayCoarsen` and :py:class:`DatasetCoarsen` now implement a
``reduce`` method, enabling coarsening operations with custom reduction
functions (:issue:`3741`, :pull:`4939`). By `Spencer Clark
<https://github.com/spencerkclark>`_.
- :py:class:`~core.rolling.DataArrayCoarsen` and :py:class:`~core.rolling.DatasetCoarsen`
now implement a ``reduce`` method, enabling coarsening operations with custom
reduction functions (:issue:`3741`, :pull:`4939`).
By `Spencer Clark <https://github.com/spencerkclark>`_.
- Most rolling operations use significantly less memory. (:issue:`4325`).
By `Deepak Cherian <https://github.com/dcherian>`_.
- Add :py:meth:`Dataset.drop_isel` and :py:meth:`DataArray.drop_isel`
@@ -234,9 +234,8 @@ New Features

Bug fixes
~~~~~~~~~
- Use specific type checks in
:py:func:`~xarray.core.variable.as_compatible_data` instead of blanket
access to ``values`` attribute (:issue:`2097`)
- Use specific type checks in ``xarray.core.variable.as_compatible_data`` instead of
blanket access to ``values`` attribute (:issue:`2097`)
By `Yunus Sevinchan <https://github.com/blsqr>`_.
- :py:meth:`DataArray.resample` and :py:meth:`Dataset.resample` do not trigger
computations anymore if :py:meth:`Dataset.weighted` or
12 changes: 6 additions & 6 deletions xarray/core/dataarray.py
@@ -4418,7 +4418,7 @@ def curvefit(
Parameters
----------
coords : DataArray, str or sequence of DataArray, str
coords : hashable, DataArray, or sequence of DataArray or hashable
Independent coordinate(s) over which to perform the curve fitting. Must share
at least one dimension with the calling object. When fitting multi-dimensional
functions, supply `coords` as a sequence in the same order as arguments in
@@ -4429,27 +4429,27 @@
array of length `len(x)`. `params` are the fittable parameters which are optimized
by scipy curve_fit. `x` can also be specified as a sequence containing multiple
coordinates, e.g. `f((x0, x1), *params)`.
reduce_dims : str or sequence of str
reduce_dims : hashable or sequence of hashable
Additional dimension(s) over which to aggregate while fitting. For example,
calling `ds.curvefit(coords='time', reduce_dims=['lat', 'lon'], ...)` will
aggregate all lat and lon points and fit the specified function along the
time dimension.
skipna : bool, optional
Whether to skip missing values when fitting. Default is True.
p0 : dictionary, optional
p0 : dict-like, optional
Optional dictionary of parameter names to initial guesses passed to the
`curve_fit` `p0` arg. If none or only some parameters are passed, the rest will
be assigned initial values following the default scipy behavior.
bounds : dictionary, optional
bounds : dict-like, optional
Optional dictionary of parameter names to bounding values passed to the
`curve_fit` `bounds` arg. If none or only some parameters are passed, the rest
will be unbounded following the default scipy behavior.
param_names : seq, optional
param_names : sequence of hashable, optional
Sequence of names for the fittable parameters of `func`. If not supplied,
this will be automatically determined by arguments of `func`. `param_names`
should be manually supplied when fitting a function that takes a variable
number of parameters.
kwargs : dictionary
**kwargs : optional
Additional keyword arguments passed to scipy curve_fit.
Returns
20 changes: 11 additions & 9 deletions xarray/core/dataset.py
@@ -5513,9 +5513,11 @@ def diff(self, dim, n=1, label="upper"):
-------
difference : same type as caller
The n-th order finite difference of this object.
.. note::
`n` matches numpy's behavior and is different from pandas' first
argument named `periods`.
Notes
-----
`n` matches numpy's behavior and is different from pandas' first argument named
`periods`.
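A quick numpy illustration of what this note means: ``n`` is the number of times differencing is applied, whereas pandas' ``periods`` is the shift distance used for a single difference.

```python
import numpy as np

values = np.array([5, 2, 9, 4])

first_order = np.diff(values, n=1)   # consecutive differences
second_order = np.diff(values, n=2)  # differencing applied twice
```

``first_order`` is ``[-3, 7, -5]`` and ``second_order`` is ``[10, -12]``; ``Dataset.diff`` follows this numpy convention.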
Examples
--------
@@ -7137,7 +7139,7 @@ def curvefit(
Parameters
----------
coords : DataArray, str or sequence of DataArray, str
coords : hashable, DataArray, or sequence of hashable or DataArray
Independent coordinate(s) over which to perform the curve fitting. Must share
at least one dimension with the calling object. When fitting multi-dimensional
functions, supply `coords` as a sequence in the same order as arguments in
@@ -7148,27 +7150,27 @@
array of length `len(x)`. `params` are the fittable parameters which are optimized
by scipy curve_fit. `x` can also be specified as a sequence containing multiple
coordinates, e.g. `f((x0, x1), *params)`.
reduce_dims : str or sequence of str
reduce_dims : hashable or sequence of hashable
Additional dimension(s) over which to aggregate while fitting. For example,
calling `ds.curvefit(coords='time', reduce_dims=['lat', 'lon'], ...)` will
aggregate all lat and lon points and fit the specified function along the
time dimension.
skipna : bool, optional
Whether to skip missing values when fitting. Default is True.
p0 : dictionary, optional
p0 : dict-like, optional
Optional dictionary of parameter names to initial guesses passed to the
`curve_fit` `p0` arg. If none or only some parameters are passed, the rest will
be assigned initial values following the default scipy behavior.
bounds : dictionary, optional
bounds : dict-like, optional
Optional dictionary of parameter names to bounding values passed to the
`curve_fit` `bounds` arg. If none or only some parameters are passed, the rest
will be unbounded following the default scipy behavior.
param_names : seq, optional
param_names : sequence of hashable, optional
Sequence of names for the fittable parameters of `func`. If not supplied,
this will be automatically determined by arguments of `func`. `param_names`
should be manually supplied when fitting a function that takes a variable
number of parameters.
kwargs : dictionary
**kwargs : optional
Additional keyword arguments passed to scipy curve_fit.
Returns
