Hooks for custom attribute handling in xarray operations #988

Open · shoyer opened this issue Aug 27, 2016 · 24 comments
Labels: topic-metadata (Relating to the handling of metadata, i.e. attrs and encoding)

shoyer (Member) commented Aug 27, 2016

Over in #964, I am working on a rewrite/unification of the guts of xarray's logic for computation with labelled data. The goal is to get all of xarray's internal logic for working with labelled data going through a minimal set of flexible functions which we can also expose as part of the API.

Because we will finally have all (or at least nearly all) xarray operations using the same code path, I think it will become feasible to open up hooks that allow extensions to customize how xarray handles metadata.

Two obvious use cases here are units (#525) and automatic maintenance of metadata (e.g., cell_methods or history fields). Both of these are out of scope for xarray itself, mostly because the logic tends to be domain specific. This could also subsume options like the existing keep_attrs on many operations.
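For reference, the existing per-operation opt-in looks like this on reduction methods:

# today's per-call opt-in for attribute propagation
result = ds.mean(dim='time', keep_attrs=True)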

I like the idea of supporting something like NumPy's __array_wrap__ to allow third-party code to finalize xarray objects in some way before they are returned. However, it's not obvious to me what the right design is.

  • Should we look up a custom attribute on subclasses, like __array_wrap__ (or __numpy_ufunc__) in NumPy, or should we have a system for registering hooks (e.g., unilaterally or with a context manager and xarray.set_options) that are then checked on all xarray objects? I am inclined toward the latter, even though it's a little slower, just because it will be simpler and easier to get right.
  • Should these methods be able to control the full result objects, or only set attrs and/or name?
  • To be useful, do we need to allow extensions to take control of the full operation, to support things like automatic unit conversion? This would suggest something closer to __numpy_ufunc__, which is a little more ambitious than what I had in mind here.

Feedback would be greatly appreciated.

CC @darothen @rabernat @jhamman @pwolfram

shoyer (Member, Author) commented Aug 28, 2016

Let me give concrete examples of what this interface could look like.

To implement units:

from typing import List, Optional  # optional Python 3.5 type annotations

@xarray.register_ufunc_variables_attrs_handler
def propagate_units(results: List[xarray.Variable],
                    context: xarray.UFuncContext) -> Optional[List[dict]]:
    if context.func.__name__ in ['add', 'sub']:
        units_set = set(getattr(arg, 'attrs', {}).get('units') for arg in context.args)
        if len(units_set) > 1:
            raise ValueError('not all input units the same: %r' % units_set)
        units, = units_set
        return [{'units': units}]
    else:
        return [{}] * len(results)
        # or equivalently, don't return anything at all

Or to (partially) handle cell_methods:

@xarray.register_ufunc_variables_attrs_handler
def add_cell_methods(results, context):
    if context.func.__name__ in ['mean', 'median', 'sum', 'min', 'max', 'std']:
        dims = set(context.args[0].dims) - set(results[0].dims)
        cell_methods = ': '.join(dims) + ': ' + context.func.__name__
        return [{'cell_methods': cell_methods}]

Or to implement keep_attrs=True if a function only has one input:

@xarray.register_ufunc_variables_attrs_handler
def always_keep_attrs(results, context):
    if len(context.args) == 1:
        return [context.args[0].attrs] * len(results)

Every time xarray does an operation, we would call all of these registered ufunc_variables_attrs_handlers to get a list of attrs dicts to add to each resulting Variable. attrs on the resulting object (or objects, if the ufunc has multiple outputs) would be accumulated by calling the handlers in arbitrary order and merging the resulting dicts. Repeated keys in the attrs dicts returned by different handlers would result in an error.
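Concretely, the merging step could look something like this sketch (the function name and exact signature are hypothetical):

def _merge_handler_attrs(handlers, results, context):
    # Call every registered handler and accumulate the attrs dicts they
    # return, one merged dict per result Variable. A key returned by two
    # different handlers is ambiguous, so we raise rather than guess.
    merged = [{} for _ in results]
    for handler in handlers:
        out = handler(results, context)
        if not out:
            continue  # handler returned None or [] -> nothing to add
        for target, attrs in zip(merged, out):
            for key, value in attrs.items():
                if key in target:
                    raise ValueError('handlers returned conflicting values '
                                     'for attr %r' % key)
                target[key] = value
    return merged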

xarray.UFuncContext itself would be a simple struct-like class with a few attributes:

  • func: the function being applied. Typically from NumPy or dask.array, but it could also be an arbitrary callable if a user calls xarray.apply_ufunc directly.
  • args: positional arguments passed into the function. These may be xarray Variable objects, numpy.ndarray objects, or scalars.
  • kwargs: additional dict of keyword arguments.

Similarly, we would have register_ufunc_dataset_attrs_handler for updating Dataset attrs.
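To make the Dataset-level hook concrete, a handler under that (still hypothetical) decorator might look like the following sketch; the CF-style history format is just illustrative:

import datetime

import xarray

@xarray.register_ufunc_dataset_attrs_handler
def append_history(results, context):
    # record the applied function with a timestamp, one entry per result
    stamp = datetime.datetime.utcnow().isoformat()
    entry = '%s: %s' % (stamp, context.func.__name__)
    return [{'history': entry} for _ in results]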

The downside of this approach is that, unlike the way NumPy handles things, it doesn't handle conflicting implementations well. If you try to use two different libraries that register their own global attribute handlers instead of using the context manager (e.g., two different units implementations), things will break, even if their code paths never actually touch.

So, as an alternative to the registration system, we could support/encourage using a context manager, e.g.,

with xarray.ufunc_variables_attrs_handlers([always_keep_attrs, add_cell_methods]):
    # either augment or ignore other attrs handlers, possibly depending
    # on the choice of a keyword argument to ufunc_variables_attrs_handlers
    result = ds.mean()

It's kind of verbose, but certainly useful for libraries that want to be cautious about breaking other code. In general, it's poor behavior for libraries to unilaterally change unrelated code without an explicit opt-in. So perhaps the best approach is to encourage users to always use a context manager, e.g.,

import contextlib

@contextlib.contextmanager
def my_attrs_context():
    with xarray.ufunc_variables_attrs_handlers(
            [always_keep_attrs, add_cell_methods, ...]):
        yield

with my_attrs_context():
    result = ds.mean() - 0.5 * (ds.max() - ds.min())

So maybe a subclass based implementation (with a custom attribute like __xarray_attrs_handler__) is the cleanest way to handle this, after all.

darothen commented

I definitely see the logic of encouraging users to use a context manager, and from the perspective of someone building a third-party library on top of xarray it would be fine. However, from the perspective of an end user (for example, a scientist) crunching numbers and analyzing data with xarray simply as a convenience library, this produces much too obfuscated code: a standard library import (contextlib, which isn't something many scientific coders regularly use or necessarily know about) and a lot of boilerplate "enabling" the extra features they want in their calculation.

I think your earlier proposal of an xarray.set_options interface is a cleaner and simpler way forward, even if it does have thorns. Do you have any estimate of the performance penalty that checking hooks on all xarray objects would incur?
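For illustration, such an interface might be spelled like this (purely hypothetical; set_options does not accept such an argument today):

import xarray

# enable the handlers from above globally, e.g. for an interactive session
xarray.set_options(attrs_handlers=[always_keep_attrs, add_cell_methods])
result = ds.mean()  # handlers now apply to every xarray operation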

shoyer (Member, Author) commented Aug 29, 2016

I agree that end users are likely to set this flag unilaterally, especially for interactive use. That's fine. This could even be OK in a higher-level library, though I would encourage requiring an explicit opt-in in application code.

One thing to consider is whether or not to allow multiple attribute handlers to be registered simultaneously. I kind of like a set_options interface that requires all handlers to be registered at once (as opposed to adding handlers incrementally), because that ensures conflicts cannot arise inadvertently.

Either way, I don't think the performance penalty here would be significant in most cases, given how much of Python's dynamic nature xarray already uses.

robintw (Contributor) commented Sep 1, 2016

I also prefer an approach that doesn't use context managers: I agree with @darothen's comments about the issues with their use.

Regardless of the exact implementation, I am strongly in favour of this functionality being added to xarray - I can already think of a number of very useful ways I could use it!

pwolfram (Contributor) commented Sep 1, 2016

+1 on requiring attribute handlers to be registered at a single location because this will make for cleaner code long term.

shoyer (Member, Author) commented Sep 1, 2016

So I guess set_options it is, then, with a big warning in the docs discouraging library authors from setting it unilaterally.

I guess we can also start with the attrs-only hooks for now and add the others later if/as necessary.

shoyer (Member, Author) commented Feb 17, 2017

Some related discussion that may be of interest to participants here is going on over in #1271.

gerritholl (Contributor) commented

I would say using the units attribute is the most natural way to go. It could be optional and then built on top of pint, which would make it rather easy to implement:

# ureg is a pint unit registry
y = a / b
# note: this stores a pint quantity, which would need to be stringified
# before serialization
y.attrs["units"] = ureg(a.attrs["units"]) / ureg(b.attrs["units"])

which, if I understand the codebase correctly, could be added to DataArray._binary_op. I am not sure if it is similarly easy for ufuncs; is that what __numpy_ufunc__ would be for?

gerritholl (Contributor) commented Feb 23, 2017

Apparently __numpy_ufunc__ is too new for xarray, but it would appear that adding the right code to __array_wrap__ should work, i.e., if a units attribute is present and units are enabled through pint, evaluate something like new_var.attrs["units"] = context[0](1 * ureg(self.attrs["units"])).u.

shoyer (Member, Author) commented Feb 23, 2017

__numpy_ufunc__ hasn't been implemented yet in a released version of NumPy, and when it lands it will probably be renamed __array_ufunc__ (numpy/numpy#5986). See recent discussion on this.

We currently have the binary arithmetic logic in _binary_op, but xarray.core.computation.apply_ufunc has more comprehensive logic, and that's where we should extend things going forward. (The _binary_op logic should really be replaced by calls to apply_ufunc.)
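For reference, a call through that entry point (using its eventual public spelling, which may differ from the internal one at the time of writing) looks roughly like:

import numpy as np
import xarray

# apply np.add over two DataArrays, with broadcasting and alignment
result = xarray.apply_ufunc(np.add, a, b)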

gerritholl (Contributor) commented

Is it not? The documentation says it's new in numpy 1.11 and we're at 1.12 now.

I tried to make a small units-aware subclass of DataArray for myself. I managed to get the right behaviour for ufuncs (I think), but somehow my subclassed _binary_op is not getting called. I guess there is some logic somewhere that means replacing _binary_op in a subclass doesn't work (see below). But overall, how would you feel about an optional dependency on pint with a thin layer of code in the right place?

import xarray
# assumption: ureg is a pint.UnitRegistry instance defined elsewhere

class UnitsAwareDataArray(xarray.DataArray):
    """Like xarray.DataArray, but transfers units
    """

    def __array_wrap__(self, obj, context=None):
        new_var = super().__array_wrap__(obj, context)
        if self.attrs.get("units"):
            new_var.attrs["units"] = context[0](ureg(self.attrs.get("units"))).u
        return new_var

    @staticmethod
    def _binary_op(f, reflexive=False, join=None, **ignored_kwargs):
        # NB: http://stackoverflow.com/a/26807879/974555
        x = super(UnitsAwareDataArray, UnitsAwareDataArray)._binary_op(
            f, reflexive, join, **ignored_kwargs)
        # do stuff
        return x

shoyer (Member, Author) commented Feb 23, 2017

Is it not? The documentation says it's new in numpy 1.11 and we're at 1.12 now.

Definitely not, I'm afraid. It's gone back and forth several times on master but hasn't landed yet.

gerritholl (Contributor) commented

I wrote a small recipe that appears to contain the basic functionality I'm looking for. There are plenty of caveats, but it could be a start, if such an approach is deemed desirable at all.

from common import ureg  # or ureg = pint.UnitRegistry()

import operator
import xarray
class UnitsAwareDataArray(xarray.DataArray):
    """Like xarray.DataArray, but transfers units
    """

    def __array_wrap__(self, obj, context=None):
        new_var = super().__array_wrap__(obj, context)
        if self.attrs.get("units"):
            new_var.attrs["units"] = context[0](ureg(self.attrs.get("units"))).u
        return new_var

    def _apply_binary_op_to_units(self, func, other, x):
        if self.attrs.get("units"):
            x.attrs["units"] = func(ureg.Quantity(1, self.attrs["units"]),
                                    ureg.Quantity(1, getattr(other, "units", "1"))).u
        return x

    # pow is different because resulting unit depends on argument, not on
    # unit of argument (which must be unitless)
    def __pow__(self, other):
        x = super().__pow__(other)
        if self.attrs.get("units"):
            x.attrs["units"] = pow(ureg.Quantity(1, self.attrs["units"]),
                                   ureg.Quantity(other, getattr(other, "units", "1"))).u
        return x


for tp in ("add", "sub", "mul", "matmul", "truediv", "floordiv", "mod",
    "divmod"):
    meth = "__{:s}__".format(tp)
    def func(self, other, meth=meth, tp=tp):
        x = getattr(super(UnitsAwareDataArray, self), meth)(other)
        return self._apply_binary_op_to_units(getattr(operator, tp), other, x)
    func.__name__ = meth
    print(func, id(func))
    setattr(UnitsAwareDataArray, meth, func)
del func
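A minimal usage sketch, assuming the class and ureg above are in scope:

import numpy as np

a = UnitsAwareDataArray(np.arange(3.0), dims=["x"], attrs={"units": "m"})
b = UnitsAwareDataArray(np.ones(3), dims=["x"], attrs={"units": "s"})
speed = a / b
print(speed.attrs["units"])  # meter / second (a pint unit, not a string)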

lamorton commented Mar 1, 2017

@gerritholl Interesting! The difficulty I am seeing with this approach is that the units apply only to the main data array, and not the coordinates. In a scientific application, the coordinates are generally physical quantities with units as well. If we want xarray with units to be really useful for scientific computation, we need to have the coordinate arrays be unitful 'quantities' too, rather than tacking the units on as an attribute of xarray.DataArray. I tinkered with making the 'units' attribute into a dictionary, with units for each coordinate (and for the data) as key-value pairs, but it is very cumbersome and goes against my philosophy (for instance, extracting a coordinate from a DataArray leaves it without units).

gerritholl (Contributor) commented

Good point. I didn't think of that; my coordinates happen to be either time or unitless, I think. How common is it, though, that the full power of a unit library is needed for coordinates? I suppose it arises with indexing, i.e. the ability to write da.sel[x=1.5 km] = value (to borrow PEP 472 syntax ;-), more so than with operations between different data arrays. With a Dataset, the coordinates would correspond to variables with their own attributes, would they not? (How else would a CF-compliant NetCDF file store units for coordinates?) So it would only require a slight expansion of the DataArray class to carry along attributes on coordinates.

When it's a bit more polished I intend to publish it somewhere, but currently several things are missing (.to(...), __rsub__, __rmul__ and friends, unit tests, some other things). I currently don't have time to add features I don't need myself (such as units on coordinates) though.

lamorton commented Mar 2, 2017

@gerritholl In my line of work we often deal with 2+1- or 3+1-dimensional datasets (space + time). I have been bitten when I expected space in meters but it was in centimeters, or time in seconds but it was in milliseconds. Also, I would like to improve the plotting functionality so that publication-quality plots can be made directly, by automatically including units in the axis labels (and, while I'm wishing for a pony, there could be pretty-printing versions of coordinate names, i.e. LaTeX symbols or something).

gerritholl (Contributor) commented

We do often deal with those in my line of work as well; I just happen not to right now. But time is the one thing that already carries units, doesn't it? One can convert between various datetime64 objects, and adding, subtracting, and dividing timedelta64 with different units mostly works as expected (except integer division; and I haven't tried indexing with timedelta64).

But I take your point about units on coordinates, and I still like the idea of provisionally adding such functionality on top of an optional dependency on pint, which already has the ability to write out siunitx LaTeX code that could then be incorporated into plotting tools (I haven't grasped xarray's plotting tools well enough yet to know how easy or difficult that part would be, though). I don't see custom dtypes with unit incorporation coming to numpy any time soon, and I'm not even sure it would be the right way to go (any real dtype can have any unit; but here is not the right place to discuss that).

shoyer (Member, Author) commented Mar 6, 2017

There's some chance that __numpy_ufunc__ (or more likely __array_ufunc__) will finally arrive in time for the next release of NumPy, which could make it easier to handle this overloading (e.g., to allow for wrapping non-NumPy arrays inside xarray objects).

In general, this is a pretty tough design problem, which explains why it hasn't been solved yet :). But I was pretty happy with the way our __numpy_ufunc__ discussions were going.

Speaking of PEP 472, if someone has energy to push on that, it would be really awesome to see that happen.

peterkamatej commented

Hi, I would really appreciate having some simple way to store persistent attributes in xarray objects. One typical use case in my interactive work is that I have functions that take a DataArray or Dataset object as their only input, perform some processing on the data, and return an updated Dataset. Some additional information needed for this processing, such as an integer identifier of the experiment the data come from, is stored as attributes, so that when I have multiple instances of the same kind of object, I do not mix up data from different experiments.

dopplershift (Contributor) commented

@shoyer I know elsewhere you said you weren't sure about this idea any more, but personally I'd like to push forward on it. Do you have problems with this approach that we need to resolve? Any chance you have some preliminary code?

I think this is the right way to solve the unit issue in xarray, since at its core unit handling is mostly a metadata operation.

shoyer (Member, Author) commented Aug 16, 2018

__array_ufunc__ is now available in recent NumPy releases, and recently, I've been pushing on a NumPy enhancement proposal for __array_function__ (which is near final approval now): http://www.numpy.org/neps/nep-0018-array-function-protocol.html

I think these overloads are a much more maintainable way to add features like unit handling into xarray, as outlined in our development roadmap. It's not a complete system for overloading attribute handling in attrs, but I think it could meet most of what users need, and it is better than writing a separate hooks system for xarray only.

dopplershift (Contributor) commented

I see your argument, but here's my problem. In this future where things work (assuming that NEP is accepted), and I want distributed computing with dask, units, and xarray, I have: xarray wrapping a pint array wrapping a dask array. I like composition, but that level of wrapping...feels wrong to me for some reason. Is there some elegance I'm missing here? (Other than array-like things playing together.)

And then I still need hooks in xarray so that when pint does a calculation, it can update the metadata in xarray; so it feels like we're back here anyway.

shoyer (Member, Author) commented Aug 17, 2018

xarray wrapping a pint array wrapping a dask array

Yep, this is pretty much what I was thinking of.

I like composition, but that level of wrapping...feels wrong to me for some reason. Is there some elegance I'm missing here? (Other than array-like things playing together.)

The virtue of this approach vs. setting a global "attribute handler" (as suggested here) is that everything is controlled locally. For example, suppose people want to plug two separate unit systems into xarray (e.g., pint and unyt). If the unit handling is determined by the specific arrays, then libraries relying on both approaches internally can happily co-exist and even call each other.

In principle, this could be done safely with global handlers if you always know exactly when to switch back and forth, but that requires explicitly switching on handlers for even basic arithmetic. I doubt most users are going to bother, which is going to make using multiple tools that make use of this feature really hard.

The other big advantage is that you only have to write the bulk of the unit system once, e.g., to define operations on NumPy arrays.

And then I still need hooks in xarray so that when pint does a calculation, it can update the metadata in xarray; so it feels like we're back here anyway.

Rather than struggling to keep attrs up to date, I think it would be more consistent with the rest of xarray (e.g., our handling of time units) to include units explicitly in the data model.

We could still do some work on the xarray side to make this easy to use. Specifically:

  • The DataArray.units property could forward to DataArray.data.units (see the sketch after this list).
  • A DataArray.to or DataArray.convert method could call the relevant method on data and re-wrap it in a DataArray.
  • A minimal layer on top of xarray's netCDF IO could handle unit attributes by wrapping/unwrapping arrays with pint.
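As a rough sketch of the first two bullets (nothing here exists yet; it assumes .data is a pint.Quantity wrapping the raw values):

# hypothetical additions to xarray.DataArray
@property
def units(self):
    # forward to the units of the wrapped duck array (a pint.Quantity)
    return self.data.units

def to(self, new_units):
    # convert via pint and re-wrap, preserving dims, coords and attrs
    return self.copy(data=self.data.to(new_units))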

eric-wieser commented Apr 14, 2019

Speaking of PEP 472, if someone has energy to push on that, it would be really awesome to see that happen.

Looks like that ship has sailed: PEP 472 has been rejected.

TomNicholas added the topic-metadata label Apr 5, 2020