
add typing to unary and binary arithmetic operators #4904

Merged (13 commits) on Apr 14, 2021

Conversation

rhkleijn
Contributor

@rhkleijn rhkleijn commented Feb 13, 2021

This draft PR is for soliciting feedback on an approach to adding typing to unary and binary operators for xarray classes.
The approach involves generating a stub file using a simple script.

In order to untangle the various helper classes implementing arithmetic throughout the code base I have refactored the code, thereby getting rid of the injection of the operators after class definition. It now inserts the operators during construction of the subclasses leveraging Python's __init_subclass__ mechanism.
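The general mechanism can be sketched with a toy example (class and helper names here are hypothetical, not xarray's actual code): a base class's __init_subclass__ hook attaches the operator methods to each subclass at class-creation time, instead of injecting them after class definition.

```python
import operator

class ArithmeticMixin:
    """Attach operator methods to every subclass as it is defined (sketch)."""

    _ops = {"__add__": operator.add, "__sub__": operator.sub}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for name, op in cls._ops.items():
            def method(self, other, _op=op):  # default arg pins the current op
                return type(self)(_op(self.value, other.value))
            method.__name__ = name
            setattr(cls, name, method)

class Quantity(ArithmeticMixin):
    def __init__(self, value):
        self.value = value

print((Quantity(2) + Quantity(3)).value)  # prints 5
```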

For this example python file:

from typing import TYPE_CHECKING

from xarray import DataArray  # noqa: F401
from xarray import Dataset  # noqa: F401
from xarray import Variable  # noqa: F401

ds = Dataset({"v": ("x", [2])})
da = DataArray([3], dims="x")
var = Variable("x", [4])

ds00 = -ds
ds01 = ds + ds
ds02 = da - ds
ds03 = ds * da
ds04 = var / ds
ds05 = ds // var
ds06 = ds & 1
ds07 = 2 | ds
ds08 = ds == ds
ds09 = ds != da
ds10 = da <= ds
ds11 = ds > var
ds12 = var >= ds

da0 = +da
da1 = da % da
da2 = da ** var
da3 = var ^ da
da4 = da & 123
da5 = 1.23 * da
da6 = da == da
da7 = da != var
da8 = var < da

var0 = ~var
var1 = var + var
var2 = var & 1
var3 = 3 ^ var
var4 = var != var

if TYPE_CHECKING:
    reveal_locals()  # noqa: F821
for k, v in tuple(vars().items()):
    if k[0].islower():
        print(k, type(v))

the status quo gives this mypy output:

temp.py:11: error: Unsupported operand type for unary - ("Dataset")
temp.py:18: error: Unsupported operand types for | ("int" and "Dataset")
temp.py:25: error: Unsupported operand type for unary + ("DataArray")
temp.py:30: error: Unsupported operand types for * ("float" and "DataArray")
temp.py:35: error: Unsupported operand type for ~ ("Variable")
temp.py:38: error: Unsupported operand types for ^ ("int" and "Variable")
temp.py:42: note: Revealed local types are:
temp.py:42: note:     TYPE_CHECKING: builtins.bool
temp.py:42: note:     da: xarray.core.dataarray.DataArray
temp.py:42: note:     da0: Any
temp.py:42: note:     da1: Any
temp.py:42: note:     da2: Any
temp.py:42: note:     da3: Any
temp.py:42: note:     da4: Any
temp.py:42: note:     da5: builtins.float
temp.py:42: note:     da6: Any
temp.py:42: note:     da7: Any
temp.py:42: note:     da8: Any
temp.py:42: note:     ds: xarray.core.dataset.Dataset
temp.py:42: note:     ds00: Any
temp.py:42: note:     ds01: Any
temp.py:42: note:     ds02: Any
temp.py:42: note:     ds03: Any
temp.py:42: note:     ds04: Any
temp.py:42: note:     ds05: Any
temp.py:42: note:     ds06: Any
temp.py:42: note:     ds07: builtins.int
temp.py:42: note:     ds08: Any
temp.py:42: note:     ds09: Any
temp.py:42: note:     ds10: Any
temp.py:42: note:     ds11: Any
temp.py:42: note:     ds12: Any
temp.py:42: note:     var: xarray.core.variable.Variable
temp.py:42: note:     var0: Any
temp.py:42: note:     var1: Any
temp.py:42: note:     var2: Any
temp.py:42: note:     var3: builtins.int
temp.py:42: note:     var4: Any
Found 6 errors in 1 file (checked 127 source files)

With this PR the output now becomes:

temp.py:42: note: Revealed local types are:
temp.py:42: note:     da: xarray.core.dataarray.DataArray
temp.py:42: note:     da0: xarray.core.dataarray.DataArray*
temp.py:42: note:     da1: xarray.core.dataarray.DataArray*
temp.py:42: note:     da2: xarray.core.dataarray.DataArray*
temp.py:42: note:     da3: xarray.core.dataarray.DataArray*
temp.py:42: note:     da4: xarray.core.dataarray.DataArray*
temp.py:42: note:     da5: xarray.core.dataarray.DataArray*
temp.py:42: note:     da6: xarray.core.dataarray.DataArray*
temp.py:42: note:     da7: xarray.core.dataarray.DataArray*
temp.py:42: note:     da8: xarray.core.dataarray.DataArray*
temp.py:42: note:     ds: xarray.core.dataset.Dataset
temp.py:42: note:     ds00: xarray.core.dataset.Dataset*
temp.py:42: note:     ds01: xarray.core.dataset.Dataset*
temp.py:42: note:     ds02: xarray.core.dataset.Dataset*
temp.py:42: note:     ds03: xarray.core.dataset.Dataset*
temp.py:42: note:     ds04: xarray.core.dataset.Dataset*
temp.py:42: note:     ds05: xarray.core.dataset.Dataset*
temp.py:42: note:     ds06: xarray.core.dataset.Dataset*
temp.py:42: note:     ds07: xarray.core.dataset.Dataset*
temp.py:42: note:     ds08: xarray.core.dataset.Dataset*
temp.py:42: note:     ds09: xarray.core.dataset.Dataset*
temp.py:42: note:     ds10: xarray.core.dataset.Dataset*
temp.py:42: note:     ds11: xarray.core.dataset.Dataset*
temp.py:42: note:     ds12: xarray.core.dataset.Dataset*
temp.py:42: note:     var: xarray.core.variable.Variable
temp.py:42: note:     var0: xarray.core.variable.Variable*
temp.py:42: note:     var1: xarray.core.variable.Variable*
temp.py:42: note:     var2: xarray.core.variable.Variable*
temp.py:42: note:     var3: xarray.core.variable.Variable*
temp.py:42: note:     var4: xarray.core.variable.Variable*

@max-sixty
Collaborator

Thanks a lot @rhkleijn , this is very impressive! The results look great. The __init_subclass__ approach seems dominant over our existing approach.

Did you ever consider replacing the injections with this macro-like approach to writing out the python methods, rather than using the macro to generate the stub files? (I don't bring this up with a view that it's better, but the injections have always seemed awkward)

Is there anything we should be aware of around the interactions between stub files and inline definitions?

Others will have useful thoughts as well, so I'd be interested to hear from them too.

@mathause
Collaborator

Thanks for the PR & your work. I'll need some time to digest that all... One thing I certainly would like to fix before merging this is #4881 (and merge #4878). I'd also be interested in your motivation for the stub files. (It's just that we don't have any of these so far).

I am sure you saw this comment:

TODO(shoyer): rewrite this module, making use of xarray.core.computation,
NumPy's __array_ufunc__ and mixin classes instead of the unintuitive "inject"
functions.

so there would be an argument to create mixin-classes*. Another motivation would be that currently (with the inject approach) mypy does not know which methods DataArray actually has. This is kind of hidden by __getattr__ (see #4601 & #4616). Now I am not saying you have to do this here but also wanted to mention it.

* Well I just realised that your stub files are kind of mixin classes...

  • What are all the # type: ignore[misc] you are suppressing?
  • I was slightly confused by the T_ of T_Other etc because this is not a TypeVar (but an alias) so I'd prefer another name.
  • For T_Dataset = TypeVar("T_Dataset", bound=Dataset) etc - why do you use a bound? To allow subclassing? As we discourage subclassing we may have to discuss if we should allow this.

@rhkleijn
Contributor Author

Did you ever consider replacing the injections with this macro-like approach to writing out the python methods, rather than using the macro to generate the stub files? (I don't bring this up with a view that it's better, but the injections have always seemed awkward)

I did not consider it but that is a very promising idea. It is probably feasible to adapt the script to generate the methods themselves together with their type hints into a regular Python module instead of a stub file. I will give it a try this week.

It would have the same disadvantage of needing the script for generating the file. There are several advantages: the stub file approach, which may feel a bit odd, would no longer be necessary; it could replace the current injection logic; implementation and type hints would live in a single place; and it could simplify the class hierarchy a bit.

Is there anything we should be aware of around the interactions between stub files and inline definitions?

The mypy documentation states that if a directory contains both a .py and a .pyi file for the same module, the .pyi file takes precedence. It is my understanding that stub files work on a file-by-file basis and you cannot mix them for a single module. There should be no interactions here, as a _typed_ops.py module doesn't even exist in this case, only the _typed_ops.pyi stub file.
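For reference, a stub file uses ordinary Python syntax with ellipsis bodies; a heavily simplified sketch of what such a mixin stub could contain (illustrative names only, not the actual generated file):

```python
# _typed_ops.pyi (sketch) -- read only by type checkers, ignored at runtime.
# Heavily simplified; the real generated stub covers far more operators.
from typing import TypeVar, Union

T_Dataset = TypeVar("T_Dataset", bound="Dataset")

class DatasetOpsMixin:
    def __neg__(self: T_Dataset) -> T_Dataset: ...
    def __add__(self: T_Dataset, other: Union[int, float]) -> T_Dataset: ...
```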

@rhkleijn
Contributor Author

Thanks for the PR & your work. I'll need some time to digest that all... One thing I certainly would like to fix before merging this is #4881 (and merge #4878).

I agree. Both PRs you mention would be nice to see merged. I was still using numpy 1.19.

I'd also be interested in your motivation for the stub files. (It's just that we don't have any of these so far).

I usually prefer inline type hints. The motivation for stub files was mostly because all other approaches I tried were not sufficient, see my comment in #4054. It turns out we might not even need a stub file, see my reply above to max-sixty's comments.

I am sure you saw this comment: ... so there would be an argument to create mixin-classes*. Another motivation would be that currently (with the inject approach) mypy does not know which methods DataArray actually has. This is kind of hidden by __getattr__ (see #4601 & #4616). Now I am not saying you have to do this here but also wanted to mention it.

  • Well I just realised that your stub files are kind of mixin classes...

That is correct. It can currently be considered a mixin class during type checking and a no-op at runtime. If I can get max-sixty's suggestion to work, then we would quite literally be following up on that TODO by using mixin classes instead of the unintuitive "inject" functions.

What are all the # type: ignore[misc] you are suppressing?

Those are mypy warnings for overlapping overload signatures with incompatible return types. In practice this is not a problem, as the first matching overload gets picked. One reason for the overlap is that with numpy <= 1.20, numpy.ndarray is typed as Any, which makes it overlap with other overload signatures. Another is that mypy checks the consistency of binops against their reflexive counterparts.
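The classic case mypy flags is two overloads whose argument types overlap (an int argument also matches float) while their return types differ; a small illustration of the suppression (a hypothetical function, not code from the PR, and mypy's exact message may vary):

```python
from typing import overload

@overload
def scale(x: int) -> str: ...
@overload
def scale(x: float) -> int: ...  # type: ignore[misc]  # overlaps: int matches float too
def scale(x):
    # runtime implementation backing both overloads
    return str(x) if isinstance(x, int) else int(x)

print(scale(2))    # prints 2 (the string "2")
print(scale(2.5))  # prints 2 (the int 2)
```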

I was slightly confused by the T_ of T_Other etc because this is not a TypeVar (but an alias) so I'd prefer another name.

Agreed. What name would work in this case? Something like OtherType or just Other (or ScalarOrArray)? Anything else?

For T_Dataset = TypeVar("T_Dataset", bound=Dataset) etc - why do you use a bound? To allow subclassing? As we discourage subclassing we may have to discuss if we should allow this.

I think it would be nice to allow some simple kinds of subclassing (some (many?) methods already preserve the subclass in the return value but the type hint mentions the base class as return type). The particular use case I have in mind involves having a custom Accessor registered with xr.register_dataarray_accessor. Currently I have found no way to add meaningful type hints for these dynamically added attributes on the DataArray or Dataset classes, and I always wish I had signature information and type hints available when working on code using the accessor. One way this could be achieved is by using a simple wrapper class during type checking which includes a type declaration to signal both the existence and the type of the accessor, e.g. something along the lines of

from typing import TYPE_CHECKING

import xarray as xr

if TYPE_CHECKING:
    class Dataset(xr.Dataset):
        myaccessor: MyAccessor  # MyAccessor: the accessor class, defined elsewhere
else:
    Dataset = xr.Dataset  # note that at runtime we just work with xr.Dataset

ds = Dataset(...)
ds2 = (ds + 1).sel(x=1)  # some operations (ideally preserving the subclass!)

# In the line below I would like my IDE to be able to show me the docstrings,
# method signatures and typing information for `ds2.myaccessor` and for
# `ds2.myaccessor.method`, and the inferred type of `result`

result = ds2.myaccessor.method()

I will also bring this use case up in #3980. Until that is resolved, I am open to using either bound TypeVars or just "Dataset" and "DataArray" here.

@mathause
Collaborator

Thanks for your thought-out answer. That all sounds good to me.

overlapping overload signatures with incompatible return types

So that works now? I remember vaguely that @max-sixty (?) had an issue with this pattern once.

Agreed. What name would work in this case? Something like OtherType or just Other (or ScalarOrArray)? Anything else?

I vote for ScalarOrArray.

Currently I have found no way to add meaningful type hints for these dynamically added attributes on the DataArray or Dataset classes

Yes, I think that's a problem of the __setattr__ and __getattr__ pattern. I don't have a strong opinion on the subclassing issue & both (bound TypeVars or just "Dataset") are fine for me.

@mathause
Collaborator

mathause commented Mar 3, 2021

Just FYI #4881 and #4878 are done

@max-sixty
Collaborator

@rhkleijn I'm a bit late to respond here, but that could be really excellent IMO, I'd be really interested to see that working. Let me know if there's anything from my end that would be helpful.

@rhkleijn
Contributor Author

rhkleijn commented Mar 4, 2021

I vote for ScalarOrArray.

Done

Just FYI #4881 and #4878 are done

Locally it seems to work fine. Let's see what happens in CI now.

Let me know if there's anything from my end that would be helpful.

Not yet. I am working on generating the methods themselves on a local branch. It required changing all _binary_op, _unary_op and _inplace_binary_op methods from their current closure approach, which wraps the operator, to just exposing the inner function to be called by the generated operators. With mypy and VS Code/pylance/pyright it seems mostly fine, but I have not yet gotten around to ironing out the wrinkles with PyCharm typing. Once I have done that I'll share it and come back to ask for your feedback and opinion on that approach.

@rhkleijn
Contributor Author

rhkleijn commented Mar 13, 2021

You can now view the branch I mentioned in my previous message at master...rhkleijn:gen-ops . It would get rid of the need for dynamically generating (injecting) unary and binary operators at runtime.

Note that its script generates both the module containing the arithmetic methods and its accompanying stub file. This setup turned out to be more robust than both:

  • the approach taken in this PR so far (dynamically adding the methods at runtime and generating only a stub file). PyCharm would be happy with the typing information but it turns out that it does complain about missing methods.
  • directly attaching the typing information to the methods in the generated module. I have tried it but PyCharm would not handle the @overloads correctly in that case.

In all cases mypy and pylance work nicely.
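The idea of one script emitting a synchronized pair of files can be sketched roughly like this (simplified and with made-up names; the real generate_ops.py is much richer):

```python
# One template list drives both the implementation module and its stub,
# so the two files cannot drift apart.
UNARY_OPS = [("__neg__", "operator.neg"), ("__pos__", "operator.pos")]

impl_lines = ["import operator", "", "class VariableOpsMixin:"]
stub_lines = ["class VariableOpsMixin:"]
for name, func in UNARY_OPS:
    # implementation: plain defs calling a shared helper
    impl_lines.append(f"    def {name}(self):")
    impl_lines.append(f"        return self._unary_op({func})")
    # stub: the same method, but carrying the type information
    stub_lines.append(f"    def {name}(self: T_Variable) -> T_Variable: ...")

module_src = "\n".join(impl_lines)  # would be written to _typed_ops.py
stub_src = "\n".join(stub_lines)    # would be written to _typed_ops.pyi
```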

I prefer the rhkleijn:gen-ops branch over the rhkleijn:ops-typing branch on which this PR is based, but I am curious to hear your opinions and further guidance on it.

Also, I am not sure what to do with the rhkleijn:gen-ops branch. Should I open a new (draft) PR here from the rhkleijn:gen-ops branch or would it be better to merge it into rhkleijn:ops-typing so it would appear here in this PR?

@max-sixty
Collaborator

The gen-ops branch looks really cool @rhkleijn .

I have a few questions, but they don't need to hold back merging — if others agree — in particular on the .py and .pyi design — we could merge now and then iterate.

  • Above you mentioned:

    The mypy documentation states that if a directory contains both a .py and a .pyi file for the same module, the .pyi file takes precedence. It is my understanding that stub files work on a file by file basis and you cannot mix them for a single file.

    And here we have both a xarray/core/_typed_ops.pyi and xarray/core/_typed_ops.py. Does that mean the .pyi file is obscuring the .py file?

  • Do you know the extent to which PyCharm not picking up the @overloads is because PyCharm hasn't implemented a PEP yet, vs. an issue with the proposed code? If PyCharm is going to catch up soon, I don't think we need to engineer around it.

  • Python typing has run ahead of me recently, so forgive the uninformed question, but do you know whether it's possible to define a parent class to be generic over a type, and avoid repeating methods like this for each Mixin class?

      def __neg__(self: T_Dataset) -> T_Dataset: ...
      def __pos__(self: T_DataArray) -> T_DataArray: ...
      def __neg__(self: T_Variable) -> T_Variable: ...

    Or are there only a few of these that are common across the classes, such that it's not worth having a parent class for the Mixins?

  • Do we need to define methods for the DataArray's ops with a Dataset — e.g. here — or can we rely on Dataset's implementation? I think that's how the runtime handles it.

  • Do we need to define these NoReturn methods? I would have thought that if there weren't a valid type signature we'd get an error anyway.

Using python to template the methods works pretty well! Better than I would have expected relative to a templating library like jinja.

@rhkleijn
Contributor Author

The gen-ops branch looks really cool @rhkleijn .

thanks!

I have a few questions, but they don't need to hold back merging — if others agree — in particular on the .py and .pyi design — we could merge now and then iterate.

  • Above you mentioned:
    The mypy documentation states that if a directory contains both a .py and a .pyi file for the same module, the .pyi file takes precedence. It is my understanding that stub files work on a file by file basis and you cannot mix them for a single file.

    And here we have both a xarray/core/_typed_ops.pyi and xarray/core/_typed_ops.py. Does that mean the .pyi file is obscuring the .py file?

It takes precedence only during type checking as the source for static type information. The generated .py and .pyi files are by their construction in sync and since the .py file does not contain typing information nothing gets obscured here. At runtime the .pyi file is ignored.

  • Do you know the extent to which PyCharm not picking up the @overloads is because PyCharm hasn't implemented a PEP yet, vs. an issue with the proposed code? If PyCharm is going to catch up soon, I don't think we need to engineer around it.

There seem to be quite a few open issues there, with no indication that they are currently being actively worked on.
I am not using PyCharm myself, but as it is widely used I would also expect quite a few xarray users to be using it.

  • Python typing has run ahead of me recently, so forgive the uninformed question, but do you know whether it's possible to define a parent class to be generic over a type, and avoid repeating methods like this for each Mixin class?
      def __neg__(self: T_Dataset) -> T_Dataset: ...
      def __pos__(self: T_DataArray) -> T_DataArray: ...
      def __neg__(self: T_Variable) -> T_Variable: ...
    Or are there only a few of these that are common across the classes, such that it's not worth having a parent class for the Mixins?

For these unary ops with simple type hints it would probably work, but for most other methods with multiple overloads the typing ecosystem is not yet mature enough. In this issue I have commented on it. I hope one day we will be able to refactor this into some generic parent classes (one for unary ops, one for binary (and reflexive) ops and one for inplace ops) from which we can inherit. Using a generic parent class only for the unary ops and not for the others would make the script more complicated. For now, I'd say this somewhat low-tech solution gets the job done.
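For the simple self-returning unary ops, the shared-parent idea would look roughly like this (a sketch with hypothetical names; the _unary_op helper here is a stand-in, not xarray's actual signature):

```python
import operator
from typing import TypeVar

T = TypeVar("T", bound="UnaryOpsMixin")

class UnaryOpsMixin:
    # one shared definition: annotating `self` with a TypeVar lets type
    # checkers propagate the concrete subclass as the return type
    def __neg__(self: T) -> T:
        return self._unary_op(operator.neg)  # _unary_op supplied by subclasses

class Var(UnaryOpsMixin):
    def __init__(self, v):
        self.v = v

    def _unary_op(self, f):
        return type(self)(f(self.v))

print((-Var(3)).v)  # prints -3
```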

  • Do we need to define methods for the DataArray's ops with a Dataset — e.g. here — or can we rely on Datasets implementation? I think that's how the runtime handles it.

Unfortunately we have to define them both ways for typing purposes. This comment in generate_ops.py contains some explanation:

# Note: in some of the overloads below the return value in reality is NotImplemented,
# which cannot accurately be expressed with type hints, e.g. Literal[NotImplemented]
# or type(NotImplemented) are not allowed and NoReturn has a different meaning.
# In such cases we are lending the type checkers a hand by specifying the return type
# of the corresponding reflexive method on `other` which will be called instead.

Also here, my hope is that someday this will not be needed anymore.
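At runtime, the mechanism being approximated is Python's own reflexive dispatch: returning NotImplemented makes the interpreter try the other operand's reflected method. A toy illustration (class names hypothetical):

```python
class DataArrayLike:
    def __add__(self, other):
        if isinstance(other, DatasetLike):
            # this outcome cannot be written as a return type; Python now
            # falls back to the reflexive method, other.__radd__(self)
            return NotImplemented
        return DataArrayLike()

class DatasetLike:
    def __radd__(self, other):
        # the stub therefore annotates DataArrayLike.__add__ with the
        # return type of this reflexive method
        return DatasetLike()

print(type(DataArrayLike() + DatasetLike()).__name__)  # prints DatasetLike
```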

  • Do we need to define these NoReturn methods? I would have thought that if there weren't a valid type signature we'd get an error anyway.

The NoReturn reflects the exception raised from the GroupBy._binary_op method here.
It might help users trying to e.g. multiply a GroupBy object and a Variable. With NoReturn mypy correctly errors in that case. Without NoReturn mypy incorrectly infers the result to be Variable. It seems to be a shortcoming of mypy, since Pylance infers Any in that case (which doesn't rule out a NoReturn).
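A minimal illustration of a NoReturn annotation on an always-raising operator (toy class, not the actual GroupBy code):

```python
from typing import NoReturn

class GroupByLike:
    """Toy stand-in for a GroupBy object (not xarray's actual class)."""

    def __mul__(self, other: object) -> NoReturn:
        # NoReturn tells mypy this operation always raises, so it can flag
        # `groupby * variable` instead of silently inferring a result type
        raise TypeError("cannot multiply a GroupBy-like object")
```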

Using python to template the methods works pretty well! Better than I would have expected relative to a templating library like jinja.

I was also pleasantly surprised by it. And even black leaves the generated files unchanged.

@max-sixty
Collaborator

Thanks a lot for the full response!

I forgot to reply to one question of yours — let's use this PR, if you want to replace the current code with the gen_ops branch.

It takes precedence only during type checking as the source for static type information.

Ofc, I see, there's no type information in the .py file. Out of interest, was there a reason we don't make a single .py file with the typing info too? (tbc, no need to redo!)

Unfortunately we have to define them both ways for typing purposes.

Thanks for the comments — even better to have them in the code. I still have one question — if we remove a line like this, does mypy not recognize it can use the method on a Dataset?

 class DataArrayOpsMixin:
     def __add__(self, other: T_Dataset) -> T_Dataset: ...

With NoReturn mypy correctly errors in that case. Without NoReturn mypy incorrectly infers the result to be Variable.

That's surprising — I wonder where it's getting that information from. I would have expected that it wouldn't be able to find any compatible implementation of a GroupBy + Variable.

There seem to be quite a few open issues there, with no indication that they are currently being actively worked on.
I am not using PyCharm myself, but as it is widely used I would also expect quite a few xarray users to be using it.

I would vote not to prioritize a tool if it's not compatible — this code will last for a long time, and I expect PyCharm will catch up to the language specs soon.

For these unary ops with simple type hints if would probably work but for most other methods with multiple overloads the typing ecosystem is not yet mature enough. In this issue I have commented on it.

Great, thank you

@rhkleijn
Contributor Author

rhkleijn commented Mar 15, 2021

I forgot to reply to one question of yours — let's use this PR, if you want to replace the current code with the gen_ops branch.

done

Out of interest, was there a reason we don't make a single .py file with the typing info too? (tbc, no need to redo!)

I have tried that but PyCharm would not handle the @overloads correctly in that case.

I still have one question — if we remove a line like this, does mypy not recognize it can use the method on a Dataset?

 class DataArrayOpsMixin:
     def __add__(self, other: T_Dataset) -> T_Dataset: ...

If we remove that line, it incorrectly infers the result of da + ds to be of type DataArray. Mypy does not recognize that it could fallback to the reflexive method on Dataset. Pylance does get it right.

With NoReturn mypy correctly errors in that case. Without NoReturn mypy incorrectly infers the result to be Variable.

That's surprising — I wonder where it's getting that information from. I would have expected that it wouldn't be able to find any compatible implementation of a GroupBy + Variable.

Agreed that this is surprising behaviour. I suspect mypy might have something like an implicit internal rule to fallback to the type of the first operand of a binary operator. That would be consistent with the behaviours of both of the above cases.

There seem to be quite a few open issues there, with no indication that they are currently being actively worked on.
I am not using PyCharm myself, but as it is widely used I would also expect quite a few xarray users to be using it.

I would vote not to prioritize a tool if it's not compatible — this code will last for a long time, and I expect PyCharm will catch up to the language specs soon.

I can relate to that. On the other hand, there may also be a pragmatic reason to go the extra mile. If we don't it may result in users thinking xarray's newly added typing is to blame for unexpected type inferences and reporting issues here. When at some point in the future these tools indeed do catch up it is straightforward to adapt the script and regenerate.

Collaborator

@mathause mathause left a comment


I had another look at the PR - I think it also makes it easier to understand where the unary & binary operations are coming from. The next step will be to get rid of the other inject_* functions (not here).

I suggest you add an entry to whats-new and mark it as "Ready for review". Would be good to get another pair of eyes on it.

xarray/core/_typed_ops.pyi:

Scalar = Union[int, float, complex, bytes, str, np.generic]
ArrayLike = Union[np.ndarray, DaskArray]
ScalarOrArray = Union[Scalar, ArrayLike]
Collaborator

@mathause mathause Mar 19, 2021


The following works

da = xr.DataArray([1, 2, 3])
da + [1, 2, 3]

so I wonder if you have to add ArrayLike from np.typing here (need to add it to core/npcompat.py)

Contributor Author


Great idea! I have now added it to npcompat.py including a fallback for installation with older numpy versions. The fallback is based on code from numpy 1.20. Numpy's ArrayLike already includes Python scalars so that simplifies the type definitions we need at the top of the generated file.
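The version-tolerant import pattern described here can be sketched as follows (simplified; as noted above, the actual npcompat fallback vendors numpy 1.20's definition rather than using a crude alias):

```python
# Use numpy's ArrayLike when available (numpy >= 1.20), otherwise fall
# back to a permissive alias so the module still imports on older numpy.
try:
    from numpy.typing import ArrayLike
except ImportError:
    from typing import Any

    ArrayLike = Any  # crude fallback for illustration only
```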

xarray/core/_typed_ops.pyi
xarray/core/arithmetic.py:
@@ -121,20 +116,6 @@
None. Changed in version 0.17.0: if specified on an integer array
and skipna=True, the result will be a float array."""

_COARSEN_REDUCE_DOCSTRING_TEMPLATE = """\
Collaborator


Where did that go? Or was this superfluous?

Contributor Author


It was not used.

@dcherian dcherian requested a review from shoyer March 19, 2021 16:21
@max-sixty
Collaborator

@rhkleijn I missed your last comment, apologies.

When at some point in the future these tools indeed do catch up it is straightforward to adapt the script and regenerate.

That makes sense, and it sounds like you've checked these cases in lots of detail, thank you for all your hard work. If you want, add a comment to the code that references your comments here, so there's some record of the research you've done. (Optional ofc)

Let's answer @mathause questions, and then you should feel free to merge unless there are any objections — we may learn more from others using it — and very easy to iterate more from here.

@shoyer
Member

shoyer commented Mar 19, 2021

I'll have to take a closer look, but overall this looks amazing! Both a nice internal clean-up and a valuable feature for Xarray users.

@rhkleijn rhkleijn marked this pull request as ready for review March 21, 2021 00:12
@max-sixty
Collaborator

Thanks for having a glance @shoyer . Let us know when it's OK to merge!

@max-sixty
Collaborator

I'll merge this in the next couple of days unless I hear anything! Thanks everyone

@pep8speaks

Hello @rhkleijn! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

Line 3:1: F401 'functools' imported but unused
Line 7118:1: W293 blank line contains whitespace

@max-sixty
Collaborator

max-sixty commented Apr 4, 2021

FYI I resolved conflicts in GitHub; I may have made a mistake, tests should tell us though

@rhkleijn
Contributor Author

rhkleijn commented Apr 5, 2021

Thanks, there was one more minor merge artifact. Should be fixed now.

@max-sixty max-sixty merged commit 2896a80 into pydata:master Apr 14, 2021
@max-sixty
Collaborator

@rhkleijn thanks for the huge effort! We just discussed this on our team call and were universally impressed.

Successfully merging this pull request may close these issues.

Type checking fails for multiplication