# Awkward array backend? #4285
Just curious if anyone here has thoughts on this.

For more context: Awkward is like numpy but for arrays of very arbitrary (dynamic) structure. I don't know much yet about that library (I've just seen this SciPy 2020 presentation), but now I could imagine using xarray for dealing with labelled collections of geometrical / geospatial objects like polylines or polygons.

At this stage, any integration between xarray and awkward arrays would be something highly experimental, but I think this might be an interesting case for flexible arrays (and possibly flexible indexes) mentioned in the roadmap. There is some discussion here: scikit-hep/awkward#27.

Does anyone see any other potential use case?

cc @pydata/xarray

---
I'm linking myself here, to follow this: @jpivarski.
---

I think that xarray should offer a "compatibility test toolkit" to any numpy-like, NEP18-compatible library that wants to integrate with it:

```python
import xarray
import sparse

xarray.testing.test_nep18_module(
    sparse,
    # TODO: lambda to create an array
    # TODO: list of xfails
)
```

which would automatically expand into a comprehensive suite of tests thanks to pytest parametrize/fixture magic.
---

Copied from https://gitter.im/pangeo-data/Lobby :

I've been using Xarray with argopy recently, and the immediate value I see is the documentation of columns, which is semi-lacking in Awkward (one user has been passing this information through an Awkward tree: scikit-hep/awkward#422). I should also look into Xarray's indexing, which I've always seen as the primary difference between NumPy and Pandas; Awkward Array has no indexing, though every node has an optional Identities that could be used to track such information through Awkward manipulations—Identities would have a bijection with externally supplied indexes. They haven't been used for anything yet.

Although the elevator pitch for Xarray is "n-dimensional Pandas," it's rather different, isn't it? The contextual metadata is more extensive than anything I've seen in Pandas, and Xarray can be partitioned for out-of-core analysis: Xarray wraps Dask, unlike Dask's array collection, which wraps NumPy. I had trouble getting Pandas to wrap Awkward Array (scikit-hep/awkward#350), but maybe these won't be issues for Xarray.

One last thing (in this very rambly message): the main difficulty I think we would have is that Awkward Arrays don't have shape and dtype, since those define a rectilinear array of numbers. The data model is Datashape plus union types. There is a sense in which ndim is defined: the number of nested lists before reaching the first record, which may split into different depths for each field, but even this can be ill-defined with union types:

```python
>>> import awkward1 as ak
>>> array = ak.Array([1, 2, [3, 4, 5], [[6, 7, 8]]])
>>> array
<Array [1, 2, [3, 4, 5], [[6, 7, 8]]] type='4 * union[int64, var * union[int64, ...'>
>>> array.type
4 * union[int64, var * union[int64, var * int64]]
>>> array.ndim
-1
```

So if we wanted to have an Xarray of Awkward Arrays, we'd have to take stock of all the assumptions Xarray makes about the arrays it contains.
---

So I actually think we can do this, with some caveats. I recently found a cool dataset with ragged-like data, which has rekindled my interest in this interfacing and given me a real example to try it out with.

As far as I understand it, the main problem is that awkward arrays don't define a `shape` or a `dtype`. Conceptually though, it seems to me that many awkward arrays effectively do have both. Let's take an Awkward array that can be coerced directly to a numpy array:

```python
In [27]: rect = ak.Array([[1, 2, 3], [4, 5, 6]])
    ...: rect
Out[27]: <Array [[1, 2, 3], [4, 5, 6]] type='2 * var * int64'>

In [28]: np.array(rect)
Out[28]:
array([[1, 2, 3],
       [4, 5, 6]])
```

Here there is a clear correspondence: the first axis of the awkward array has length 2, and because in this case the second axis has a consistent length of 3, we can coerce this to a numpy array with shape `(2, 3)`.

Now imagine a "ragged" (or "jagged") array, which is like a numpy array except that the lengths along one (or more) of the axes can be variable. Awkward allows this, e.g.

```python
In [29]: ragged = ak.Array([[1, 2, 3, 100], [4, 5, 6]])
    ...: ragged
Out[29]: <Array [[1, 2, 3, 100], [4, 5, 6]] type='2 * var * int64'>
```

but a direct coercion to numpy will fail. However, we still conceptually have a "shape": it's either `(2, var)`, or `(2, 4)` if we describe a variable-length axis by its maximum length. In the second case you can still read off the dtype too.

However, awkward also allows "Union types", which basically means that one array can contain data of multiple numpy dtypes. Unfortunately this seems to completely break the numpy / xarray model, but we can completely ignore this problem if we simply say that xarray should only try to wrap awkward arrays with non-Union types. I think that's okay - a ragged-length array with a fixed dtype would still be extremely useful!

So if we want to wrap a (non-union-type) awkward array instance like `ragged`, we either have to (1) change xarray so that it no longer requires wrapped arrays to define a `shape` and `dtype`, or (2) give awkward arrays a meaningful `shape` and `dtype`.
Changing xarray (option 1) seems hard. Xarray's whole model is built assuming that wrapped arrays define a fixed `shape` and `dtype`. It would also mean a big change to xarray in order to support one unusual type of array, which goes beyond the data API standard. That breaks xarray's general design philosophy of providing a general wrapper and delegating to domain-specific array implementations / backends / etc. for specificity.
Giving awkward arrays a `shape` and `dtype` (option 2) doesn't seem as hard, at least for non-union-type awkward arrays. In fact this crude monkey-patching seems to mostly work:

```python
In [1]: from awkward import Array, num
   ...: import numpy as np

In [2]: def get_dtype(self) -> np.dtype:
   ...:     if "Union" in str(self.type):
   ...:         raise ValueError("awkward arrays with Union types can't be expressed in terms of a single numpy dtype")
   ...:
   ...:     datatype = str(self.type).split(" * ")[-1]
   ...:
   ...:     if datatype == "string":
   ...:         return np.dtype("str")
   ...:     else:
   ...:         return np.dtype(datatype)

In [3]: def get_shape(self):
   ...:     if "Union" in str(self.type):
   ...:         raise ValueError("awkward arrays with Union types can't be expressed in terms of a single shape")
   ...:
   ...:     lengths = str(self.type).split(" * ")[:-1]
   ...:
   ...:     for axis in range(self.ndim):
   ...:         if lengths[axis] == "var":
   ...:             # describe a variable-length axis by its maximum length
   ...:             lengths[axis] = np.max(num(self, axis))
   ...:         else:
   ...:             lengths[axis] = int(lengths[axis])
   ...:
   ...:     return tuple(lengths)

In [4]: def get_size(self):
   ...:     return np.prod(get_shape(self))

In [5]: setattr(Array, 'dtype', property(get_dtype))
   ...: setattr(Array, 'shape', property(get_shape))
   ...: setattr(Array, 'size', property(get_size))
```

Now if we make the same ragged array but with the monkey-patched class, we have sensible return values for `shape`, `dtype`, and `size`:
```python
In [6]: ragged = Array([[1, 2, 3, 100], [4, 5, 6]])

In [7]: import xarray as xr

In [8]: da = xr.DataArray(ragged, dims=['x', 't'])

In [17]: da
Out[17]:
<xarray.DataArray (x: 2, t: 4)>
<Array [[1, 2, 3, 100], [4, 5, 6]] type='2 * var * int64'>
Dimensions without coordinates: x, t

In [18]: da.dtype
Out[18]: dtype('int64')

In [19]: da.size
Out[19]: 8

In [20]: da.shape
Out[20]: (2, 4)
```

Promising... Let's try indexing:

```python
In [21]: da.isel(t=2)
Out[21]:
<xarray.DataArray (x: 2)>
<Array [3, 6] type='2 * int64'>
Dimensions without coordinates: x

In [22]: da.isel(t=4)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [22], in <cell line: 1>()
----> 1 da.isel(t=4)
...
File ~/miniconda3/envs/hummingbirds/lib/python3.10/site-packages/awkward/highlevel.py:991, in Array.__getitem__(self, where)
    579 """
    580 Args:
    581     where (many types supported; see below): Index of positions to
   (...)
    988     have the same dimension as the array being indexed.
    989 """
    990 if not hasattr(self, "_tracers"):
--> 991     tmp = ak._util.wrap(self.layout[where], self._behavior)
    992 else:
    993     tmp = ak._connect._jax.jax_utils._jaxtracers_getitem(self, where)

ValueError: in ListOffsetArray64 attempting to get 4, index out of range
(https://github.com/scikit-hep/awkward-1.0/blob/1.8.0/src/cpu-kernels/awkward_NumpyArray_getitem_next_at.cpp#L21)
```

That's what should happen - xarray delegates the indexing to the underlying array, which throws an error if there is a problem.

Arithmetic also seems to work:
```python
In [23]: da * 2
Out[23]:
<xarray.DataArray (x: 2, t: 4)>
<Array [[2, 4, 6, 200], [8, 10, 12]] type='2 * var * int64'>
Dimensions without coordinates: x, t
```

But we hit snags with numpy functions:

```python
In [24]: np.mean(da)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [24], in <cell line: 1>()
----> 1 np.mean(da)

File <__array_function__ internals>:180, in mean(*args, **kwargs)

File ~/miniconda3/envs/hummingbirds/lib/python3.10/site-packages/numpy/core/fromnumeric.py:3430, in mean(a, axis, dtype, out, keepdims, where)
   3428     pass
   3429 else:
-> 3430     return mean(axis=axis, dtype=dtype, out=out, **kwargs)
   3432 return _methods._mean(a, axis=axis, dtype=dtype,
   3433                       out=out, **kwargs)

File ~/Documents/Work/Code/xarray/xarray/core/_reductions.py:1478, in DataArrayReductions.mean(self, dim, skipna, keep_attrs, **kwargs)
   1403 def mean(
   1404     self,
   1405     dim: None | Hashable | Sequence[Hashable] = None,
   (...)
   1409     **kwargs: Any,
   1410 ) -> DataArray:
   1411     """
   1412     Reduce this DataArray's data by applying ``mean`` along some dimension(s).
   1413
   (...)
   1476     array(nan)
   1477     """
-> 1478     return self.reduce(
   1479         duck_array_ops.mean,
   1480         dim=dim,
   1481         skipna=skipna,
   1482         keep_attrs=keep_attrs,
   1483         **kwargs,
   1484     )

File ~/Documents/Work/Code/xarray/xarray/core/dataarray.py:2930, in DataArray.reduce(self, func, dim, axis, keep_attrs, keepdims, **kwargs)
   2887 def reduce(
   2888     self: T_DataArray,
   2889     func: Callable[..., Any],
   (...)
   2895     **kwargs: Any,
   2896 ) -> T_DataArray:
   2897     """Reduce this array by applying `func` along some dimension(s).
   2898
   2899     Parameters
   (...)
   2927         summarized data and the indicated dimension(s) removed.
   2928     """
-> 2930     var = self.variable.reduce(func, dim, axis, keep_attrs, keepdims, **kwargs)
   2931     return self._replace_maybe_drop_dims(var)

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:1854, in Variable.reduce(self, func, dim, axis, keep_attrs, keepdims, **kwargs)
   1852     data = func(self.data, axis=axis, **kwargs)
   1853 else:
-> 1854     data = func(self.data, **kwargs)
   1856 if getattr(data, "shape", ()) == self.shape:
   1857     dims = self.dims

File ~/Documents/Work/Code/xarray/xarray/core/duck_array_ops.py:579, in mean(array, axis, skipna, **kwargs)
    577     return _to_pytimedelta(mean_timedeltas, unit="us") + offset
    578 else:
--> 579     return _mean(array, axis=axis, skipna=skipna, **kwargs)

File ~/Documents/Work/Code/xarray/xarray/core/duck_array_ops.py:341, in _create_nan_agg_method.<locals>.f(values, axis, skipna, **kwargs)
    339 with warnings.catch_warnings():
    340     warnings.filterwarnings("ignore", "All-NaN slice encountered")
--> 341     return func(values, axis=axis, **kwargs)
    342 except AttributeError:
    343     if not is_duck_dask_array(values):

File <__array_function__ internals>:180, in mean(*args, **kwargs)

File ~/miniconda3/envs/hummingbirds/lib/python3.10/site-packages/awkward/highlevel.py:1434, in Array.__array_function__(self, func, types, args, kwargs)
   1417 def __array_function__(self, func, types, args, kwargs):
   1418     """
   1419     Intercepts attempts to pass this Array to those NumPy functions other
   1420     than universal functions that have an Awkward equivalent.
   (...)
   1432     See also #__array_ufunc__.
   1433     """
-> 1434     return ak._connect._numpy.array_function(func, types, args, kwargs)

File ~/miniconda3/envs/hummingbirds/lib/python3.10/site-packages/awkward/_connect/_numpy.py:43, in array_function(func, types, args, kwargs)
     41     return out
     42 else:
---> 43     return function(*args, **kwargs)

TypeError: mean() got an unexpected keyword argument 'dtype'
```

This seems fixable though. In fact I think if we made the change suggested in #6845 (@dcherian), then this alternative would already work:
```python
In [25]: import awkward as ak

In [26]: ak.mean(da)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [26], in <cell line: 1>()
----> 1 ak.mean(da)

File ~/miniconda3/envs/hummingbirds/lib/python3.10/site-packages/awkward/operations/reducers.py:971, in mean(x, weight, axis, keepdims, mask_identity)
    969 with np.errstate(invalid="ignore"):
    970     if weight is None:
--> 971         sumw = count(x, axis=axis, keepdims=keepdims, mask_identity=mask_identity)
    972         sumwx = sum(x, axis=axis, keepdims=keepdims, mask_identity=mask_identity)
    973     else:

File ~/miniconda3/envs/hummingbirds/lib/python3.10/site-packages/awkward/operations/reducers.py:79, in count(array, axis, keepdims, mask_identity)
     10 def count(array, axis=None, keepdims=False, mask_identity=False):
     11     """
     12     Args:
     13         array: Data in which to count elements.
    (...)
     77     to turn the None values into something that would be counted.
     78     """
---> 79 layout = ak.operations.convert.to_layout(
     80     array, allow_record=False, allow_other=False
     81 )
     82 if axis is None:
     84     def reduce(xs):

File ~/miniconda3/envs/hummingbirds/lib/python3.10/site-packages/awkward/operations/convert.py:1917, in to_layout(array, allow_record, allow_other, numpytype)
   1914     return from_iter([array], highlevel=False)
   1916 elif isinstance(array, Iterable):
-> 1917     return from_iter(array, highlevel=False)
   1919 elif not allow_other:
   1920     raise TypeError(
   1921         f"{array} cannot be converted into an Awkward Array"
   1922         + ak._util.exception_suffix(__file__)
   1923     )

File ~/miniconda3/envs/hummingbirds/lib/python3.10/site-packages/awkward/operations/convert.py:891, in from_iter(iterable, highlevel, behavior, allow_record, initial, resize)
    889 out = ak.layout.ArrayBuilder(initial=initial, resize=resize)
    890 for x in iterable:
--> 891     out.fromiter(x)
    892 layout = out.snapshot()
    893 return ak._util.maybe_wrap(layout, behavior, highlevel)

ValueError: cannot convert <xarray.DataArray ()>
array(1) (type DataArray) to an array element
(https://github.com/scikit-hep/awkward-1.0/blob/1.8.0/src/python/content.cpp#L974)
```

Suggestion: How about awkward offer a specialized array class which uses the same fast code underneath, but disallows Union types, follows the array API standard, and implements `shape`, `dtype`, and `size` along the lines above? Am I missing anything here? @jpivarski

tl;dr We probably could support awkward arrays, at least instances where all values have the same dtype.
---

Hi! I will be looking deeply into this when I get back from traveling (next week). Just to let you know that I saw this and I'm interested. Thanks!
---

You mentioned union arrays, but for completeness, the type system in Awkward Array has more than numbers and variable-length lists: it also has regular (fixed-size) lists, records with named (and possibly differently typed) fields, missing data (option types), and unions of different types.

You're interested in a subset of this type system, but that subset doesn't just exclude unions, it also excludes records. If you have an xarray, you don't need top-level records, since those could just be the columns of an xarray, but some data source might provide records nested within variable-length lists (very common in HEP) or other nesting. It would have to be explicitly excluded. That leaves the possibility of missing lists and missing numeric primitives. Missing lists could be turned into empty lists (Google projects like Protocol Buffers often make that equivalence) and missing numbers could be turned into NaN if you're willing to lose integer-ness.

Here's a way to determine whether an array is in that subset, converting what can be converted and raising errors otherwise:

```python
import awkward._v2 as ak
import numpy as np

def prepare(layout, continuation, **kwargs):
    if layout.is_RecordType:
        raise NotImplementedError("no records!")

    elif layout.is_UnionType:
        # a "trivial" union uses only one of its types; project it out
        if len(layout) == 0 or np.all(np.asarray(layout.tags) == layout.tags[0]):
            return layout.project(layout.tags[0]).recursively_apply(prepare)
        else:
            raise NotImplementedError("no non-trivial unions!")

    elif layout.is_OptionType:
        next = continuation()  # fully recurse
        content_type = next.content.form.type
        if isinstance(content_type, ak.types.NumpyType):
            return ak.fill_none(next, np.nan, axis=0, highlevel=False)
        elif isinstance(content_type, ak.types.ListType):
            return ak.fill_none(next, [], axis=0, highlevel=False)
        elif isinstance(content_type, ak.types.RegularType):
            return ak.fill_none(next.toListOffsetArray64(False), [], axis=0, highlevel=False)
        else:
            raise AssertionError(f"what? {content_type}")

ak.Array(array.layout.recursively_apply(prepare), behavior=array.behavior)
```

It should catch all the cases, and it doesn't rely on string-processing the type's DataShape representation. Given that you're working within that subset, it would be possible to define a `shape` and `dtype` the same way, without string-processing.

Oh, if you're replacing variable-length dimensions with the maximum length in that dimension, what about actually padding the array with ak.pad_none?

```python
ak.fill_none(ak.pad_none(array, ak.max(ak.num(array))), np.nan)
```

The above would have to be expanded to handle every variable-length axis, not just the first.
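Such an expansion could look like the following sketch (illustrative code, not from the comment above; it assumes the array has already been restricted to list-type / regular-type / numpy-type nodes, e.g. by `prepare`):

```python
import awkward._v2 as ak
import numpy as np

def pad_to_rectilinear(array):
    # Pad each variable-length axis to its maximum length, then fill the
    # resulting missing values with NaN, mirroring the one-liner above.
    for axis in range(1, array.ndim):
        max_len = int(ak.max(ak.num(array, axis=axis)))
        array = ak.fill_none(
            ak.pad_none(array, max_len, axis=axis), np.nan, axis=axis
        )
    return array
```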
---

That's very helpful, thank you!

(FWIW I find the "behavior" stuff very confusing in general, even after reading the docs page on it. I don't really understand why I can't just reimplement my monkey-patched example above by subclassing `ak.Array`.)

How would I do this without monkey-patching? All I really want (and I hazard all that most xarray users want) is to be able to import some array class from awkward that already carries the properties xarray expects.

> what about actually padding the array with ak.pad_none?

What's the benefit of doing this over just reporting the maximum length in that dimension, as the monkey-patched `shape` above does? I can see that padding might be useful in parts of xarray's internals, though.

This is exciting though @jpivarski !
---

The passing on of behavior is just to not break applications that depend on it; I did that just for correctness.

Monkey-patching will add the desired properties to the one and only `ak.Array` class, globally, for every library in the process that uses Awkward. Ragged arrays are not a specialized subset of types within Awkward Array: there is no separate class for them, just `ak.Array`, whatever the type of its contents.

More later...
---

Given that you have an array of only list-type, regular-type, and numpy-type (which the `prepare` function above guarantees), here's a way to compute a `shape` and `dtype` in a single pass:

```python
def shape_dtype(layout, lateral_context, **kwargs):
    if layout.is_RegularType:
        lateral_context["shape"].append(layout.size)
    elif layout.is_ListType:
        max_size = ak.max(ak.num(layout))
        lateral_context["shape"].append(max_size)
    elif layout.is_NumpyType:
        lateral_context["dtype"] = layout.dtype
    else:
        raise AssertionError(f"what? {layout.form.type}")

context = {"shape": [len(array)]}
array.layout.recursively_apply(
    shape_dtype, lateral_context=context, return_array=False
)
# check context for "shape" and "dtype"
```

Here's the application on an array of mixed regular and irregular lists:

```python
>>> array = ak.to_regular(ak.Array([[[[1, 2, 3], []]], [[[4], [5]], [[6, 7], [8]]]]), axis=2)
>>> print(array.type)
2 * var * 2 * var * int64
>>> context = {"shape": [len(array)]}
>>> array.layout.recursively_apply(
...     shape_dtype, lateral_context=context, return_array=False
... )
>>> context
{'shape': [2, 2, 2, 3], 'dtype': dtype('int64')}
```

(This `lateral_context` dict is shared across the whole traversal, which is what lets every level contribute to the same `shape` list.)

To answer your question about monkey-patching, I think it would be best to make a wrapper. You don't want to give all ak.Arrays a `shape` and `dtype`, since they're ill-defined for general awkward types; a wrapper can restrict itself to the subset where they make sense. Here's a start of a wrapper:
```python
class RaggedArray:
    def __init__(self, array_like):
        layout = ak.to_layout(array_like, allow_record=False, allow_other=False)
        behavior = None
        if isinstance(array_like, ak.Array):
            behavior = array_like.behavior
        self._array = ak.Array(layout.recursively_apply(prepare), behavior=behavior)

        context = {"shape": [len(self._array)]}
        self._array.layout.recursively_apply(
            shape_dtype, lateral_context=context, return_array=False
        )
        self._shape = context["shape"]
        self._dtype = context["dtype"]

    def __repr__(self):
        # this is pretty cheesy
        return "<Ragged" + repr(self._array)[1:]

    @property
    def dtype(self):
        return self._dtype

    @property
    def shape(self):
        return self._shape

    def __getitem__(self, where):
        if isinstance(where, RaggedArray):
            where = where._array
        if isinstance(where, tuple):
            where = tuple(x._array if isinstance(x, RaggedArray) else x for x in where)
        out = self._array[where]
        if isinstance(out, ak.Array):
            return RaggedArray(out)
        else:
            return out

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        inputs = [x._array if isinstance(x, RaggedArray) else x for x in inputs]
        out = self._array.__array_ufunc__(ufunc, method, *inputs, **kwargs)
        return RaggedArray(out)

    def sum(self, axis=None, keepdims=False, mask_identity=False):
        out = ak.sum(self._array, axis=axis, keepdims=keepdims, mask_identity=mask_identity)
        if isinstance(out, ak.Array):
            return RaggedArray(out)
        else:
            return out
```

It keeps an `ak.Array` inside, but only exposes the interface we choose to expose. Thus, it can act as a gatekeeper of what kinds of operations are allowed: anything that would take the data outside the ragged subset simply isn't offered.

I meant to say something earlier about why we go for full generality in types: it's because some of the things we want to do, such as ak.cartesian, require more complex types, and as soon as one function needs it, the whole space needs to be enlarged. For the first year of Awkward Array use, most users wanted it for plain ragged arrays (based on their bug-reports and questions), but after about a year, they were asking about missing values and records, too, because you eventually need them unless you intend to work within a narrow set of functions.

Union arrays are still not widely used, but they can come from some file formats. Some GeoJSON files that I looked at had longitude, latitude points in different list depths because some were points and some were polygons, disambiguated by a string label. That's not good to work with (we can't handle that in Numba, for instance), but if you select all points with some slice, put them in one array, and select all polygons with another slice, putting them in their own array, these each become trivial unions, and that's why I added the squashing of trivial unions to the `prepare` function above.
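For example, wrapping a small ragged array (a usage sketch of the class above, assuming `prepare` and `shape_dtype` from the previous comments are in scope):

```python
import numpy as np

ra = RaggedArray(ak.Array([[1.0, 2.0, 3.0], [4.0, 5.0]]))

print(ra.shape)        # [2, 3]: the ragged axis is reported by its maximum length
print(ra.dtype)        # float64
print(ra[1])           # a RaggedArray wrapping [4.0, 5.0]
print(np.sqrt(ra))     # dispatches through __array_ufunc__, comes back wrapped
print(ra.sum(axis=1))  # delegates to ak.sum, comes back wrapped
```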
---

Thanks for the huge response there @jpivarski !

> I think it would be best to make a wrapper.

This is an important point which I meant to ask about earlier: we need a class like this to live somewhere importable. Did you imagine it living within awkward itself?

> You don't want to give all ak.Arrays a `shape` and `dtype`

That makes sense. And if you subclassed `ak.Array` instead of wrapping it, then I guess you would also need to change all those methods that return new arrays, so that they return the subclass.

Thanks for the wrapping example! I think there is a bug with your `shape_dtype` function though:

```python
In [1]: from ragged import RaggedArray

In [2]: ra = RaggedArray([[1, 2, 3], [4, 5]])

In [3]: ra.ndim
Out[3]: 1

In [4]: ra.shape
Out[4]: [3]
```

(I expected `ra.ndim` to be 2 and `ra.shape` to be `[2, 3]`.)

I would really like to try testing this wrapper against xarray's expectations next.
---

It shouldn't be a subclass because it doesn't satisfy a substitution principle: since a RaggedArray deliberately forbids things that are legal for a general `ak.Array`, code expecting an `ak.Array` couldn't safely be handed a RaggedArray.

> Did you imagine it living within awkward itself?

Oh....... I hadn't been thinking that RaggedArray is something we'd put in the general Awkward Array library. I was thinking of it only as a way to define "the subset of Awkward Arrays that xarray uses," which would live in xarray. I don't want to introduce another level of type-specificity to the system, since that would make things harder to understand. (Imagine reading the docs and it says, "You can apply this function to ak.Array, but not to ak.RaggedArray." Or "this is an ak.Array that happens to be ragged, but not an ak.RaggedArray.")

So let me rethink your original idea of adding `shape` and `dtype` to `ak.Array` itself. The sticking point is that a NumPy-style `shape` is a tuple of integers, and for a variable-length dimension there is no single true integer; reporting the maximum is a choice, not a fact about the data. If that point is negotiable, I could introduce an attribute that reports variable-length dimensions by some such convention.

Or maybe the best way to present it is with a function rather than a property. Anyway, you can see why I'm loath to add a property to ak.Array that's just named "shape": it would assert more regularity than the array really has. But if I'm providing it as an extra function, or as a trio of properties named something other than `shape`, `dtype`, and `size`, then xarray wouldn't find them under the names it expects anyway.

So in the end, I just came back to where we started: xarray would own the RaggedArray wrapper. Or it could be a third package, as awkward-pandas is to awkward and pandas.

> I think there is a bug with your `shape_dtype` function though

No, I initialized it incorrectly: it should have started as

```python
context = {"shape": [len(array)]}
```

and then recursed from there. My previous example also had the wrong output, but I didn't count square brackets carefully enough to have caught it. (By the way, not copying the context is why it's called "lateral"; if a copied dict is needed, it's "depth_context". I just went back and checked: yes, they're being handled appropriately.) I fixed the code that I wrote in the comments above for posterity.
---

I see, makes sense.

> which would live in xarray

Oh, I was just thinking that if we're building a new class that is tightly coupled to awkward, then awkward might be its natural home. I don't think it's within scope of xarray to offer a numpy-like array class in our main library - we don't do this for any other case! However we could definitely have a separate package for it, along the lines of awkward-pandas.

> it would assert more regularity than the array really has

Yeah, that wouldn't be ideal. (Digression: From my perspective, part of the problem is that merely generalising numpy arrays to be ragged would have been useful for lots of people, but awkward generalises much further than that - to records, missing data, and unions - which most of those people don't need.)

> Or it could be a third package

That's very interesting. I'm not immediately sure which of those would be best for xarray wrapping - I think it's plausible that we could eventually support any of those options... ((3) through the issues Deepak linked to (#5168, #2801).)

Thanks for fixing that, and for all the explanations!
---

I am tempted to suggest that the right way to handle Awkward array is to treat "var" dimensions similarly to NumPy's structured dtypes, with the variable-length part absorbed into the dtype rather than into the array's shape. Either way, I would definitely encourage figuring out some actual use-cases before building this out :)
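As a point of reference (example mine, not from the comment above), numpy structured dtypes already move per-element structure out of `shape` and into `dtype`, which is the shape-preserving trick being suggested:

```python
import numpy as np

# Fields live in the dtype, not in the shape:
a = np.zeros(3, dtype=[("x", "f8"), ("n", "i4")])
print(a.shape)  # (3,)
print(a.dtype)  # [('x', '<f8'), ('n', '<i4')]

# The analogy: a "var" dimension could likewise live inside the dtype,
# leaving the dimensions xarray sees fully regular.
```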
---

Getting non-ints out of `shape` wouldn't even be unprecedented: dask, which is one of our favorite duck arrays, uses `nan` for chunk sizes that aren't known until compute time.
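For instance (standard dask behavior; example mine):

```python
import dask.array as da
import numpy as np

x = da.from_array(np.arange(10), chunks=5)
y = x[x > 3]              # data-dependent selection: chunk sizes unknown
print(y.shape)            # (nan,)
print(y.compute().shape)  # (6,) once the sizes are actually known
```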
---

Also on the digression, I just want to clarify where we're coming from, why we did the things we did.

I can see how minimal extensions of the NumPy array model to include ragged arrays represent the majority of use-cases, though it wouldn't have been enough for our first use-case in particle physics, which looks roughly like this (with made-up numbers):

```python
collision_events = ak.Array([
    {
        "event": 0,
        "primary_vertex": {"x": 0.001, "y": -0.002, "z": 0.3},
        "electrons": [
            {"px": 1.1, "py": 2.2, "pz": 3.3, "E": 4.4, "EoverP": 1.07},
            {"px": 1.0, "py": 3.2, "pz": 3.4, "E": 4.5, "EoverP": 0.93},
        ],
        "muons": [
            {"px": 0.1, "py": 2.3, "pz": 4.3, "E": 5.4, "isolation": 0},
            {"px": 1.1, "py": 2.2, "pz": 3.3, "E": 4.4, "isolation": 0.9},
            {"px": 1.1, "py": 2.2, "pz": 3.3, "E": 4.4, "isolation": 0.8},
        ],
    },
    {
        "event": 1,
        "primary_vertex": {"x": -0.001, "y": 0.002, "z": 0.4},
        "electrons": [
            {"px": 1.0, "py": 3.2, "pz": 3.4, "E": 4.5, "EoverP": 0.93},
        ],
        "muons": [],
    },
    ...,
])
```

We needed "records with differently typed fields" and "variable-length lists" to be nestable within each other. It's even sometimes the case that one of the inner records representing a particle has another variable-length list within it, identifying the indexes of particles in the collision event that it's close to. We deliberated on whether those cross-links should allow the structure to be non-tree-like, either a DAG or to actually have cycles (scikit-hep/awkward#178). The prior art is a C++ infrastructure that does have a full graph model: collision events represented as arbitrary C++ class instances, and those arbitrary C++ data are serialized to disk in exabytes of ROOT files. Our first problem was to get a high-performance representation of these data in Python. For that, we didn't need missing data or heterogeneous unions (those came later).

Another consideration is that this scope exactly matches Apache Arrow (including the lack of cross-references). As such, we can use Arrow as an interchange format and Parquet as a disk format without having to exclude a subspace of types in either direction. We don't use Arrow as an internal format for performance reasons—we have node types that are lazier than Arrow's, so they're better as intermediate arrays in a multi-step calculation—but it's important to have one-to-one, minimal-computation (sometimes zero-copy) transformations to and from Arrow.

That said, as we've been looking for use-cases beyond particle physics, most of them would be handled well by simple ragged arrays. Also, we've found the "just ragged arrays" part of Arrow to be the most developed, or at least the first to be developed, driven by SQL-like applications. Our unit tests in Awkward Array have revealed a lot of unhandled cases in Arrow, particularly in the Parquet serialization, which we report in JIRA (and they quickly get resolved).

Two possible conclusions:

1. Particle physics is an outlier, and simple ragged arrays really are all that most fields need.
2. Other fields would make use of the more complex types, too, but have been shaping their problems to fit rectilinear tools, so the demand is latent.
If it turns out that conclusion (1) is right, or more right than (2), then at least a subset of what we're working on is going to be useful to the wider community. If it's (2), though, then it's a big opportunity.
---

Very interesting @jpivarski - that would make a good blog post / think piece if you ever felt like it.

I'm biased in thinking that (1) is true, but then I'm not a particle physicist - the closest I came was using ROOT in undergrad extremely briefly 😄 .

Now seems like a good time to list some potential use cases for ragged arrays within xarray:

1. **Lagrangian ocean data.** NOAA's Global Drifter Program tracks the movement of floating buoys, each of which takes measurements at specified time intervals as it moves along. As each drifter may take a completely different path across the ocean, the length of their trajectories is variable. @dhruvbalwada pointed me to this notebook, which compares different ways of analyzing drifter data. Reading the notebook, it seems that a new, fourth option - ragged data within xarray - might well be the best of both worlds for this particular use case. @selipot @philippemiron is creating a library for working with exactly this kind of Lagrangian data (the CloudDrift project).

2. **Genetics.** Allele data can have a wide variation in the number of alt alleles (most variants will have one, but a few could have thousands), as mentioned by @tomwhite in https://github.com/pystatgen/sgkit/issues/634. I'm not sure whether the padded representation currently used there is a real pain point, though. I'm also unclear if this would be useful for AnnData scverse/anndata#744 (cc @ivirshup).

3. **Neutron scattering.** Scipp is an xarray-like labelled data structure for neutron scattering experiment data. On their FAQ, under the question "Why is xarray not enough?", one of the things they list is support for event data, a form of ragged data. Would a ragged array wrapper in xarray address that one?

4. **Record-type data.** A "Record" is for when you want to store multiple pieces of information (of possibly different types) about an "event". In xarray's model, the closest analogue is a Dataset with multiple named variables. Whilst I don't think we can store awkward arrays containing Records directly in xarray (though after @shoyer's comment I'm not so sure...), what we could do is have multiple named data variables, each of which contains a ragged array of a single dtype. As an example of a quirky use case for record-like data, a biologist friend recently showed me a dataset of hummingbird feeding patterns. He had strapped RFID tags to hundreds of hummingbirds, then set up feeder stations equipped with radio antennae. When the birds came to feed, an event would be recorded. As the resulting data varied with bird ID, date, and feeder, but each individual bird could visit any particular feeder any number of times on a given day, I thought he could store this data in a ragged array within xarray, with the dimension representing the number of visits having variable length.

There are probably a lot more possible use cases for ragged arrays out there.
---

This is a wonderful list; thank you!

I believe that this use-case benefits from being able to mix regular and ragged dimensions: the data have 3 regular dimensions and 1 ragged dimension, with the ragged one as the innermost. (The RaggedArray described above has this feature.)

I might have been misinterpreting the word "sparse data" in conversations about this. I had thought that "sparse data" is logically rectilinear but represented in memory with the zeros removed, so the internal machinery has to deal with irregular structures, but the outward API it presents is regular (dimensionality is completely described by a `shape` and `dtype`). Data in which the rows genuinely have different lengths, on the other hand, is definitely what we mean by a ragged array (again with the ragged dimension potentially within zero or more regular dimensions).
---

Anecdotal evidence that this is indeed not a good solution: scipp's "ragged data" support was originally implemented with exactly such a variable-length dimension. This led to a whole series of problems, including significantly complicating the rest of the implementation.
---

Partially, but the bigger challenge may be the related algorithms, e.g., for getting data into this layout, and for switching to other ragged layouts.

For context, one of the main reasons for our data layout is the ability to make cuts/slices quickly. We frequently deal with 2-D, 3-D, and 4-D data. For example, a 3-D case may be the momentum transfer (Qx, Qy, Qz). A naive solution could be to simply work with one long table of all recorded events, but then every cut or slice has to scan everything. Scipp's ragged data can be considered a "partial sorting" of that table, used to build a sort of "index".

Based on all this we can then, e.g., quickly compute high-resolution cuts. Say we are in 3-D (Qx, Qy, Qz). We would not have bin sizes that match the final resolution required by the science. Instead we could use 50x50x50 bins. Then we can very quickly produce a high-res 2-D plot (say (1000x1000), Qx, Qz or whatever), since our binned data format reduces the data/memory you have to load and consider by a factor of up to 50 (in this example).
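To make the "partial sorting as an index" idea concrete, here is a rough 1-D numpy sketch (illustration with invented numbers, not scipp's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
qx = rng.uniform(0.0, 1.0, 1_000_000)  # one coordinate per recorded event

# Coarsely bin the events (the 50-bins-per-axis idea, reduced to 1-D):
nbin = 50
edges = np.linspace(0.0, 1.0, nbin + 1)
which = np.clip(np.digitize(qx, edges) - 1, 0, nbin - 1)

order = np.argsort(which, kind="stable")            # the "partial sort"
counts = np.bincount(which, minlength=nbin)
offsets = np.concatenate([[0], np.cumsum(counts)])  # the "index"

# A cut on 0.2 <= qx < 0.4 now touches only the overlapping coarse bins,
# instead of scanning all one million events:
lo, hi = 10, 20  # coarse bins covering [0.2, 0.4)
events_in_cut = order[offsets[lo]:offsets[hi]]
```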
---

@danielballan mentioned that the photon community (synchrotrons / X-ray scattering) is starting to talk more and more about ragged data related to "event mode" data collection as well.
---

Is anyone here going to EuroScipy (two weeks from now) and interested in having a chat/discussion about ragged data?
---

You are right that "sparse" is misleading. Since it is indeed most commonly used for sparse matrix/array representations, we now usually avoid this term (and refer to it as binned data, or ragged data, instead). Obviously our title page needs an update 😬 .

> dimensionality is completely described by a `shape` and `dtype`

This does actually apply to Scipp's binned data: a binned data array has a regular outer shape given by its bin dimensions, while each element holds a variable-length table of events.
---

Similarly to @TomNicholas' oceanography observation data example, I was thinking about trajectories or, more generally, any collection of geospatial "features". An alternative approach to "ragged" arrays (i.e., arrays of lists as described by @SimonHeybrock) would be to have regular arrays with opaque objects as elements. For example, pygeos (now merged in shapely 2.0) provides vectorized operations (numpy ufuncs) for arrays of geometry objects.

This approach is more specific than "ragged" arrays (it heavily relies on the tools provided for dealing with the opaque objects), and in some cases it might not be ideal, e.g., when the data collected for the geospatial features is not "geometry invariant" (like for the trajectories of buoys floating and drifting in the oceans). But perhaps both approaches may co-exist in a single xarray Dataset? (e.g., feature geometry as coordinates of shapely object arrays and feature data as ragged arrays, if necessary).
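A small sketch of that opaque-object approach (example mine, using shapely 2.0's vectorized functions):

```python
import numpy as np
import shapely

# A regular (10,) array whose elements are opaque geometry objects:
points = shapely.points(np.random.rand(10), np.random.rand(10))
print(points.shape)  # (10,)

# Vectorized (ufunc-style) operations over the object array:
dist = shapely.distance(points, shapely.points(0.5, 0.5))
print(dist.round(3))
```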
---

Sounds like a good use case for Xarray explicit indexes to support some of Scipp's features. I can imagine a "ragged" array coordinate coupled with a custom Xarray index to perform data selection (slicing) and alignment along the "regular" dimension(s) of the ragged array.
---

Just adding another use-case for this discussion: Argo float data. These are oceanographic instruments that vertically profile the ocean, and the length of each profile varies.
---

cc @ivirshup @joshmoore (who may be interested in this as well)
---

Definitely interested and interestedly watching 🍿

A question from the perspective of Zarr (v3): does it make sense to think of this potential ragged-array type as something that should eventually have a storage representation, e.g. via a Zarr extension? More generally, ❤️ for all of this, and interested to see how I/we can possibly help.
---

Hi All, thank you for the detailed discussion, and thank you @TomNicholas for pointing it out to me. I read the thread last week and have been digesting it. There are many details that go over my head, and I will keep re-reading them to develop a better understanding of the problem.

Two weeks ago I started working part-time on CloudDrift. This is an NSF EarthCube-funded project led by @selipot. @philippemiron was the lead developer in the first year of the project, and he laid the foundation of the data structure that we need, plus example notebooks. The project's purpose is to make working with Lagrangian data (primarily ocean, but generalizable to other kinds) easier for scientists who consume such data, while also optimizing the storage of such data. This is use case 1 in Tom's list of use cases here.

CloudDrift currently provides an implementation of a ragged-array data structure. Other goals of the project include providing example and tutorial notebooks, writing adapters for canonical ocean Lagrangian datasets, writing methods for oceanographic diagnostics, and more general developer/scientist advocacy kind of work.

I am very much interested in making our ragged-array structure compatible with the effort discussed here, rather than developing it in isolation. What do you think should be the next step? Should we plan a video call to explore options?
---

This sounds good to me! To represent a ragged array, the only layout node types you need are variable-length lists and a numeric type. (Same for regular-length dimensions, if you want to mix regular and ragged dimensions in one array.)

I'm in favor of a video call meeting to discuss this. In general, I'm busiest on U.S. mornings, on Wednesday and Thursday, but perhaps you can send a when2meet or equivalent poll?

One thing that could be discussed in writing (maybe more easily) is what data types you would consider in scope. That is:

1. numeric data (a single dtype),
2. variable-length dimensions,
3. missing numbers,
4. missing lists.

You don't want record-types or union-types, so the only questions are how to implement (2) and whether you want (3) and (4). Including a type, such as missing data, allows for more function return values but obliges you to consider that type for all function arguments. You'll want to choose carefully how you close your system. (Maybe this block of details can be copied to an issue wherever you're doing the development of the ragged-array structure.)
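To make those four ingredients concrete (illustrations mine), here's how each shows up in an awkward type string:

```python
import awkward as ak

print(ak.Array([1.0, 2.0, 3.0]).type)      # 3 * float64             -> (1) numbers
print(ak.Array([[1, 2], [], [3]]).type)    # 3 * var * int64         -> (2) variable-length lists
print(ak.Array([1, None, 3]).type)         # 3 * ?int64              -> (3) missing numbers
print(ak.Array([[1, 2], None, [3]]).type)  # 3 * option[var * int64] -> (4) missing lists
```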
---

A possibly relevant distinction that had not occurred to me previously, prompted by the example from @milancurcic: if I understand this correctly, then this type of data is essentially an array of variable-length time-series (a list of lists, in effect), i.e., there is an order within each inner list. This is conceptually different from the data I am typically dealing with, where each inner list is a list of records without specific ordering.
---

That sounds extremely exciting @milancurcic ! Someone dedicated who wants to make a widely-useful tool is exactly what is needed. I think there are many technical questions (and tbh I didn't really follow a lot of the details of your last comment @jpivarski), but the answers to those will likely depend on intended use cases.

I'm happy to attend a video call to discuss this, and I think that organising one with people interested in ragged arrays and xarray across disciplines would be a sensible next step. (You should also advertise such a meeting on the pangeo discourse - we could start a new pangeo working group like this if it goes well.)
---

Also note the Ragged Array Summit on Scientific Python.
---

Everyone who is interested in this, but particularly @milancurcic, please fill out this poll: https://www.when2meet.com/?17481732-uGwNn and we'll meet by Zoom (URL to be distributed later) to talk about RaggedArray. I'll pick the best time from these responses on Monday. Thanks!
---

@milancurcic, @joshmoore, and I are all available on Thursday, November 3 at 11am U.S. Central (12pm U.S. Eastern/Florida, 5pm Central European/Germany; note the unusual U.S.-Europe difference this week: 16:00 UTC). Let's meet then! I sent a Google calendar invitation to both of you at that time, which contains a Zoom URL. If anyone else is interested, let me know and I'll send you the Zoom URL as well (just not in a public GitHub comment).
---

I should be able to join today as well @jpivarski ! I will need the Zoom address.
---

Send me an email address, and I'll send you the Zoom URL. The email that you have listed here: http://tom-nicholas.com/contact/ doesn't work (bounced back).
---

Oops - use thomas dot nicholas at columbia dot edu please!
---

Follow-up on this topic at scikit-hep/ragged#6