ENH: allow using open_arrow with PyCapsule protocol (without pyarrow dependency) #349
Conversation
pyogrio/_io.pyx (outdated)
```
import pyarrow as pa
reader = pa.RecordBatchStreamReader._import_from_c(stream_ptr)
if return_capsule:
    reader = capsule
```
To simplify the return objects, can we always return the bare capsule from here? Then in the Python code, if the user wants a pyarrow object, you could call `pa.RecordBatchStreamReader._import_from_c(stream_ptr)`? Or, with pyarrow 15, pass it to your new `from_stream` constructor?
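For concreteness, a hedged sketch of the two pyarrow-side import paths mentioned here (`stream_ptr` and `stream_like` are hypothetical placeholders for a raw ArrowArrayStream pointer and an object implementing the stream protocol):

```python
import pyarrow as pa

# Older private API: import from a raw ArrowArrayStream pointer (an integer).
reader = pa.RecordBatchStreamReader._import_from_c(stream_ptr)

# pyarrow >= 15: construct from any object exposing __arrow_c_stream__.
reader = pa.RecordBatchReader.from_stream(stream_like)
```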
---
Yes, that would be nicer, but that requires a minimum of pyarrow 14.0 to be able to use `RecordBatchReader._import_from_c_capsule`, because in Python space we can't easily get the pointer out of the capsule. I therefore left that code path as-is for now.
---
Although for testing purposes we have been using the `ctypes.pythonapi.PyCapsule_..` methods, so maybe that way we can actually do that in Python. The other problem is that we have to somehow keep the capsule alive (pyarrow doesn't allow attaching it to the RecordBatchReader), and just having it as a variable in `ogr_open_arrow` before the `yield` didn't seem to be sufficient (although for the `ogr_dataset` it is; not sure if Cython handles Python objects differently ...).
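As a minimal sketch of that ctypes approach (the helper name `capsule_to_pointer` is made up here; the Arrow PyCapsule protocol names stream capsules "arrow_array_stream"):

```python
import ctypes

# Declare the signature of the CPython capsule accessor for ctypes.
ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]

def capsule_to_pointer(capsule):
    # Returns the raw ArrowArrayStream* stored in the capsule, as an integer.
    return ctypes.pythonapi.PyCapsule_GetPointer(capsule, b"arrow_array_stream")
```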
In #206 (comment) there was some discussion of whether to return a ...

Overall this is exciting! I'm looking forward to using this from geoarrow-rs without a pyarrow dependency! (I'll make a helper that calls ...)

On the topic of removing dependencies: with arrow, it would be feasible to also make numpy an optional dependency, right? Though I would certainly understand if you don't want the maintenance overhead of that.
I think the main question is whether we use the numpy C API, because in that case it's probably more complicated to make it optional. We do have a ...
I was thinking the opposite ... is it true that importing the C API only requires numpy as a build dependency? Or does that also require it as a runtime dependency? Numpy is only used in the non-arrow readers, right? In theory you could split the non-arrow and arrow-based implementations into two separate files, where only the non-arrow implementation imports numpy?
I would think that if you build with the numpy C API, then the shared library has numpy symbols it will not find when numpy is not available at runtime. I am not sure that in C/C++ you can have "optional" dependencies at runtime rather than at build time (either your library is built with a certain dependency, or it is not?).
My guess had been that those symbols would only be looked up when code that needed them was called, but I really don't know.
pyogrio/raw.py (outdated)

```
@@ -349,10 +359,11 @@ def open_arrow(
    sql_dialect=None,
    return_fids=False,
    batch_size=65_536,
    return_pyarrow=True,
```
Thoughts on `return_pyarrow` vs `return_capsule`? I'm not sure which is clearer.
---
It doesn't return a capsule exactly, but a small wrapper object with the `__arrow_c_stream__` method that will return the capsule.

Side question: would returning the capsule directly be more useful for you?
---
(Although it would be nice to have a keyword name that is "positive" about what you want, instead of indicating that you don't want pyarrow ... I just don't have good ideas for that at the moment.)
---
> (although it would be nice to have a keyword name that is "positive" about what you want, instead of indicating of not wanting pyarrow .. I just don't have good ideas for that at the moment)
I think this is more along the lines of what I was trying to ask. I think having a wrapper object is preferable to raw capsules.
---
We could maybe also consider removing the option entirely and always returning the capsule wrapper object. This API is still quite new and more of an advanced API, so we might be OK with still changing it.
Taking the example from the docstring:

```python
with open_arrow(path) as source:
    meta, reader = source
    for table in reader:
        ...
```
If we would just always return the wrapper object as implemented in the PR right now, that means that people using this right now and expecting a pyarrow RecordBatchReader (or an iterable of pyarrow Tables) would need to change their code to:
```python
with open_arrow(path) as source:
    meta, reader = source
    reader = pa.RecordBatchReader.from_stream(reader)
    for table in reader:
        ...
```
One problem with this, however, is that it requires pyarrow >= 15.0, so that's not ideal.
We could also add the basic methods of a RecordBatchReader to the wrapper object (`__enter__`, `__iter__`, `read_next_batch`, `read_all`, `schema`, etc.), just calling the equivalent method of the underlying `pyarrow.RecordBatchReader`, which we can instantiate lazily only when one of those methods is used, i.e. which still allows you to convert it with `__arrow_c_stream__` without requiring pyarrow.
That keeps compatibility for people using the Python APIs of a RecordBatchReader (like in the code snippet above, to iterate through the stream). However, that unfortunately still doesn't work when passing this reader object to other pyarrow functionality that accepts a reader but passes it through to C++ code (e.g. writing a dataset with a RecordBatchReader as input).
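A rough sketch of what such a lazily-delegating wrapper could look like (hypothetical, not the PR's actual implementation; assumes pyarrow >= 15 for `from_stream` when the pyarrow-backed methods are used):

```python
class _LazyArrowStream:
    """Expose __arrow_c_stream__ without pyarrow; delegate reader methods lazily."""

    def __init__(self, capsule):
        self._capsule = capsule
        self._reader = None

    def __arrow_c_stream__(self, requested_schema=None):
        # No pyarrow needed on this path; hand out the capsule directly.
        # (A real implementation must guard against consuming the stream twice.)
        return self._capsule

    def _pyarrow_reader(self):
        # Only import pyarrow when a reader method is actually used.
        if self._reader is None:
            import pyarrow as pa
            self._reader = pa.RecordBatchReader.from_stream(self)
        return self._reader

    def __iter__(self):
        return iter(self._pyarrow_reader())

    def read_all(self):
        return self._pyarrow_reader().read_all()

    @property
    def schema(self):
        return self._pyarrow_reader().schema
```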
---
We could also just check if pyarrow is available: if so, return a RecordBatchReader (which will also have the `__arrow_c_stream__` method for recent pyarrow), and if not, fall back on returning the wrapper object.

That keeps backwards compatibility perfectly while still allowing use without pyarrow (at the expense of a slightly confusing API / different behaviour depending on whether another package is installed or not).

Although that might also add an extra indirection in case you don't need the pyarrow RecordBatchReader, because what will be returned from `__arrow_c_stream__` on the returned reader will be an ArrowArrayStream backed by Arrow wrapping the GDAL stream.
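A sketch of that availability check, for concreteness only (the thread below argues against doing this; `capsule` stands for the GDAL-produced stream capsule and `_ArrowStream` for the PR's wrapper class):

```python
try:
    import pyarrow as pa
except ImportError:
    pa = None

if pa is not None:
    # pyarrow >= 14 can import the capsule via this (private) API.
    reader = pa.RecordBatchReader._import_from_c_capsule(capsule)
else:
    # Fall back to the small wrapper that only exposes __arrow_c_stream__.
    reader = _ArrowStream(capsule)
```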
---
I don't think the extra indirection would be a problem.

I tend to think that having multiple possible return types is much more confusing than it's worth for most users. I'm not really a fan of having it depend only on whether pyarrow is installed.

If we're going to have one function, I think having `return_pyarrow=True` is the best option. It's probably too much duplication here, but otherwise I would've considered an `open_arrow_pycapsule_interface()` or similar function.
---
> We could also just check if pyarrow is available, if so return a RecordBatchReader (which will also have the `__arrow_c_stream__` method for recent pyarrow), and if not fallback on returning the wrapper object.
Silently returning a different type depending on installed Python packages is generally error-prone for users. I would recommend avoiding this.
---
Good points; agreed that it's better not to do any "smart" return value depending on what is installed.

I updated the PR (and added some docs) for the current state of having a `return_pyarrow` keyword with a default of True.

I am personally also still fine with changing the default to False, so you have to ask explicitly for a pyarrow object (saving you the one line to do it yourself); that also makes the keyword about what you want, not what you want to avoid.
---
The keyword mentioned in the changelog is `use_pyarrow` (which I think I prefer), but the implementation is `return_pyarrow`.
---
Thanks for working on this @jorisvandenbossche!

A few minor comments, and this looks good to go (especially since it might just resolve our periodic segfault, as per #386, where I tested with the capsule changes from here).

It seems difficult to fully prove that the capsule works in the absence of pyarrow (e.g., that the stream can be consumed by a different ArrowStream reader), but the tests here at least prove that it operates without triggering an import error if pyarrow is missing.
pyogrio/_io.pyx (outdated)

```
if return_pyarrow:
    import pyarrow as pa
    stream_ptr = <uintptr_t> stream
    reader = pa.RecordBatchStreamReader._import_from_c(stream_ptr)
```
Maybe simplify a bit and remove line 1439?

```diff
- reader = pa.RecordBatchStreamReader._import_from_c(stream_ptr)
+ reader = pa.RecordBatchStreamReader._import_from_c(<uintptr_t> stream)
```
It looks like you can use nanoarrow's Python bindings for that, and the ...
Indeed; that appears to work, thanks for the tip. I'm assuming we don't yet want to add a CI environment with nanoarrow but without pyarrow to test this, but as we reduce the dependency on pyarrow, that would be something to consider in the future.
For now you can probably just assert that the returned object is not an instance of any known pyarrow class, and assert that it works with ...
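For example, a test along these lines could verify the stream without importing pyarrow (a sketch assuming nanoarrow's `c_array_stream` helper, which accepts any object implementing `__arrow_c_stream__`; `path` is a placeholder):

```python
import nanoarrow as na

from pyogrio.raw import open_arrow

with open_arrow(path, use_pyarrow=False) as source:
    meta, stream = source
    # The returned object should not come from pyarrow...
    assert "pyarrow" not in type(stream).__module__
    # ...but should still be consumable through the PyCapsule protocol.
    array_stream = na.c_array_stream(stream)
    schema = array_stream.get_schema()
```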
Co-authored-by: Brendan Ward <bcward@astutespruce.com>
OK, updated this, and changed `return_pyarrow` to `use_pyarrow`. I think I have a slight preference to actually make `use_pyarrow=False` the default.
How do you see that working out downstream, e.g., in GeoPandas? I'm assuming we'd still need to use ...
Any pyarrow-based downstream library can convert the stream to a pyarrow `RecordBatchReader` by passing it to `pa.RecordBatchReader.from_stream(...)`.
Yes, it's like the docstring example I added:

```python
with open_arrow(path, use_pyarrow=False) as source:
    meta, stream = source
    # this is the extra line you need with use_pyarrow=False
    reader = pa.RecordBatchReader.from_stream(stream)
    for table in reader:
        geometries = shapely.from_wkb(table[meta["geometry_name"] or "wkb_geometry"])
```

And of course, you can still set `use_pyarrow=True`. And indeed, this is only about `open_arrow` ...
Ok - ...
I'm personally OK with this if we document that line of code. Presumably only relatively advanced users will be using `open_arrow` anyway.

No strong preference either way.
Thanks @jorisvandenbossche!
I think in one of the issues/PRs related to adding the Arrow read support we already touched on this topic (about returning a pointer instead of a pyarrow object, so you can also use this without a pyarrow dependency), but I can't directly find it anymore.
But now that Arrow has standardized the Arrow PyCapsule protocol for exposing those pointers in Python with capsules, I think it makes sense to reconsider this (apache/arrow#39195, https://arrow.apache.org/docs/dev/format/CDataInterface/PyCapsuleInterface.html)
cc @kylebarron
Some notes:

- For `open_arrow`, I added a `use_pyarrow` keyword with the current default of True, which you can then set to False to allow using this without pyarrow.
- The returned object is a small wrapper class `_ArrowStream` that just has the protocol's dunder method for now. In theory, we could make this a bit more full-featured class, to make it more similar to pyarrow's RecordBatchReader (e.g. making it iterable), but I'm not sure who would actually use that.
- `open_arrow` still returns a `(meta, stream)` tuple, although in this case it would be sufficient to return a single class (as we control the class and could attach the meta to the class). But maybe it's better to keep the return value consistent for now?
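Putting those notes together, usage would look roughly like this (a hedged sketch; "example.gpkg" is a made-up path, and the default of `use_pyarrow` is still under discussion above):

```python
import pyarrow as pa

from pyogrio.raw import open_arrow

# use_pyarrow=False: get the protocol wrapper; any Arrow consumer can ingest it
# via __arrow_c_stream__ (pyarrow shown here as just one example consumer).
with open_arrow("example.gpkg", use_pyarrow=False) as source:
    meta, stream = source
    reader = pa.RecordBatchReader.from_stream(stream)  # pyarrow >= 15

# use_pyarrow=True: get a pyarrow.RecordBatchReader directly.
with open_arrow("example.gpkg", use_pyarrow=True) as source:
    meta, reader = source
    table = reader.read_all()
```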