[Python] Promote usage of the Arrow PyCapsule Protocol (for the C Data Interface) #39195
Description
#37797 added a Python layer for working with the C Data Interface through capsules and defined dunder methods, described at https://arrow.apache.org/docs/dev/format/CDataInterface/PyCapsuleInterface.html.
Currently, libraries are using pyarrow's semi-private `_export_to_c`/`_import_from_c` methods to get the C Data Interface structs or to construct pyarrow objects from the structs.
With the formalized PyCapsule protocol, the idea is that this communication of the structs now happens through defined dunder methods `__arrow_c_schema__`/`__arrow_c_array__`/`__arrow_c_stream__`, which return the data as C Data Interface structs wrapped in capsules instead of as raw integer pointers (export). In addition, the pyarrow constructors (`pa.array(..)`, `pa.table(..)`, ..) will now also accept any object implementing the relevant dunder method (import).
This new Python-level interface has two benefits: 1) it is more robust (pointers are no longer passed around as integers, and the capsule ensures the data is released if an error happens midway), and 2) it is not tied to pyarrow the library. This means that other libraries can also start accepting any Arrow-compatible data structure, instead of hardcoding support for pyarrow and using pyarrow's `_export_to_c`.
We want to encourage other libraries to support this protocol as well, which can mean:
- Adding the dunders on their own objects where appropriate (so that other libraries will recognize your data structures as holding Arrow-compatible data)
- When ingesting data (e.g. where there might be pyarrow-specific support right now), recognizing objects that implement the dunder methods
Linking some issues here: pola-rs/polars#12530, data-apis/dataframe-api#279, pandas-dev/pandas#56587, OSGeo/gdal#9043 and OSGeo/gdal#9132
In addition, there are also some steps we can take within the Arrow project itself to further promote this:
- ADBC support: feat(python/adbc_driver_manager): export handles and ingest data through python Arrow PyCapsule interface arrow-adbc#1346
- nanoarrow support
- [Python][Docs] Document the PyCapsule protocol in the "extending pyarrow" section of the Python docs #39196
- [Python][Java][Docs] Update the "Integrating PyArrow with Java" documentation section to not use `_import_from_c`/`_export_to_c` #44872
- [Python][R][Docs] Update the "Integrating PyArrow with R" documentation section to not use `_import_from_c`/`_export_to_c` #39198
There are also some other follow-up discussions about certain aspects of the protocol or APIs we provide in pyarrow:
- [Python] RecordBatchReader constructor from stream object implementing the PyCapsule Protocol #39217
- [Python] Public API to consume objects supporting the PyCapsule Arrow C Data Interface #38010
- [Python] Arrow PyCapsule Protocol: standard way to get the schema of a "data" (array or stream) object? #39689