ARROW-88: [C++] Refactor usages of parquet_cpp namespace #49
Closed
Conversation
+1, LGTM
Thanks -- merging so you can rebase
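For context, this pull request reworks how the parquet_cpp namespace is referenced in the project's C++ code. The sketch below is illustrative only, not the actual ARROW-88 patch: it shows one common way to consolidate scattered namespace qualifications behind an alias and a few using-declarations, with hypothetical type and function names.

```cpp
// Hypothetical sketch of consolidating scattered parquet_cpp:: qualifications
// behind a namespace alias; names are invented for illustration.
#include <cstdint>
#include <iostream>
#include <string>

namespace parquet_cpp {  // stand-in for the real library namespace
struct ColumnReader {
  int64_t rows_read = 0;
};
inline void DumpSchema(const std::string& schema) {
  std::cout << "schema: " << schema << "\n";
}
}  // namespace parquet_cpp

namespace arrow_adapter {  // hypothetical consumer code

// One alias/using block replaces repeated parquet_cpp:: qualifications below.
namespace pq = parquet_cpp;
using pq::ColumnReader;

inline int64_t ReadAll(ColumnReader& reader) {
  pq::DumpSchema("message example {}");
  return reader.rows_read;
}

}  // namespace arrow_adapter

int main() {
  parquet_cpp::ColumnReader reader;
  return static_cast<int>(arrow_adapter::ReadAll(reader));
}
```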
praveenbingo added a commit to praveenbingo/arrow that referenced this pull request on Aug 30, 2018
Loading Gandiva dynamically in java bindings. Packaging the dynamic library and byte code files in Gandiva JAR. Introduced configuration object to customize Gandiva at runtime.
praveenbingo added a commit to praveenbingo/arrow that referenced this pull request on Sep 4, 2018
Loading Gandiva dynamically in java bindings. Packaging the dynamic library and byte code files in Gandiva JAR. Introduced configuration object to customize Gandiva at runtime.
wesm added a commit to wesm/arrow that referenced this pull request on Sep 8, 2018
…#include scrubbing

This is the completion of work I started in PARQUET-442. This also resolves PARQUET-277 as no boost headers are included in the public API anymore. I've done some scrubbing of #includes using Google's Clang-based include-what-you-use tool. PARQUET-522 can also be resolved when this is merged.

Author: Wes McKinney <wes@cloudera.com>

Closes apache#49 from wesm/PARQUET-446 and squashes the following commits:

e805a0c [Wes McKinney] Use int64_t for scanner batch sizes
503b1c1 [Wes McKinney] Fix mixed-up include guard names
4c02d2b [Wes McKinney] Refactor monolithic encodings/encodings.h
9e28fc3 [Wes McKinney] Finished IWYU path. Some imported impala code left unchanged for now
6d4af8e [Wes McKinney] Some initial IWYU
5be40d6 [Wes McKinney] Remove outdated TODO
2e39062 [Wes McKinney] Remove any boost #include dependencies
07059ca [Wes McKinney] Remove serialized-page.* files, move serialized-page-test to parquet/file
9458b36 [Wes McKinney] Add more headers to parquet.h public API
b4b0412 [Wes McKinney] Remove Thrift compiled headers from public API and general use outside of deserialized-related internal headers and code paths. Add unit test to enforce this

Change-Id: Ib12b600cf6318e9da514b39fd8db2225bd6c9e09
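The b4b0412 item above describes keeping Thrift (and boost) headers out of the public API and confining them to internal code paths. A common way to achieve that is forward declarations plus a pimpl indirection. The sketch below is illustrative only, with hypothetical class and file names, not the actual parquet-cpp code:

```cpp
// Sketch of the pimpl/forward-declaration pattern used to keep third-party
// headers (Thrift, boost) out of a public API. Names are hypothetical.
#include <cstdint>
#include <memory>
#include <string>
#include <utility>

// ---- what would live in the public header: no Thrift/boost includes ----
namespace parquet {

class PageReaderImpl;  // only forward-declared in the public header

class PageReader {
 public:
  explicit PageReader(std::string path);
  ~PageReader();  // out-of-line, so unique_ptr sees the complete Impl type
  int64_t num_pages() const;

 private:
  std::unique_ptr<PageReaderImpl> impl_;  // keeps private deps out of the API
};

}  // namespace parquet

// ---- what would live in the .cc: free to include generated Thrift headers ----
// #include "parquet/thrift/parquet_types.h"   // stays internal (hypothetical)

namespace parquet {

class PageReaderImpl {
 public:
  explicit PageReaderImpl(std::string path) : path_(std::move(path)) {}
  int64_t num_pages() const { return 0; }  // placeholder
 private:
  std::string path_;
};

PageReader::PageReader(std::string path)
    : impl_(new PageReaderImpl(std::move(path))) {}
PageReader::~PageReader() = default;
int64_t PageReader::num_pages() const { return impl_->num_pages(); }

}  // namespace parquet

int main() {
  parquet::PageReader reader("example.parquet");
  return static_cast<int>(reader.num_pages());
}
```

The implementation file is then free to include the generated Thrift headers, and a separate unit test could verify that installed public headers pull in no Thrift or boost includes, in the spirit of the "Add unit test to enforce this" note above.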
praveenbingo added a commit to praveenbingo/arrow that referenced this pull request on Sep 10, 2018
Loading Gandiva dynamically in java bindings. Packaging the dynamic library and byte code files in Gandiva JAR. Introduced configuration object to customize Gandiva at runtime.
kou pushed a commit that referenced this pull request on May 10, 2020
This PR enables tests for `ARROW_COMPUTE`, `ARROW_DATASET`, `ARROW_FILESYSTEM`, `ARROW_HDFS`, `ARROW_ORC`, and `ARROW_IPC` (default on). #7131 enabled a minimal set of tests as a starting point. I confirmed that these tests pass locally with the current master. In the current TravisCI environment, we cannot see this result due to a lot of error messages in `arrow-utility-test`.

```
$ git log | head -1
commit ed5f534
% ctest
...
 1/51 Test #1:  arrow-array-test ..................... Passed  4.62 sec
 2/51 Test #2:  arrow-buffer-test .................... Passed  0.14 sec
 3/51 Test #3:  arrow-extension-type-test ............ Passed  0.12 sec
 4/51 Test #4:  arrow-misc-test ...................... Passed  0.14 sec
 5/51 Test #5:  arrow-public-api-test ................ Passed  0.12 sec
 6/51 Test #6:  arrow-scalar-test .................... Passed  0.13 sec
 7/51 Test #7:  arrow-type-test ...................... Passed  0.14 sec
 8/51 Test #8:  arrow-table-test ..................... Passed  0.13 sec
 9/51 Test #9:  arrow-tensor-test .................... Passed  0.13 sec
10/51 Test #10: arrow-sparse-tensor-test ............. Passed  0.16 sec
11/51 Test #11: arrow-stl-test ....................... Passed  0.12 sec
12/51 Test #12: arrow-concatenate-test ............... Passed  0.53 sec
13/51 Test #13: arrow-diff-test ...................... Passed  1.45 sec
14/51 Test #14: arrow-c-bridge-test .................. Passed  0.18 sec
15/51 Test #15: arrow-io-buffered-test ............... Passed  0.20 sec
16/51 Test #16: arrow-io-compressed-test ............. Passed  3.48 sec
17/51 Test #17: arrow-io-file-test ................... Passed  0.74 sec
18/51 Test #18: arrow-io-hdfs-test ................... Passed  0.12 sec
19/51 Test #19: arrow-io-memory-test ................. Passed  2.77 sec
20/51 Test #20: arrow-utility-test ...................***Failed  5.65 sec
21/51 Test #21: arrow-threading-utility-test ......... Passed  1.34 sec
22/51 Test #22: arrow-compute-compute-test ........... Passed  0.13 sec
23/51 Test #23: arrow-compute-boolean-test ........... Passed  0.15 sec
24/51 Test #24: arrow-compute-cast-test .............. Passed  0.22 sec
25/51 Test #25: arrow-compute-hash-test .............. Passed  2.61 sec
26/51 Test #26: arrow-compute-isin-test .............. Passed  0.81 sec
27/51 Test #27: arrow-compute-match-test ............. Passed  0.40 sec
28/51 Test #28: arrow-compute-sort-to-indices-test ... Passed  3.33 sec
29/51 Test #29: arrow-compute-nth-to-indices-test .... Passed  1.51 sec
30/51 Test #30: arrow-compute-util-internal-test ..... Passed  0.13 sec
31/51 Test #31: arrow-compute-add-test ............... Passed  0.12 sec
32/51 Test #32: arrow-compute-aggregate-test ......... Passed 14.70 sec
33/51 Test #33: arrow-compute-compare-test ........... Passed  7.96 sec
34/51 Test #34: arrow-compute-take-test .............. Passed  4.80 sec
35/51 Test #35: arrow-compute-filter-test ............ Passed  8.23 sec
36/51 Test #36: arrow-dataset-dataset-test ........... Passed  0.25 sec
37/51 Test #37: arrow-dataset-discovery-test ......... Passed  0.13 sec
38/51 Test #38: arrow-dataset-file-ipc-test .......... Passed  0.21 sec
39/51 Test #39: arrow-dataset-file-test .............. Passed  0.12 sec
40/51 Test #40: arrow-dataset-filter-test ............ Passed  0.16 sec
41/51 Test #41: arrow-dataset-partition-test ......... Passed  0.13 sec
42/51 Test #42: arrow-dataset-scanner-test ........... Passed  0.20 sec
43/51 Test #43: arrow-filesystem-test ................ Passed  1.62 sec
44/51 Test #44: arrow-hdfs-test ...................... Passed  0.13 sec
45/51 Test #45: arrow-feather-test ................... Passed  0.91 sec
46/51 Test #46: arrow-ipc-read-write-test ............ Passed  5.77 sec
47/51 Test #47: arrow-ipc-json-simple-test ........... Passed  0.16 sec
48/51 Test #48: arrow-ipc-json-test .................. Passed  0.27 sec
49/51 Test #49: arrow-json-integration-test .......... Passed  0.13 sec
50/51 Test #50: arrow-json-test ...................... Passed  0.26 sec
51/51 Test #51: arrow-orc-adapter-test ............... Passed  1.92 sec

98% tests passed, 1 tests failed out of 51

Label Time Summary:
arrow-tests   = 27.38 sec (27 tests)
arrow_compute = 45.11 sec (14 tests)
arrow_dataset =  1.21 sec (7 tests)
arrow_ipc     =  6.20 sec (3 tests)
unittest      = 79.91 sec (51 tests)

Total Test time (real) = 79.99 sec

The following tests FAILED:
20 - arrow-utility-test (Failed)
Errors while running CTest
```

Closes #7142 from kiszk/ARROW-8754

Authored-by: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
Signed-off-by: Sutou Kouhei <kou@clear-code.com>
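Each entry in the ctest summary above is a separate test executable (googletest-based) that the build registers with CTest when the corresponding component is enabled. Purely as an illustration of what such a binary contains, here is a minimal googletest case with a hypothetical name, not taken from arrow-utility-test:

```cpp
// Minimal googletest case of the kind that makes up the arrow-*-test
// executables listed above; the test name and assertions are hypothetical.
#include <gtest/gtest.h>

#include <string>

TEST(UtilityExample, StringRoundTrip) {
  std::string value = "arrow";
  EXPECT_EQ(value.size(), 5u);
  EXPECT_EQ(value, std::string("arrow"));
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```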
jikunshang pushed a commit to jikunshang/arrow that referenced this pull request on May 29, 2020
Modify VectorLoader to support native compression
zhouyuan added a commit to zhouyuan/arrow that referenced this pull request on Nov 29, 2021
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
zhztheplayer pushed a commit to zhztheplayer/arrow-1 that referenced this pull request on Feb 8, 2022
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
zhztheplayer pushed a commit to zhztheplayer/arrow-1 that referenced this pull request on Mar 3, 2022
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
rui-mo pushed a commit to rui-mo/arrow-1 that referenced this pull request on Mar 23, 2022
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
GerHobbelt pushed a commit to GerHobbelt/arrow that referenced this pull request on Oct 18, 2024
GerHobbelt pushed a commit to GerHobbelt/arrow that referenced this pull request on Dec 14, 2024
paddyroddy pushed a commit to rok/arrow that referenced this pull request on Jul 19, 2025
rok added a commit to rok/arrow that referenced this pull request on Jul 24, 2025
* Initial commit * init project * complete most of the annotations * fix FixedSizeBufferWriter init annotation * bump 10.0.1.2 * complete parquet core annotations * bump 10.0.1.3 * re-export modules * fix: add return type for foreign_buffer * fix output_stream and read_message annotations * ci: add release job * pre-commit specify flake8 version to 5.0.4 * flake8 ignore F821 for private files * optimize annotations * bump 10.0.1.4 * if param supports IOBase, it should also support NativeFile * bump 10.0.1.5 * pre-commit adds mypy lint * bump 10.0.1.6 * fix ci name * Remove version restrictions for Python. * release 10.0.1.7 * update poetry ci * Fix stubs for Table factory methods The main problem was that these were annotated as instance methods rather than static/class methods, but I've added some detail, too. * update pre-commit * update * fix: make fs.FileSystem.from_uri and hdfs.HadoopFileSystem.from_uri as classmethod * fix: fix read_metadata and read_schema wrong annotations (#11) * fix: typo S3FileSystem schema -> scheme (#12) * bump version 10.0.1.8 (#13) * . (#16) * make DataType hashable (#22) * pa.table support recordbatch (#20) * RecordBatchStreamReader supports next (#18) * add RecordBatch.to_pylist (#23) * precise return types for to_pandas (#25) * bump version 10.0.1.9 (#26) * [pre-commit.ci] pre-commit autoupdate (#27) * [pre-commit.ci] pre-commit autoupdate (#28) * Fix types in FlightDescriptor class (#29) * Fix types in FlightDescriptor class * Add argument types * chore: update pre-commit config (#30) * build: use `pixi` to manage project (#31) * chore: add taplo config (#32) * chore: update LICENSE date (#33) * doc: add CODE_OF_CONDUCT.md (#34) * [pre-commit.ci] pre-commit autoupdate (#38) * [pre-commit.ci] pre-commit autoupdate (#39) updates: - [github.com/astral-sh/ruff-pre-commit: v0.5.7 → v0.6.1](astral-sh/ruff-pre-commit@v0.5.7...v0.6.1) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * [pre-commit.ci] pre-commit autoupdate (apache#48) updates: - [github.com/astral-sh/ruff-pre-commit: v0.6.1 → v0.6.2](astral-sh/ruff-pre-commit@v0.6.1...v0.6.2) - [github.com/pre-commit/mirrors-mypy: v1.11.1 → v1.11.2](pre-commit/mirrors-mypy@v1.11.1...v1.11.2) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * refactor: rewrite type annotations by hand. 
(#35) * chore: restart * update ruff config * build: add extra dependencies * update mypy config * feat: add util.pyi * feat: add types.pyi * feat: impl lib.pyi * update * feat: add acero.pyi * feat: add compute.pyi * add benchmark.pyi * add cffi * feat: add csv.pyi * disable isort single line * reformat * update compute.pyi * add _auzurefs.pyi * add _cuda.pyi * add _dataset.pyi * rename _stub_typing.pyi -> _stubs_typing.pyi * add _dataset_orc.pyi * add pyarrow-stubs/_dataset_parquet_encryption.pyi * add _dataset_parquet.pyi * add _feather.pyi * feat: add _flight.pyi * add _fs.pyi * add _gcsfs.pyi * add _hdfs.pyi * add _json.pyi * add _orc.pyi * add _parquet_encryption.pyi * add _parquet.pyi * update * add _parquet.pyi * add _s3fs.pyi * add _substrait.pyi * update * update * add parquet/core.pyi * add parquet/encryption.pyi * add BufferProtocol * impl _filesystemdataset_write * add dataset.pyi * add feather.pyi * add flight.pyi * add fs.pyi * add gandiva.pyi * add json.pyi * add orc.pyi * add pandas_compat.pyi * add substrait.pyi * update util.pyi * add interchange * add __lib_pxi * update __lib_pxi * update * update * add types.pyi * feat: add scalar.pyi * update types.pyi * update types.pyi * update scalar.pyi * update * update * update * update * update * update * feat: impl array * feat: add builder.pyi * add scipy * add tensor.pyi * feat: impl NativeFile * update io.pyi * complete io.pyi * add ipc.pyi * mv benchmark.pyi into __lib_pxi * add table.pyi * do re-export in lib.pyi * fix io.pyi * update * optimize scalar.pyi * optimize indices * complete ipc.pyi * update * fix NullableIterable * fix string array * ignore overload-overlap error * fix _Tabular.__getitem__ * remove additional_dependencies * remove check-mypy.sh (apache#49) * release 20240828 (apache#50) * fix release tag (apache#51) * ci: install hatch by pip (apache#52) * ci: fix hatch keyring (apache#53) * ci: use Release environment (apache#54) * remove Scalar generic type var _IsValid (apache#56) * remove Scalar generic type var _IsValid * make Array, Scalar, Types generic type var as covariant type (apache#57) * remove Field generic type var _Nullable (apache#58) * remove Field generic type var _Nullable * fix: pa.dictionary and pa.schema annotation (apache#59) * fix pa.dictionary annotation * fix: schema annotation * release new version (apache#60) * [pre-commit.ci] pre-commit autoupdate (apache#62) * release: 2024.9.3 (apache#63) use new date release format %Y.%m.%d * support pyarrow compute funcs (apache#61) * update compute.pyi * impl Aggregation funcs * impl arithmetic * imit bit-wise functions * imit rounding functions * optimize annotation * impl logarithmic functions * update * impl comparisons funcs * impl logical funcs * impl string predicates and transforms * impl string padding * impl string trimming * impl string splitting and component extraction * impl string joining and slicing * impl Containment tests * impl Categorizations * impl Structural transforms * impl Conversions * impl Temporal component extraction * impl random, Timezone handling * impl Array-wise functions * fix timestamp scalar * support build array with list of scalar (apache#64) * release 2024.9.4 (apache#65) * Version follows the version of pyarrow (apache#66) * import parquet.core into parquet __init__.py (apache#67) Update __init__.pyi * release 17.1 (apache#69) * fix: add missing submodule benchmark, csv and cuda (apache#71) * release 17.2 (apache#72) * fix: from_pylist covariance (apache#73) * [pre-commit.ci] pre-commit autoupdate 
(apache#74) * Fix return type for middleware factory's start_call (apache#75) It can return None if middleware is not needed for a given call. * release 17.3 (apache#76) * fix: add missing return type in FlightDescriptor static methods (apache#80) * Support Tabular filter with Expression (apache#81) support Tabular filter with Expression * Support compute functions to accept Expression as parameter (apache#82) * fix: Fix the return value of Expression comparison (apache#83) * release 17.4 (apache#84) * fix: fix the array return type (apache#89) * a few type improvements, mostly flight related (apache#90) * FlightError.extra_info -> bytes * annotate FlightStreamReader.cancel return * BasicAuth serialize/deserialize * RecordBatchFileReader.schema * actually str | bytes * add_type_to_Field (apache#87) * add_type_to_Field * Field.type should return the covariant DataType --------- Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * Support fsspec.AbstractFileSystem (apache#88) * supported_filesystem * fixes * remove unused import --------- Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * release 17.5 (apache#91) * [pre-commit.ci] pre-commit autoupdate (apache#95) * fix: parquet not accepting NativeFile (apache#98) * feat: support pa.Buffer buffer protocol (apache#99) * feat: Support `compute` functions to accept ChunkedArray. (apache#100) * release 17.6 (apache#101) * [pre-commit.ci] pre-commit autoupdate (apache#102) * working towards making return signatures only have one type (mean and exp) (apache#105) * group_by_returns_TableGroupBy * return_single_type_for_mean_exp * revert table.pyi * compute.mean does not support BinaryScalar or BinaryArray --------- Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * a table group_by was returing Self but should return TableGroupBy (apache#104) group_by_returns_TableGroupBy * [pre-commit.ci] pre-commit autoupdate (apache#106) updates: - [github.com/pre-commit/pre-commit-hooks: v4.6.0 → v5.0.0](pre-commit/pre-commit-hooks@v4.6.0...v5.0.0) - [github.com/astral-sh/ruff-pre-commit: v0.6.7 → v0.6.9](astral-sh/ruff-pre-commit@v0.6.7...v0.6.9) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * fix: RecordBatch missing `from_arrays` and `from_pandas` (apache#108) * release 17.7 (apache#109) * fix_combine_chunks (apache#110) * make Self backward compatible (apache#115) * fix: update ConvertOptions (apache#114) * add type property to Array (apache#112) * add type property to Array * Array.type should return covariant --------- Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * release 17.8 (apache#117) * Add include_columns parameter in ConvertOptions (apache#118) * add list[str] overload to rename_columns (apache#119) * release 17.9 (apache#120) * [pre-commit.ci] pre-commit autoupdate (apache#124) updates: - [github.com/astral-sh/ruff-pre-commit: v0.6.9 → v0.7.0](astral-sh/ruff-pre-commit@v0.6.9...v0.7.0) - [github.com/pre-commit/mirrors-mypy: v1.11.2 → v1.12.1](pre-commit/mirrors-mypy@v1.11.2...v1.12.1) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * improve type annotations for parquet writer (apache#125) Add support for per-field compression specification Add missing none compression value. 
* Add missing return type for Schema.serialize (apache#123) * Add `Schema.field(int)` (apache#122) * Change various io related functions to support `StrPath` as a path input (apache#121) * Change various io related functions to support StrPath as a path input * fmt * Added StrPath | IO for feather types * fix type hint for sort_by (apache#130) sort_by takes str or list[tuple(name, order)] as its argument where str is a field name not a sort order * metadata on a schema can be passed as str (apache#128) For details see https://github.com/apache/arrow/blob/apache-arrow-17.0.0/python/pyarrow/types.pxi\#L2053-L2056 * Correct typevars for DictionaryType, MapType, RunEncodedType (apache#126) Correct type hints for Dictionary, RunEndEncoded and Map Signed-off-by: Jonas Dedden <university@jonas-dedden.de> Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * Add some more StrPath io parts that were overlooked. (apache#131) * Add some more StrPath io parts that were overlooked. Additionally, add the utility typealias `SingleOrList` that can be used in places where we want a concise type declaration but the there is a large union of types. * write_dataset(base_dir = ) can also take Path * Support ChunkedArray in add/append methods in Table (apache#129) * Add missing partitioning typing case (apache#132) This should now support the examples in the docstring for partitioning. * fix: typo 'permissive' instead of 'premissive' (apache#133) * release 17.10 (apache#134) * fix incorrect type hints for compute.sort_indices (apache#135) * disallow passing `names` as an argument to table when using dictionaries (apache#137) * [pre-commit.ci] pre-commit autoupdate (apache#138) updates: - [github.com/astral-sh/ruff-pre-commit: v0.7.0 → v0.7.1](astral-sh/ruff-pre-commit@v0.7.0...v0.7.1) - [github.com/pre-commit/mirrors-mypy: v1.12.1 → v1.13.0](pre-commit/mirrors-mypy@v1.12.1...v1.13.0) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * Add missing type for FlightEndpoint (apache#136) * release 17.11 (apache#139) * [pre-commit.ci] pre-commit autoupdate (apache#140) updates: - [github.com/astral-sh/ruff-pre-commit: v0.7.1 → v0.7.2](astral-sh/ruff-pre-commit@v0.7.1...v0.7.2) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * [pre-commit.ci] pre-commit autoupdate (apache#142) updates: - [github.com/astral-sh/ruff-pre-commit: v0.7.2 → v0.7.3](astral-sh/ruff-pre-commit@v0.7.2...v0.7.3) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * chore: Create FUNDING.yml (apache#143) Create FUNDING.yml * fix: `read_schema` should return Schema (apache#145) fix: read_schema should return Schema * release 17.12 (apache#146) * [pre-commit.ci] pre-commit autoupdate (apache#147) updates: - [github.com/astral-sh/ruff-pre-commit: v0.7.3 → v0.7.4](astral-sh/ruff-pre-commit@v0.7.3...v0.7.4) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * fix: `to_table` argument `columns` can be a dict of expressions (apache#149) * [pre-commit.ci] pre-commit autoupdate (apache#148) * [pre-commit.ci] pre-commit autoupdate updates: - [github.com/astral-sh/ruff-pre-commit: v0.7.4 → v0.8.1](astral-sh/ruff-pre-commit@v0.7.4...v0.8.1) * ruff: ignore PYI063 --------- Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * release 17.13 (apache#151) * fix: FileSystem metadata value should be str (apache#152) * fix: 
FileSystemHandler metadata value should be str (apache#153) * [pre-commit.ci] pre-commit autoupdate (apache#154) updates: - [github.com/astral-sh/ruff-pre-commit: v0.8.1 → v0.8.2](astral-sh/ruff-pre-commit@v0.8.1...v0.8.2) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * improve coverage for pyarrow.struct typehint (apache#157) * fix: ipc typing (apache#159) * release 17.14 (apache#160) * fix: add missing param 'nbytes' to NativeFile.read (apache#163) * release 17.15 (apache#164) * [pre-commit.ci] pre-commit autoupdate (apache#161) updates: - [github.com/astral-sh/ruff-pre-commit: v0.8.2 → v0.8.3](astral-sh/ruff-pre-commit@v0.8.2...v0.8.3) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * Add 'None' as a valid argument for partitioning to the various parquet reading functions (apache#166) * [pre-commit.ci] pre-commit autoupdate (apache#165) updates: - [github.com/astral-sh/ruff-pre-commit: v0.8.3 → v0.8.6](astral-sh/ruff-pre-commit@v0.8.3...v0.8.6) - [github.com/pre-commit/mirrors-mypy: v1.13.0 → v1.14.1](pre-commit/mirrors-mypy@v1.13.0...v1.14.1) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * fix: should use Collection[Array] instead list[Array] (apache#170) "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance Consider using "Sequence" instead, which is covariant * fix: update type hints for path_or_paths and source parameters in ParquetDataset and read_table (apache#171) * [pre-commit.ci] pre-commit autoupdate (apache#167) updates: - [github.com/astral-sh/ruff-pre-commit: v0.8.6 → v0.9.1](astral-sh/ruff-pre-commit@v0.8.6...v0.9.1) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * release 17.16 (apache#172) * Fixed pa.fixed_shape_tensor (apache#175) * [pre-commit.ci] pre-commit autoupdate (apache#173) updates: - [github.com/astral-sh/ruff-pre-commit: v0.9.1 → v0.9.4](astral-sh/ruff-pre-commit@v0.9.1...v0.9.4) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * fix: Preserve generic in `ChunkedArray.type` (apache#177) * release 17.17 (apache#178) * [pre-commit.ci] pre-commit autoupdate (apache#176) updates: - [github.com/astral-sh/ruff-pre-commit: v0.9.4 → v0.9.6](astral-sh/ruff-pre-commit@v0.9.4...v0.9.6) - [github.com/pre-commit/mirrors-mypy: v1.14.1 → v1.15.0](pre-commit/mirrors-mypy@v1.14.1...v1.15.0) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * fix: support to construct ListArray with primitive type (apache#179) * fix: Avoid `chunked_array` overlapping overloads (apache#183) * fix: Add placeholder annotations to `pc.if_else` (apache#182) * fix: Widen `Array` to `Array | ChunkedArray` (apache#181) * fix: add `pc.fill_null` (apache#185) - https://arrow.apache.org/docs/python/generated/pyarrow.compute.fill_null.html - https://github.com/narwhals-dev/narwhals/blob/05e47b27ebe27b24196cee5956d07748d65a62ee/narwhals/_arrow/series.py#L675 * fix: Allow Table.from_arrays to take a list containing a mix of Array and ChunkedArray (apache#187) Update table.pyi * release 17.18 (apache#188) * [pre-commit.ci] pre-commit autoupdate (apache#180) updates: - [github.com/astral-sh/ruff-pre-commit: v0.9.6 → v0.9.10](astral-sh/ruff-pre-commit@v0.9.6...v0.9.10) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * fix: from_arrays for both Table and RecordBatch (apache#189) * 
fix: resolve some `pa.compute` overlaps (apache#184) * fix: resolve overlapping `compute.(add|divide)` * fix: copy from non-cloned signature * fix: resolve overlapping `compute.exp` * fix: resolve overlapping `compute.power` * fix: resolve overlapping `compute.equal` * fix: resolve overlapping `compute.and_` * fix: Include `Array` in `chunked_array` overload (apache#190) narwhals-dev/narwhals@0237f7a * release 17.19 (apache#191) * Add Scalar, Array and Type classes for Json & Uuid (apache#194) * Add Scalar, Array and Type classes for Json & Uuid * Formatting fixes * [pre-commit.ci] pre-commit autoupdate (apache#192) updates: - [github.com/astral-sh/ruff-pre-commit: v0.9.10 → v0.11.2](astral-sh/ruff-pre-commit@v0.9.10...v0.11.2) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * Revert "Add Scalar, Array and Type classes for Json & Uuid" (apache#195) Revert "Add Scalar, Array and Type classes for Json & Uuid (apache#194)" This reverts commit 8f77909. * fix: Add missing `pc.equal` overload (apache#196) * feat: support pyarrow 19.0 (apache#198) * build: upgrade pyarrow min version to 19.0 * feat: support pyarrow 19.0 * omit mypy bool8 override error * fix: reexport new types (apache#199) * feat: override new patterns for func repeat and nulls (apache#200) * fix: reexport decimal64 array and decimal128 array * feat: override new patterns for func `repeat` and `nulls` * release: 19.1 (apache#201) * fix: Allow `Iterable[Table]` in `concat_tables` (apache#203) https://arrow.apache.org/docs/python/generated/pyarrow.concat_tables.html > tables : iterable of pyarrow.Table objects * fix: Allow `ChunkedArray[BooleanScalar]` in `pc.invert` (apache#204) Fixes https://github.com/narwhals-dev/narwhals/blob/caabc0efdef54f117c83888926860e3972ef69d5/narwhals/_arrow/series.py#L298-L299 * feat: Fully spec `TableGroupBy.aggregate` (apache#197) ## Related - https://arrow.apache.org/docs/python/compute.html#grouped-aggregations - https://arrow.apache.org/docs/python/generated/pyarrow.TableGroupBy.html#pyarrow.TableGroupBy.aggregate - https://github.com/apache/arrow/blob/34a984c842db42b409a1359e6e2cf167a2365a48/python/pyarrow/table.pxi#L6578-L6604 * fix: Add missing return type to `ChunkedArray.filter` (apache#205) * fix: Add relaxed final overload to logical functions (apache#206) Covers all of `pc.(and_ | and_kleene | and_not | and_not_kleene | or_ | or_kleene | xor)` Resolves: - https://github.com/narwhals-dev/narwhals/blob/caabc0efdef54f117c83888926860e3972ef69d5/narwhals/_arrow/series.py#L219-L233 - https://github.com/narwhals-dev/narwhals/blob/caabc0efdef54f117c83888926860e3972ef69d5/narwhals/_arrow/series.py#L662 * fix: Allow `ChunkedArray` in `Table.set_column` (apache#211) Also being more consistent with `ArrayOrChunkedArray[Any]` everywhere Discovered in - https://github.com/vega/vega-datasets/blob/343b7101391a81190ba24e1e8d62a381d2fef3bd/scripts/species.py#L798-L799 * chore: Ignore `fsspec` `[import-untyped]` (apache#210) ```py _fs.pyi:18: error: Skipping analyzing "fsspec": module is installed, but missing library stubs or py.typed marker [import-untyped] _fs.pyi:18: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports Found 1 error in 1 file (checked 64 source files) ``` - fsspec/filesystem_spec#625 - fsspec/filesystem_spec#1676 * feat: Convert `types.is_*` into `TypeIs` guards (apache#215) * chore: Add `types.__all__` * feat: Convert `types._is_*` into `TypeIs` guards I've been using this for a little while, but makes more sense to 
live in the stubs https://github.com/narwhals-dev/narwhals/blob/16427440e6d74939c403083b52ce3fb0af7d63c7/narwhals/_arrow/utils.py#L44-L67 * fix: Resolve `bit_wise_and` overlaps (apache#214) Fixes 3 errors: ```py compute.pyi:608:5 - error: Overload 1 for "bit_wise_and" overlaps overload 4 and returns an incompatible type (reportOverlappingOverload) compute.pyi:608:5 - error: Overload 1 for "bit_wise_and" overlaps overload 5 and returns an incompatible type (reportOverlappingOverload) compute.pyi:620:5 - error: Overload 3 for "bit_wise_and" will never be used because its parameters overlap overload 1 (reportOverlappingOverload) ``` * fix: Resolve `list_*` overlapping overloads (apache#213) * fix: Resolve `list_value_length` overlaps * fix: Resolve `list_element` overlaps * fix: Resolve `list_(flatten|slice|parent_indices)` overlaps An improvement, but still not that accurate * fix: Include `VarianceOptions` in `TableGroupBy.aggregate` (apache#212) - Follow-up to apache#197 - Noticed while writing up (narwhals-dev/narwhals#2385) - We already use it for `std`, `var` in https://github.com/narwhals-dev/narwhals/blob/16427440e6d74939c403083b52ce3fb0af7d63c7/narwhals/_arrow/group_by.py#L81-L82 * [pre-commit.ci] pre-commit autoupdate (apache#202) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.2 → v0.11.5](astral-sh/ruff-pre-commit@v0.11.2...v0.11.5) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * fix: Resolve `Scalar.as_py` warnings for `DictionaryType` (apache#207) > scalar.pyi:75:20 - warning: TypeVar "_AsPyTypeK" appears only once in generic function signature > Use "object" instead (reportInvalidTypeVarUse) > scalar.pyi:85:20 - warning: TypeVar "_AsPyTypeK" appears only once in generic function signature > Use "object" instead (reportInvalidTypeVarUse) Instead just using `int`, which should be all that is possible from: https://github.com/zen-xu/pyarrow-stubs/blob/02552b81161d19d4aa71d8656b028eefac84612b/pyarrow-stubs/__lib_pxi/types.pyi#L154-L164 https://github.com/zen-xu/pyarrow-stubs/blob/02552b81161d19d4aa71d8656b028eefac84612b/pyarrow-stubs/__lib_pxi/types.pyi#L63-L70 * fix: Add default to `pc.sort_indices` (apache#216) * fix: Add default to `pc.sort_indices` Fixes narwhals-dev/narwhals#2390 (comment) Default is specified in https://arrow.apache.org/docs/python/generated/pyarrow.compute.sort_indices.html * refactor: Reuse some aliases * fix: Allow `list_size` with `Field` in `pa.list_` (apache#218) Closes apache#217 * allow `Table` or `RecordBatch` for dataset (apache#222) allow source argument pyarrow.dataset.dataset() to be RecordBatch | Table * refactor: Simplify `types` overloads (apache#219) * fix: `binary` overlap * fix: Simplify list constructors, `_Ordered` * refactor: Use `_Tz` default * fix: iter ChunkedArray should return scalar value (apache#224) * release: 19.2 (apache#225) * fix: Add missing `DictionaryArray` methods/properties (apache#226) ## Docs - https://arrow.apache.org/docs/python/generated/pyarrow.DictionaryArray.html#pyarrow.DictionaryArray.dictionary - https://arrow.apache.org/docs/python/generated/pyarrow.DictionaryArray.html#pyarrow.DictionaryArray.indices - https://arrow.apache.org/docs/python/generated/pyarrow.DictionaryArray.html#pyarrow.DictionaryArray.dictionary_decode - https://arrow.apache.org/docs/python/generated/pyarrow.DictionaryArray.html#pyarrow.DictionaryArray.dictionary_encode ## Fixes - 
https://github.com/narwhals-dev/narwhals/blob/c23e56c56630761f0fbc58b575a1c987e57d58d5/narwhals/_arrow/series.py#L787-L798 - https://github.com/narwhals-dev/narwhals/blob/c23e56c56630761f0fbc58b575a1c987e57d58d5/narwhals/_arrow/series_cat.py#L14-L18 * chore: use pyright as static type checker (apache#227) * use pyright as static type checker * make pyright happy * fix: fix pyright action (apache#229) fix github ci * fix: Match runtime behavior of `(Table|RecordBatch).select` (apache#221) * fix: Match runtime behavior of `(Table|RecordBatch).select` ## Resolves - https://github.com/MarcoGorelli/narwhals/blob/5b02b592183b8d39e2d32e0aedd6c234bb22d405/narwhals/_arrow/dataframe.py#L305-L307 - https://github.com/MarcoGorelli/narwhals/blob/5b02b592183b8d39e2d32e0aedd6c234bb22d405/narwhals/_arrow/dataframe.py#L285-L294 ##Description Following up on what I thought was a simple stub issue, but we're both *too strict* and *too permissive* in different ways ##Examples {placeholder} ##Related - https://github.com/apache/arrow/blob/d2ddee62329eb711572b4d71d6380673d7f7edd1/python/pyarrow/table.pxi#L4367-L4374 - https://github.com/apache/arrow/blob/d2ddee62329eb711572b4d71d6380673d7f7edd1/python/pyarrow/table.pxi#L1721-L1739 * update select * update select --------- Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * [pre-commit.ci] pre-commit autoupdate (apache#220) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.5 → v0.11.8](astral-sh/ruff-pre-commit@v0.11.5...v0.11.8) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * feat: narrow scalar when type is given (apache#230) * rename Uint -> UInt * feat: narrow scalar when type is given * release 19.3 (apache#231) * chore: pyright use strict mode (apache#233) * fix types * update array.pyi * update scalar.pyi * update * update array * update array * optimize chunked_array * optimizer iterchunks * update * update pyproject.toml * fix: pa.nulls accept type rather than types (apache#234) * [pre-commit.ci] pre-commit autoupdate (apache#232) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.8 → v0.11.9](astral-sh/ruff-pre-commit@v0.11.8...v0.11.9) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * release 19.4 (apache#235) * lint(pyright): disable reportUnknownMemberType (apache#239) * [pre-commit.ci] pre-commit autoupdate (apache#236) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.9 → v0.11.13](astral-sh/ruff-pre-commit@v0.11.9...v0.11.13) - [github.com/RobertCraigie/pyright-python: v1.1.400 → v1.1.401](RobertCraigie/pyright-python@v1.1.400...v1.1.401) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * feat: support pyarrow 20.0 (apache#240) * [pre-commit.ci] pre-commit autoupdate (apache#241) updates: - [github.com/RobertCraigie/pyright-python: v1.1.401 → v1.1.402](RobertCraigie/pyright-python@v1.1.401...v1.1.402) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * support docstring (apache#242) * doc: complete tensor doc * doc: complete table doc * doc: complete scalar doc * doc: complete orc doc * doc: complete memory doc * doc: complete lib doc * doc: complete json doc * doc: complete hdfs doc * doc: complete gcsfs doc * doc: complete fs doc * doc: complete flight doc * doc: complete dataset doc * doc: complete dataset parquet doc * doc: complete dataset parquet encryption doc * doc: complete cuda doc * doc: complete csv doc * doc: complete azurefs doc * doc: complete core doc 
* doc: complete interchange doc * doc: complete array doc * doc: complete builder doc * doc: complete device doc * doc: complete io doc * doc: complete ipc doc * doc: complete types doc * mark deprecated apis * doc: complete _compute doc * doc: complete compute doc * doc: update compute doc * lint code * release 20.0.0.20250618 (apache#243) * fix: make ParquetFileFormat constructor args optional (apache#244) * fix: Field.remove_metadata should return Self (apache#246) * [pre-commit.ci] pre-commit autoupdate (apache#245) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.13 → v0.12.0](astral-sh/ruff-pre-commit@v0.11.13...v0.12.0) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * release 20.0.0.20250627 (apache#247) * fix: chunked_array with type should be specified (apache#250) * [pre-commit.ci] pre-commit autoupdate (apache#248) updates: - [github.com/astral-sh/ruff-pre-commit: v0.12.0 → v0.12.3](astral-sh/ruff-pre-commit@v0.12.0...v0.12.3) - [github.com/RobertCraigie/pyright-python: v1.1.402 → v1.1.403](RobertCraigie/pyright-python@v1.1.402...v1.1.403) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * release 20.0.0.20250715 (apache#251) * fix: The type parameter of array should be covariant (apache#253) * release 20.0.0.20250716 (apache#254) * Add py.typed file to signify that the library is typed See the relevant PEP https://peps.python.org/pep-0561 * Prepare `pyarrow-stubs` for history merging MINOR: [Python] Prepare `pyarrow-stubs` for history merging Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * Add `ty` configuration and suppress error codes * One line per rule * Add licence header from original repo for all `.pyi` files * Revert "Add licence header from original repo for all `.pyi` files" This reverts commit 1631f39. * Prepare for licence merging * Exclude `stubs` from `rat` test * Add Apache licence clause to `py.typed` * Reduce list * Resolve merge conflict --------- Signed-off-by: Jonas Dedden <university@jonas-dedden.de> Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> Co-authored-by: Jim Bosch <talljimbo@gmail.com> Co-authored-by: Oliver Mannion <125105+tekumara@users.noreply.github.com> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Eugene Toder <eltoder@users.noreply.github.com> Co-authored-by: fvankrieken <fvankrieken@planning.nyc.gov> Co-authored-by: Ilia Ablamonov <ilia@flamefork.ru> Co-authored-by: Mathias Beguin <mathias.beguin@hotmail.com> Co-authored-by: Dylan Scott <dylan.scott@gmail.com> Co-authored-by: deanm0000 <37878412+deanm0000@users.noreply.github.com> Co-authored-by: Jan Moravec <moravecj@post.cz> Co-authored-by: Marius van Niekerk <marius.v.niekerk@gmail.com> Co-authored-by: Jonas Dedden <university@jonas-dedden.de> Co-authored-by: Fábio D. Batista <fabio@atelie.dev.br> Co-authored-by: ben-freist <93315290+ben-freist@users.noreply.github.com> Co-authored-by: Jiahao Yuan <kahojyun@icloud.com> Co-authored-by: Pim de Haan <pimdehaan@gmail.com> Co-authored-by: Dan Redding <125183946+dangotbanned@users.noreply.github.com> Co-authored-by: Tom Crasset <25140344+tcrasset@users.noreply.github.com> Co-authored-by: Tom McTiernan <tmct@users.noreply.github.com> Co-authored-by: Rok Mihevc <rok@mihevc.org>
rok added a commit to rok/arrow that referenced this pull request on Jul 24, 2025
https://github.com/narwhals-dev/narwhals/blob/c23e56c56630761f0fbc58b575a1c987e57d58d5/narwhals/_arrow/series_cat.py#L14-L18 * chore: use pyright as static type checker (apache#227) * use pyright as static type checker * make pyright happy * fix: fix pyright action (apache#229) fix github ci * fix: Match runtime behavior of `(Table|RecordBatch).select` (apache#221) * fix: Match runtime behavior of `(Table|RecordBatch).select` - https://github.com/MarcoGorelli/narwhals/blob/5b02b592183b8d39e2d32e0aedd6c234bb22d405/narwhals/_arrow/dataframe.py#L305-L307 - https://github.com/MarcoGorelli/narwhals/blob/5b02b592183b8d39e2d32e0aedd6c234bb22d405/narwhals/_arrow/dataframe.py#L285-L294 Following up on what I thought was a simple stub issue, but we're both *too strict* and *too permissive* in different ways {placeholder} - https://github.com/apache/arrow/blob/d2ddee62329eb711572b4d71d6380673d7f7edd1/python/pyarrow/table.pxi#L4367-L4374 - https://github.com/apache/arrow/blob/d2ddee62329eb711572b4d71d6380673d7f7edd1/python/pyarrow/table.pxi#L1721-L1739 * update select * update select --------- Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * [pre-commit.ci] pre-commit autoupdate (apache#220) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.5 → v0.11.8](astral-sh/ruff-pre-commit@v0.11.5...v0.11.8) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * feat: narrow scalar when type is given (apache#230) * rename Uint -> UInt * feat: narrow scalar when type is given * release 19.3 (apache#231) * chore: pyright use strict mode (apache#233) * fix types * update array.pyi * update scalar.pyi * update * update array * update array * optimize chunked_array * optimizer iterchunks * update * update pyproject.toml * fix: pa.nulls accept type rather than types (apache#234) * [pre-commit.ci] pre-commit autoupdate (apache#232) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.8 → v0.11.9](astral-sh/ruff-pre-commit@v0.11.8...v0.11.9) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * release 19.4 (apache#235) * lint(pyright): disable reportUnknownMemberType (apache#239) * [pre-commit.ci] pre-commit autoupdate (apache#236) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.9 → v0.11.13](astral-sh/ruff-pre-commit@v0.11.9...v0.11.13) - [github.com/RobertCraigie/pyright-python: v1.1.400 → v1.1.401](RobertCraigie/pyright-python@v1.1.400...v1.1.401) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * feat: support pyarrow 20.0 (apache#240) * [pre-commit.ci] pre-commit autoupdate (apache#241) updates: - [github.com/RobertCraigie/pyright-python: v1.1.401 → v1.1.402](RobertCraigie/pyright-python@v1.1.401...v1.1.402) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * support docstring (apache#242) * doc: complete tensor doc * doc: complete table doc * doc: complete scalar doc * doc: complete orc doc * doc: complete memory doc * doc: complete lib doc * doc: complete json doc * doc: complete hdfs doc * doc: complete gcsfs doc * doc: complete fs doc * doc: complete flight doc * doc: complete dataset doc * doc: complete dataset parquet doc * doc: complete dataset parquet encryption doc * doc: complete cuda doc * doc: complete csv doc * doc: complete azurefs doc * doc: complete core doc * doc: complete interchange doc * doc: complete array doc * doc: complete builder doc * doc: complete device doc * doc: complete io doc * doc: complete ipc doc * doc: 
complete types doc * mark deprecated apis * doc: complete _compute doc * doc: complete compute doc * doc: update compute doc * lint code * release 20.0.0.20250618 (apache#243) * fix: make ParquetFileFormat constructor args optional (apache#244) * fix: Field.remove_metadata should return Self (apache#246) * [pre-commit.ci] pre-commit autoupdate (apache#245) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.13 → v0.12.0](astral-sh/ruff-pre-commit@v0.11.13...v0.12.0) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * release 20.0.0.20250627 (apache#247) * fix: chunked_array with type should be specified (apache#250) * [pre-commit.ci] pre-commit autoupdate (apache#248) updates: - [github.com/astral-sh/ruff-pre-commit: v0.12.0 → v0.12.3](astral-sh/ruff-pre-commit@v0.12.0...v0.12.3) - [github.com/RobertCraigie/pyright-python: v1.1.402 → v1.1.403](RobertCraigie/pyright-python@v1.1.402...v1.1.403) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * release 20.0.0.20250715 (apache#251) * fix: The type parameter of array should be covariant (apache#253) * release 20.0.0.20250716 (apache#254) * Add py.typed file to signify that the library is typed See the relevant PEP https://peps.python.org/pep-0561 * Prepare `pyarrow-stubs` for history merging MINOR: [Python] Prepare `pyarrow-stubs` for history merging Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> * Add `ty` configuration and suppress error codes * One line per rule * Add licence header from original repo for all `.pyi` files * Revert "Add licence header from original repo for all `.pyi` files" * Prepare for licence merging * Exclude `stubs` from `rat` test * Add Apache licence clause to `py.typed` * Reduce list * Resolve merge conflict --------- Signed-off-by: Jonas Dedden <university@jonas-dedden.de> Co-authored-by: ZhengYu, Xu <zen-xu@outlook.com> Co-authored-by: Jim Bosch <talljimbo@gmail.com> Co-authored-by: Oliver Mannion <125105+tekumara@users.noreply.github.com> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Eugene Toder <eltoder@users.noreply.github.com> Co-authored-by: fvankrieken <fvankrieken@planning.nyc.gov> Co-authored-by: Ilia Ablamonov <ilia@flamefork.ru> Co-authored-by: Mathias Beguin <mathias.beguin@hotmail.com> Co-authored-by: Dylan Scott <dylan.scott@gmail.com> Co-authored-by: deanm0000 <37878412+deanm0000@users.noreply.github.com> Co-authored-by: Jan Moravec <moravecj@post.cz> Co-authored-by: Marius van Niekerk <marius.v.niekerk@gmail.com> Co-authored-by: Jonas Dedden <university@jonas-dedden.de> Co-authored-by: Fábio D. Batista <fabio@atelie.dev.br> Co-authored-by: ben-freist <93315290+ben-freist@users.noreply.github.com> Co-authored-by: Jiahao Yuan <kahojyun@icloud.com> Co-authored-by: Pim de Haan <pimdehaan@gmail.com> Co-authored-by: Dan Redding <125183946+dangotbanned@users.noreply.github.com> Co-authored-by: Tom Crasset <25140344+tcrasset@users.noreply.github.com> Co-authored-by: Tom McTiernan <tmct@users.noreply.github.com> Co-authored-by: Rok Mihevc <rok@mihevc.org>
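One item in the squashed commit above (apache#215) converts the `pyarrow.types.is_*` predicates into `TypeIs` guards. Below is a minimal sketch of that pattern, assuming `typing_extensions >= 4.10` is available; the wrapper name `is_timestamp_type` is illustrative only and is not part of the stubs, which apply the pattern to the existing predicates instead.

```python
# Minimal sketch of a TypeIs-based guard (illustrative, not the stubs' code).
import pyarrow as pa
from typing_extensions import TypeIs  # assumption: typing_extensions installed


def is_timestamp_type(t: pa.DataType) -> TypeIs[pa.TimestampType]:
    # Runtime behaviour is unchanged; only the static narrowing is new.
    return pa.types.is_timestamp(t)


t: pa.DataType = pa.timestamp("ms", tz="UTC")
if is_timestamp_type(t):
    # A type checker now treats `t` as TimestampType inside this branch,
    # so .unit and .tz resolve without a cast.
    print(t.unit, t.tz)
```

The practical benefit is that callers no longer need a `cast` after an `is_*` check.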
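Another item above (apache#197, refined by apache#212) fully specs `TableGroupBy.aggregate`, including option objects such as `VarianceOptions`. A short usage sketch of the signature those hints describe, with made-up table contents:

```python
# Each aggregation is (column, function) or (column, function, FunctionOptions);
# output columns are named "<column>_<function>".
import pyarrow as pa
import pyarrow.compute as pc

t = pa.table({"k": ["a", "a", "b"], "v": [1.0, 2.0, 4.0]})
result = t.group_by("k").aggregate(
    [
        ("v", "sum"),                                   # -> "v_sum"
        ("v", "variance", pc.VarianceOptions(ddof=1)),  # -> "v_variance"
    ]
)
print(result)  # columns include "v_sum" and "v_variance" alongside the key "k"
```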
I also removed an unneeded `Py_XDECREF` from ARROW-30; didn't want to create a separate patch for that.