Releases: pola-rs/polars

Rust Polars 0.42.0

14 Aug 14:59

💥 Breaking changes

  • Reject literal input in sort_by_exprs() (#17606)

🚀 Performance improvements

  • Skip parquet page when unneeded (#18192)
  • Improve binview extend/ifthenelse (#18164)
  • Start on better Parquet delta decoding (#18049)
  • Tune jemalloc to not create muzzy pages (#18148)
  • Reduce default async thread count (#18142)
  • Use single threaded algorithms if only 1 core given (#18101)
  • Use Arc<Vec<_>> instead of Arc<[_]> for paths and hive partitions (#18066)
  • SIMD View from FixedSizeBinary (#18059)
  • Use bitmask to filter Parquet predicate-pushdown items (#17993)
  • Zerocopy buffers for FixedSizeBinary to BinaryView cast (#18043)
  • Integer fast path Parquet dict encoding (#18030)
  • Speedup writing of Parquet primitive values (#18020)
  • Remove temporary allocations in Parquet (#18013)
  • Delay selection expansion (#18011)
  • Optimize strings slices (#17996)
  • Make .dt.weekday 20x faster (#17992)
  • Shrink MemSliceInner enum (#17991)
  • Push down slice with non-zero offset to Parquet (#17972)
  • Reduce copy in MemSlice (#17983)
  • Ensure metadata flags are maintained on vertical parallelization (#17804)
  • Ensure only nodes that are not changed are cached in collapse optimizer (#17791)
  • Use bitflags for OptState (#17788)
  • Remove async directory auto-detection (#17779)
  • Fix accidental quadratic horizontal concat (#17783)
  • Batch parquet integer decoding (#17734)
  • Use mmap-ed memory if possible in Parquet reader (#17725)
  • Use bitflags for function options (#17723)
  • Introduce MemReader to file buffer in Parquet reader (#17712)
  • Better GC and push_view for binviews (#17627)
  • Fix pathological perf issue in window-order-by (#17650)
  • Cache path resolving of scan functions (#17616)
  • Add ArrayChunks to optimize codegen of BatchDecoder (#17632)
  • Rechunk before we go into grouped gathers (#17623)
  • Cache schema resolve back to DSL (#17610)
  • Add fastpath for when rounding by single constant durations (#17580)
  • Improve parallelism in writing hive parquet (#17512)
  • Support datetime in predicate during hive partition pruning (#17545)
  • Batch nested embed parquet decoding (#17549)
  • Batch nested Parquet decoding (#17542)
  • Collect Parquet dictionary binary as view (#17475)
  • Keep more parallelism when CSE plan cache hits (#17463)
  • Batch parquet primitive decoding (#17462)
  • Respect allow_threading in some more operators (#17450)
  • Parallelize parquet metadata deserialization (#17399)

✨ Enhancements

  • Create literals for datetime/date expressions (#18184)
  • Create literals in 'datetime' expression (#18182)
  • Add missing impl for Series (#18166)
  • Raise on invalid 'is_between' and improve error message quality (#18147)
  • Add boolean Parquet HybridRle encoding (#18022)
  • Add nested SQL join support (#18006)
  • Push down slice with non-zero offset to Parquet (#17972)
  • Add support for binary size method to Expr and Series "bin" namespace (#17924)
  • Add SQL interface support for PostgreSQL dollar-quoted string literals (#17940)
  • Allow for parsing parquet file where the time zone is stored as lowercase "utc" (#17925)
  • Expose binary_elementwise_into_string_amortized for plugin authors, recommend apply_into_string_amortized instead of apply_to_buffer (#17903)
  • Decompress in CSV / NDJSON scan (#17841)
  • Ensure unique names in HConcat (#17884)
  • Support authentication with HuggingFace login (#17881)
  • Support "BY NAME" qualifier for SQL "INTERSECT" and "EXCEPT" set ops (#17835)
  • Raise informative error instead of panicking when passing invalid directives to to_string for Date dtype (#17670)
  • Implement forward/backward fill for all types (#17861)
  • Implement is_in operation on decimal type (#17832)
  • Support hf:// in read_(csv|ipc|ndjson) functions (#17785)
  • Allow literals in sort (#17780)
  • Cloud support for NDJSON (#17717)
  • Support API token for scanning hf:// (#17682)
  • Raise error instead of panic in unsupported serde (#17679)
  • Include file path option for NDJSON (#17681)
  • Hugging Face path expansion (#17665)
  • Add DSL validation for cloud eligible check (#17287)
  • Raise informative error message if non-IntoExpr is passed by name in *Frame.group_by (#17654)
  • Change API for writing partitioned Parquet to reduce code duplication (#17586)
  • Cache schema resolve back to DSL (#17610)
  • Expose returns_scalar to map_elements (#17613)
  • Add option to include file path for Parquet, IPC, CSV scans (#17563)
  • Support describe on decimal (#15092)
  • Support datetime in predicate during hive partition pruning (#17545)
  • Raise more informative error message for directories containing files with mixed extensions (#17480)
  • Exclude empty files from directory/glob expansion (#17478)
  • Add "future" versioning (#17421)
  • Apply slice pushdown immediately to in-memory frames (#17459)
  • Support writing hive partitioned parquet (#17324)
  • Add right join support (#17441)
  • Support hive partitioning in scan_ipc (#17434)

🐞 Bug fixes

  • Fix struct shift and list builder (#18189)
  • Don't load Parquet nested metadata (#18183)
  • Throw bigidx error for Parquet row-count (#18154)
  • Fix unpivot on empty df (#18179)
  • Don't vertically parallelize cse contexts (#18177)
  • Properly handle empty Parquet row groups with no dictionary (#18161)
  • Struct outer nullability (#18156)
  • Fix pyarrow predicate pushdown regression (#18145)
  • Prevent unwanted supertype cast in 'search_sorted' (#18143)
  • Parquet with filter=None (#18139)
  • Don't raise when converting from pandas if index contains duplicate names when include_index=False (the default) (#18133)
  • Don't remove leading whitespace in read_csv (#18131)
  • Py-polars compilation with no features (#18129)
  • String transform to_titlecase was too narrowly defined (#18122)
  • Reading Parquet with Null dictionary page (#18112)
  • Incorrect lazy CSV select(len()) for compressed files (#18067)
  • Fix sink_ipc_cloud panicking with runtime error (#18091)
  • Properly write Parquet for sliced lists (#18073)
  • Panic reading multiple CSV files from cloud (#18056)
  • Fix CloudWriter to use buffer before making requests (#18027)
  • Fix typos and remove trailing whitespace (#18024)
  • Handle cfg(feature) for shrink_dtype (#18038)
  • Subtraction with overflow on negative slice offset in Parquet (#18036)
  • Add nested SQL join support (#18006)
  • Allow read_csv schema to take unparsable types (#17765)
  • Multi-output column expressions in frame sort method (#17947)
  • Fix Asof join by schema (#17988)
  • Fix glob resolution for Hugging Face (#17958)
  • Several parquet reader/writer regressions (#17941)
  • Incorrect filter on categorical columns from parquet files (#17950)
  • SQL COUNT(DISTINCT x) should not include NULL values (#17930)
  • Scanning '%' from cloud (#17890)
  • Respect glob=False for cloud reads (#17860)
  • Properly write nest-nulled values in Parquet (#17845)
  • Allow full-null Object series to be built (#17870)
  • Fix from_arrow for struct type (#17839)
  • Infer decimal scales on mixed scale input (#17840)
  • Raise on unsupported fill strategy dtype (#17837)
  • Properly write nested NullArray in Parquet (#17807)
  • Check input type on list.to_struct (#17834)
  • Fix right join schema (#17833)
  • Non-compliant Parquet list element name (#17803)
  • Correctly set should_broadcast flag in HStack CSE rewrite (#17784)
  • Fix projection pushdown of literals without names (#17778)
  • Don't expand HTTP paths (#17774)
  • Check function input length at expansion (#17763)
  • Don't panic in invalid agg_groups (#17762)
  • Raise empty struct (#17736)
  • Fix GC logic in write_ipc (#17752)
  • Panic in pl.concat_list and list.concat on empty inputs (#17742)
  • Fix out nullability for structs coming from arrow (#17738)
  • Percent encode for Hugging Face paths (#17718)
  • Use bytemuck in slice reinterpret for Parquet ArrayChunks (#17700)
  • Propagate struct outer nullability eagerly (#17697)
  • Use ETag for HTTP file cache invalidation (#17684)
  • Fix type inference failure caused by double transpose (#17663)
  • Interpret %y consistently with Chrono in to_date/to_datetime/strptime (#17661)
  • Fix explode invalid check (#17651)
  • Tighten up error checking on join keys (#17517)
  • Expand brackets in async glob expansion (#17630)
  • Fix row index disappearing after projection pushdown in NDJSON (#17631)
  • Fix struct -> enum is_in (#17622)
  • Don't needlessly unwrap in pivot_schema (#17611)
  • Reject literal input in sort_by_exprs() (#17606)
  • Bitmap collect into safety (#17588)
  • Method dt.truncate was sometimes returning incorrect results for pre-1970 datetimes (#17582)
  • Defer path expansion until collect in file scan methods (#17532)
  • Correct logic for descending sort of BooleanChunked (#17558)
  • Don't unwrap send attempt to oneshot channel (#17566)
  • Fix scanning from HTTP cloud paths (#17571)
  • Properly implement struct (#17522)
  • Add missing commas in python IR interchange (#17518)
  • Fix predicate pushdown for .list.(get|gather) (#17511)
  • Turn panic into error when serializing Object types (#17353)
  • Fix struct expansion and raise on exclude (#17489)
  • Fix decimal dyn float supertype (#17464)
  • Don't rechunk on phys_repr (#17461)
  • Harden alchemy session for old sqlalchemy versions (#17366)
  • Fix swapping rename schema (#17458)
  • Raise on oob decimal precision (#17445)
  • Don't allow json inference method to be chunked/streaming (#17396)
  • Avoid panic when projecting solitary count into empty frame (#17393)
  • Set literal nesting to 0 (#17392)
  • Fix scanning cloud paths with spaces (#17379)
  • Fix slice length no longer allowing None (#17372)
  • Cull row index in scan if projection pushdown removes it (#17363)
  • Fix typo in SchemaError exception message (#17350)

📖 Documentation

  • Mention 'Array' in data types overview (#18060)
  • Correct concat rech...

Python Polars 1.5.0

14 Aug 19:02

🚀 Performance improvements

  • Improve binview extend/ifthenelse (#18164)
  • Start on better Parquet delta decoding (#18049)
  • Rechunk group-by __iter__ (#18162)
  • Tune jemalloc to not create muzzy pages (#18148)
  • Reduce default async thread count (#18142)
  • Make expensive selector expansion lazy (#18118)
  • Use single threaded algorithms if only 1 core given (#18101)
  • Use Arc<Vec<_>> instead of Arc<[_]> for paths and hive partitions (#18066)
  • SIMD View from FixedSizeBinary (#18059)
  • Use bitmask to filter Parquet predicate-pushdown items (#17993)
  • Zerocopy buffers for FixedSizeBinary to BinaryView cast (#18043)

✨ Enhancements

  • Create literals for datetime/date expressions (#18184)
  • Create literals in 'datetime' expression (#18182)
  • Expose top-level "has_header" param for read_excel and read_ods (#18078)
  • Raise on invalid 'is_between' and improve error message quality (#18147); see the sketch after this list
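
A minimal sketch of the two enhancements above, assuming the keyword names match the entries; the Excel call is commented out because it needs a real file, and the path shown is hypothetical:

```python
import polars as pl

df = pl.DataFrame({"x": [1, 3, 5, 7]})

# Invalid `is_between` input (e.g. an unknown `closed` value) now raises a
# clear error; valid calls work as before.
in_range = df.select(pl.col("x").is_between(2, 6, closed="both"))

# `has_header` is now a top-level parameter of read_excel / read_ods.
# sheet = pl.read_excel("report.xlsx", has_header=False)  # hypothetical file
```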

🐞 Bug fixes

  • Fix struct shift and list builder (#18189)
  • Don't load Parquet nested metadata (#18183)
  • Throw bigidx error for Parquet row-count (#18154)
  • Fix unpivot on empty df (#18179)
  • Don't vertically parallelize cse contexts (#18177)
  • Ensure default values are included when saving/restoring the current Config state (#18151)
  • Properly handle empty Parquet row groups with no dictionary (#18161)
  • Struct outer nullability (#18156)
  • Fix pyarrow predicate pushdown regression (#18145)
  • Prevent unwanted supertype cast in 'search_sorted' (#18143)
  • Parquet with filter=None (#18139)
  • Don't raise when converting from pandas if index contains duplicate names when include_index=False (the default) (#18133)
  • Fix Float-to-String cast so the value is no longer converted to Integer before being converted to String (#18123)
  • Don't remove leading whitespace in read_csv (#18131)
  • Py-polars compilation with no features (#18129)
  • String transform to_titlecase was too narrowly defined (#18122)
  • Reading Parquet with Null dictionary page (#18112)
  • When setting write_excel column totals, don't forget to include any row-total cols (#18042)
  • Incorrect lazy CSV select(len()) for compressed files (#18067)
  • Fix sink_ipc_cloud panicking with runtime error (#18091)
  • Properly write Parquet for sliced lists (#18073)
  • Panic reading multiple CSV files from cloud (#18056)
  • Fix CloudWriter to use buffer before making requests (#18027)
  • Fix typos and remove trailing whitespace (#18024)
  • Handle cfg(feature) for shrink_dtype (#18038)

📖 Documentation

  • Fix references to old methods in lazy docstring (#18178)
  • Include PyCapsule Interface in DataFrame and Series API docs (#18174)
  • Corrected example result in group_by docs (#18169)
  • Mention 'Array' in data types overview (#18060)
  • Correct concat rechunk in user guide (#18080)
  • Fix typo in title of Hugging Face docs page (#18097)
  • Update pivot docstring for clarity (#18000)

🛠️ Other improvements

  • Remove unneeded growable (#18165)
  • Update Cargo.lock to fix build error on Linux (#18153)
  • Remove Nth, Wildcard from ExprIR and make conversion fallible (#18115)

Thank you to all our contributors for making this release possible!
@EricTulowetzke, @KDruzhkin, @MarcoGorelli, @Vincenthays, @alexander-beedie, @coastalwhite, @davanstrien, @deanm0000, @ember91, @kylebarron, @mcrumiller, @nameexhaustion, @orlp, @philss, @ritchie46 and @rosstitmarsh

Python Polars 1.4.1

04 Aug 12:51

🚀 Performance improvements

  • Integer fast path Parquet dict encoding (#18030)
  • Speedup writing of Parquet primitive values (#18020)
  • Remove temporary allocations in Parquet (#18013)

✨ Enhancements

  • Add boolean Parquet HybridRle encoding (#18022)
  • Support passing Worksheet objects to the write_excel method (#18031)

🐞 Bug fixes

  • Subtraction with overflow on negative slice offset in Parquet (#18036)
  • Fix drop selector (#18034)

📖 Documentation

  • Update map_batches docstring (#18001)

Thank you to all our contributors for making this release possible!
@alexander-beedie, @coastalwhite, @deanm0000, @nameexhaustion and @ritchie46

Python Polars 1.4.0

02 Aug 10:35

🚀 Performance improvements

  • Delay selection expansion (#18011)
  • Optimize strings slices (#17996)
  • Make .dt.weekday 20x faster (#17992)
  • Shrink MemSliceInner enum (#17991)
  • Push down slice with non-zero offset to Parquet (#17972)
  • Reduce copy in MemSlice (#17983)

✨ Enhancements

  • Add nested SQL join support (#18006)
  • Push down slice with non-zero offset to Parquet (#17972)
  • Add support for binary size method to Expr and Series "bin" namespace (#17924); see the sketch after this list
  • IO plugins (#17939)
  • Add SQL interface support for PostgreSQL dollar-quoted string literals (#17940)
  • Allow for parsing parquet file where the time zone is stored as lowercase "utc" (#17925)
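
A minimal sketch of the binary `size` method and the dollar-quoted SQL literals, assuming the method name and SQL syntax follow the entries above:

```python
import polars as pl

df = pl.DataFrame({"payload": [b"abc", b"\x00\x01", None]})

# Report the size of each binary value in bytes.
sizes = df.select(pl.col("payload").bin.size())

# The SQL interface accepts PostgreSQL-style dollar-quoted string literals.
greeting = pl.sql("SELECT $$it's quoted$$ AS s").collect()
```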

🐞 Bug fixes

  • Add nested SQL join support (#18006)
  • Respect strict argument (#17990)
  • Multi-output column expressions in frame sort method (#17947)
  • Fix Asof join by schema (#17988)
  • Set default flags for FFI plugin (#17984)
  • Fix glob resolution for Hugging Face (#17958)
  • Several parquet reader/writer regressions (#17941)
  • Incorrect filter on categorical columns from parquet files (#17950)
  • SQL COUNT(DISTINCT x) should not include NULL values (#17930)
  • Default to None in pycapsule interface export (#17922)

📖 Documentation

  • Fix aggregation guide discrepancies (#18003)
  • Ensure last is never ambiguous with max (#17962)
  • Documentation for Arrow PyCapsule interface integration (#17935)
  • Fix Hugging Face link in user guide (#17943)

🛠️ Other improvements

  • Add unit tests for str.contains_any and str.replace_many (#17961)
  • Suggest allow_null as replacement (#17969)
  • Remove apply_generic, use unary_elementwise (#17902)
  • Add general filters in Parquet (#17910)

Thank you to all our contributors for making this release possible!
@JamesCE2001, @MarcoGorelli, @alexander-beedie, @coastalwhite, @deanm0000, @deepyaman, @dependabot, @dependabot[bot], @henryharbeck, @kylebarron, @nameexhaustion, @ritchie46 and @wangxiaoying

Python Polars 1.3.0

28 Jul 09:54

🚀 Performance improvements

  • Ensure metadata flags are maintained on vertical parallelization (#17804)
  • Ensure only nodes that are not changed are cached in collapse optimizer (#17791)
  • Use bitflags for OptState (#17788)
  • Remove async directory auto-detection (#17779)
  • Fix accidental quadratic horizontal concat (#17783)
  • Batch parquet integer decoding (#17734)
  • Use mmap-ed memory if possible in Parquet reader (#17725)
  • Use bitflags for function options (#17723)
  • Also set target features and tune cpu for CC (#17716)
  • Introduce MemReader to file buffer in Parquet reader (#17712)

✨ Enhancements

  • Expose binary_elementwise_into_string_amortized for plugin authors, recommend apply_into_string_amortized instead of apply_to_buffer (#17903)
  • Expose allocator to capsule (#17817)
  • Decompress in CSV / NDJSON scan (#17841)
  • Ensure unique names in HConcat (#17884)
  • Support authentication with HuggingFace login (#17881)
  • Enable collection with gpu engine (#17550)
  • Support "BY NAME" qualifier for SQL "INTERSECT" and "EXCEPT" set ops (#17835)
  • Write data at table level in write_excel (#17757)
  • Support PyCapsule Interface in DataFrame & Series constructors (#17693)
  • Implement Arrow PyCapsule Interface for Series/DataFrame export (#17676)
  • Raise informative error instead of panicking when passing invalid directives to to_string for Date dtype (#17670)
  • Implement forward/backward fill for all types (#17861)
  • Implement is_in operation on decimal type (#17832)
  • Optimise read_excel when using "calamine" engine with the latest fastexcel (#17735)
  • Support hf:// in read_(csv|ipc|ndjson) functions (#17785); see the sketch after this list
  • Allow literals in sort (#17780)
  • Expose 'strict' argument to 'is_in' (#17776)
  • Release the GIL in collect_schema (#17761)
  • Cloud support for NDJSON (#17717)
  • Support API token for scanning hf:// (#17682)
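
A minimal sketch of two of the entries above; the hf:// path is hypothetical (so the read is commented out), and the list column assumes nested types are covered by the new fill support:

```python
import polars as pl

# read_csv / read_ipc / read_ndjson now accept Hugging Face paths directly.
# df = pl.read_csv("hf://datasets/some-org/some-dataset/data.csv")  # hypothetical path

# forward_fill / backward_fill are now implemented for all data types.
nested = pl.DataFrame({"vals": [[1, 2], None, [3]]})
filled = nested.select(pl.col("vals").forward_fill())
```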

🐞 Bug fixes

  • Scanning '%' from cloud (#17890)
  • Raise suitable error when invalid column passed to get_column_index (#17868)
  • Respect glob=False for cloud reads (#17860)
  • Properly write nest-nulled values in Parquet (#17845)
  • Improve default write_excel int/float format when using a dark "table_style" (#17869)
  • Fix from_arrow for struct type (#17839)
  • Fix bool/string usage of "column_totals" parameter in write_excel (#17846)
  • Infer decimal scales on mixed scale input (#17840)
  • Don't ignore timezones in list of dicts constructor (#14211)
  • Raise on unsupported fill strategy dtype (#17837)
  • Properly write nested NullArray in Parquet (#17807)
  • Check input type on list.to_struct (#17834)
  • Fix right join schema (#17833)
  • Simultaneous usage of named_expr and schema in pl.struct (#17768)
  • Fix projection pushdown of literals without names (#17778)
  • Don't expand HTTP paths (#17774)
  • Check function input length at expansion (#17763)
  • Don't panic in invalid agg_groups (#17762)
  • Raise empty struct (#17736)
  • Fix GC logic in write_ipc (#17752)
  • Panic in pl.concat_list and list.concat on empty inputs (#17742)
  • Fix out nullability for structs coming from arrow (#17738)
  • Percent encode for Hugging Face paths (#17718)

📖 Documentation

  • Update the join example input for Rust for consistency with the Python example (#17898)
  • Improve filter documentation (#17755)
  • Reword "how" param docstring entry for 'semi' and 'anti' join types for clarity (#17843)
  • Mention read_* functions in Hugging Face section in user guide (#17799)
  • Show return type for Series attributes in API reference (#17759)
  • Add function with multiple arguments example to Expr.map_batches (#17789)
  • Add Hugging Face section to user guide (#17721)

📦 Build system

  • Update Rust toolchain to nightly-2024-07-26 (#17891)
  • Correctly reference released package in optional dependencies (#17691)

🛠️ Other improvements

  • On Python release, trigger docs build after API reference build (#17904)
  • Set uv pip install to verbose (#17901)
  • Fix broken typos command in make pre-commit for py-polars folder (#17897)
  • Remove HybridRLE iter / batch nested parquet decoding (#17889)
  • Add version field for python IR (#17876)
  • Pass through missing rolling and stringfunction information in pyir (#17702)
  • Make better use of typos configuration features (#17800)
  • Better deprecate message for _import_from_c (#17753)
  • Rename Unit to Plain in Parquet reader (#17751)
  • Unpin setuptools (#17726)
  • Update CODEOWNERS (#17707)

Thank you to all our contributors for making this release possible!
@MarcoGorelli, @Object905, @SandroCasagrande, @alexander-beedie, @atigbadr, @coastalwhite, @deanm0000, @delsner, @dependabot, @dependabot[bot], @henryharbeck, @implicit-apparatus, @jparag, @knl, @kylebarron, @lukapeschke, @mcrumiller, @nameexhaustion, @orlp, @ritchie46, @ruihe774, @stinodego, @szepeviktor and @wence-

Python Polars 1.2.1

18 Jul 18:12

🚀 Performance improvements

  • Specify tune-cpu & add more features (#17615)
  • Better GC and push_view for binviews (#17627)

✨ Enhancements

  • Raise error instead of panic in unsupported serde (#17679)
  • Expose Arrow C interface directly on Polars (#17696)
  • Include file path option for NDJSON (#17681)

🐞 Bug fixes

  • Use bytemuck in slice reinterpret for Parquet ArrayChunks (#17700)
  • Remove non-existing names from __all__ (#17494)
  • Fix return type hint for LazyFrame sink methods (#17698)
  • Propagate struct outer nullability eagerly (#17697)
  • Address read_database issue with batched reads from Snowflake (#17688)
  • Use ETag for HTTP file cache invalidation (#17684)

📖 Documentation

  • Fixed default name for value_counts methods based on normalize parameter (#17685)

📦 Build system

  • Pin setuptools to fix failing CI (#17695)

🛠️ Other improvements

  • Fix return type hint for LazyFrame sink methods (#17698)
  • Pin setuptools to fix failing CI (#17695)
  • Name tests so they actually run (#17690)
  • Add reduce ComputeNode in new streaming engine (#17389)

Thank you to all our contributors for making this release possible!
@5j9, @ByteNybbler, @MarcoGorelli, @alexander-beedie, @coastalwhite, @diegoglozano, @eitsupi, @nameexhaustion, @orlp, @ragyabraham, @ritchie46 and @ruihe774

Python Polars 1.2.0

16 Jul 16:14

🚀 Performance improvements

  • Fix pathological perf issue in window-order-by (#17650)
  • Cache path resolving of scan functions (#17616)
  • Add ArrayChunks to optimize codegen of BatchDecoder (#17632)
  • Rechunk before we go into grouped gathers (#17623)
  • Cache schema resolve back to DSL (#17610)
  • Add fastpath for when rounding by single constant durations (#17580)
  • Improve parallelism in writing hive parquet (#17512)
  • Support datetime in predicate during hive partition pruning (#17545)
  • Batch nested embed parquet decoding (#17549)
  • Batch nested Parquet decoding (#17542)
  • Collect Parquet dictionary binary as view (#17475)

✨ Enhancements

  • Hugging Face path expansion (#17665)
  • Add DSL validation for cloud eligible check (#17287)
  • Raise informative error message if non-IntoExpr is passed by name in *Frame.group_by (#17654)
  • Add infer_schema parameter to read_csv / scan_csv (#17617); see the sketch after this list
  • Change API for writing partitioned Parquet to reduce code duplication (#17586)
  • Cache schema resolve back to DSL (#17610)
  • Expose returns_scalar to map_elements (#17613)
  • Add option to include file path for Parquet, IPC, CSV scans (#17563)
  • Support describe on decimal (#15092)
  • Support datetime in predicate during hive partition pruning (#17545)
  • Raise more informative error message for directories containing files with mixed extensions (#17480)
  • Exclude empty files from directory/glob expansion (#17478)
  • Support use of SQLAlchemy "Connectable" in write_database (#17470)
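
A minimal sketch of the infer_schema and returns_scalar additions, assuming the keyword names match the entries above:

```python
import io

import polars as pl

# With infer_schema=False every column is read as String (no type inference).
csv = io.StringIO("a,b\n1,x\n2,y\n")
df = pl.read_csv(csv, infer_schema=False)

# returns_scalar tells map_elements that the callable yields one value per group.
out = (
    pl.DataFrame({"g": ["a", "a", "b"], "v": [1, 2, 3]})
    .group_by("g")
    .agg(
        pl.col("v").map_elements(lambda s: s.sum(), returns_scalar=True, return_dtype=pl.Int64)
    )
)
```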

🐞 Bug fixes

  • Support duplicate expression names when calling ufuncs (#17641)
  • Interpret %y consistently with Chrono in to_date/to_datetime/strptime (#17661)
  • Fix explode invalid check (#17651)
  • Raise for overlapping index/column names in pandas dataframes post string coercion (#17628)
  • Expand brackets in async glob expansion (#17630)
  • Fix row index disappearing after projection pushdown in NDJSON (#17631)
  • Fix struct -> enum is_in (#17622)
  • Don't needlessly unwrap in pivot_schema (#17611)
  • Reject literal input in sort_by_exprs() (#17606)
  • Don't enforce row order in join test results where not guaranteed (#17596)
  • Bitmap collect into safety (#17588)
  • Make schema picklable (#17524)
  • Handle current position of file objects (#17543)
  • Set O_CLOEXEC on duplicated file descriptor (#17537)
  • Method dt.truncate was sometimes returning incorrect results for pre-1970 datetimes (#17582)
  • Defer path expansion until collect in file scan methods (#17532)
  • Fix retries parameter in scan functions not taking effect when it was set to 0 (#17564)
  • Don't unwrap send attempt to oneshot channel (#17566)
  • Fix scanning from HTTP cloud paths (#17571)
  • Properly implement struct (#17522)
  • Add right to lazyframe join docstring (#17529)
  • Fix predicate pushdown for .list.(get|gather) (#17511)
  • Make sure scan_ipc does not go through fsspec (#17495)
  • Turn panic into error when serializing Object types (#17353)
  • Fix struct expansion and raise on exclude (#17489)
  • Normalize path in sink_csv (#17476)

📖 Documentation

  • Update plot docs to refer to docstrings (#17504)
  • Rename str.lengths to str.len_bytes in description text (#11577) (#17626)
  • Create example for polars.Expr.bin.decode (#17508)
  • Add right join in the user guide (#17608)
  • Adjust rendering of links in read_database_uri docstring (#17536)
  • Update SQL examples in README (#17568)
  • Fixup "deprecated" directive for DataFrame.melt and LazyFrame.melt (#17530)
  • Add write_parquet_partitioned (#17488)
  • Add example for writing hive partitioned parquet to user guide (#17483)
  • Fix typo in Getting Started section of user guide (#17465)

🛠️ Other improvements

  • Add DSL validation for cloud eligible check (#17287)
  • Add ArrayChunks to optimize codegen of BatchDecoder (#17632)
  • Move path logic from utils to path_utils in polars-io (#17635)
  • Fix struct gather (#17621)
  • Back to StructChunked name (#17609)
  • Remove unused with_column method of PyLazyFrame (#17607)
  • Re-enable struct related tests (#17597)
  • Completely redo structure of Parquet decoder (#17589)
  • Fix struct outer validity;fmt;is_in;cast;cmp (#17590)
  • Add/fix version-gating in some SQLAlchemy and Pandas tests (#17538)
  • Add style accessor to DataFrame (#17502)
  • Remove unused is_supported_cloud util (#17493)

Thank you to all our contributors for making this release possible!
@Julian-J-S, @MarcoGorelli, @alexander-beedie, @anergictcell, @arnabanimesh, @brandon-b-miller, @cmdlineluser, @coastalwhite, @deanm0000, @eitsupi, @flisky, @henryharbeck, @itamarst, @jonaylor89, @moritzwilksch, @nameexhaustion, @orlp, @phi-friday, @r-brink, @rcorty, @ritchie46, @ruihe774, @stinodego, @tylerriccio33 and @wence-

Python Polars 1.1.0

07 Jul 11:54

🚀 Performance improvements

  • Keep more parallelism when CSE plan cache hits (#17463)
  • Batch parquet primitive decoding (#17462)
  • Respect allow_threading in some more operators (#17450)
  • Parallelize parquet metadata deserialization (#17399)
  • Use underlying fileno for Python files when possible (#17315)
  • Add future arg to Series.to_arrow (#17371)

✨ Enhancements

  • Add "future" versioning (#17421)
  • Apply slice pushdown immediately to in-memory frames (#17459)
  • Support writing hive partitioned parquet (#17324)
  • Add right join support (#17441); see the sketch after this list
  • Support hive partitioning in scan_ipc (#17434)
  • Improve error message when passing string key to Series.__getitem__ (#17408)
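
A minimal sketch of the new right join; the frames are illustrative:

```python
import polars as pl

orders = pl.DataFrame({"id": [1, 2], "amount": [10.0, 20.0]})
customers = pl.DataFrame({"id": [2, 3], "name": ["ann", "bob"]})

# how="right" keeps every row of the right frame and fills missing left rows with nulls.
joined = orders.join(customers, on="id", how="right")
```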

🐞 Bug fixes

  • Handle DB cursor descriptions that contain more fields than the DBAPI2 standard (#17468)
  • Fix decimal dyn float supertype (#17464)
  • Verify the integrity of pandas column names before implied string conversion (#17433)
  • Don't rechunk on phys_repr (#17461)
  • Harden alchemy session for old sqlalchemy versions (#17366)
  • Fix swapping rename schema (#17458)
  • Make boolean reads consistent across all read_excel engines (#17448)
  • Raise on oob decimal precision (#17445)
  • Fix handling of TextIOWrapper in write_csv (#17328)
  • Support sa session (#17435)
  • Fix from_pandas for string columns with missing values (#17397)
  • Fix a global variable table-discovery edge case for the SQL interface (#17400)
  • Don't allow json inference method to be chunked/streaming (#17396)
  • Set literal nesting to 0 (#17392)
  • Fix scanning cloud paths with spaces (#17379)
  • Fix slice length no longer allowing None (#17372)
  • Fix typo in SchemaError exception message (#17350)
  • Raise proper error for mismatching parquet schema instead of panicking (#17321)

📖 Documentation

  • Add examples for scanning hive datasets to user guide (#17431)
  • Update partition_by docstring to match new behavior (#17394)
  • Update GroupBy.__iter__ docstring to match new behavior (#17383)

📦 Build system

  • Add support for NumPy 2.0 (#17384)

🛠️ Other improvements

  • Add automated check for PR title formatting (#17412)
  • Remove transmute for object store path (#17395)
  • Fix Python version resolver in release drafter (#17390)
  • Avoid use of np.trapz in tests to prepare for NumPy 2.0 (#17387)
  • Avoid writing to disk when running sink_csv test (#17386)

Thank you to all our contributors for making this release possible!
@alexander-beedie, @brunobbaraujo, @cmdlineluser, @coastalwhite, @dependabot, @dependabot[bot], @nameexhaustion, @orlp, @phi-friday, @ritchie46, @ruihe774, @sherlockbeard, @stinodego, @tylerriccio33 and @wence-

Rust Polars 0.41.3

02 Jul 09:37

🚀 Performance improvements

  • Improve unique performance by adding RangedUniqueKernel for primitive arrays (#17166)
  • Faster decode on Parquet HybridRLE (#17208)

✨ Enhancements

  • Add SQL support for NATURAL joins and the COLUMNS function (#17295)
  • Add str.extract_many expression (#17304)
  • Support '%' in pathnames for async scan (#17271)
  • Support SQL Struct/JSON field access operators (#17226)
  • Exclude directories from glob expansion result (#17174)
  • Support SQL ORDER BY ALL syntax (#17212)
  • Support PostgreSQL ^@ ("starts with"), and ~~,~~*,!~~,!~~* ("like", "ilike") string-matching operators (#17251)
  • Support SQL SELECT * ILIKE wildcard syntax (#17169)
  • Support SQL temporal functions STRFTIME and STRPTIME, and typed literal syntax (#17245)
  • Support date/datetime for hive parts (#17256)
  • Expose some more information in translated expression IR to python (#17209)
  • Allow no-op round/ceil/floor on integer types (#17241)
  • Support loading from datasets where the hive columns are also stored in the file (#17203)
  • Implement serde for Null columns (#17218)
  • Support Decimal types in write_csv/write_json (#14209)
  • Improve SQL support for array indexing, increase test coverage (#16972)
  • Support reading byte stream split encoded floats and doubles in parquet (#17099)
  • Add float_scientific option to write_csv/sink_csv (#17111)

🐞 Bug fixes

  • Raise proper error for mismatching parquet schema instead of panicking (#17321)
  • Raise on invalid shape dataframe arithmetic (#17322)
  • Fix panic in window case (#17320)
  • Raise errors instead of panicking when sink_csv fails (#17313)
  • Raise if join keys are passed to cross join (#17305)
  • Don't null on oob in list.get for column index (#17276)
  • Fix issue where sliced PyArrow record batches were not handled correctly (#17058)
  • Don't oob on nulls in list.get (#17262)
  • Fix list getter with nulls (#17261)
  • Respect nulls_last parameter in aggregate sort_by (#17249)
  • Fix literal slice in group by (#17242)
  • Fix DataFrame.top_k not handling nulls correctly (#17239)
  • Avoid using the regex dependency when the regex feature is not used (#17206)
  • Properly check the BMI2 uleb128 (#17191)

📖 Documentation

  • Minor layout/terminology improvement for selector set ops (#17299)
  • Fix polars-plan docs.rs build (#17266)
  • Add SQL docs for the CAST and TRY_CAST functions (#17214)

🛠️ Other improvements

  • Prefer ParquetError::oos to ParquetError::OutOfSpec (#17314)
  • Remove seqmacro and u8/u16 bitpack (#17290)
  • Fix typo in join validation error message (#17296)
  • Use typed iter in list.get (#17286)
  • Add ability to have pipeline blockers in new streaming engine (#17247)
  • Support date/datetime for hive parts (#17256)
  • Add elementwise select and with_columns to new streaming engine (#17185)
  • chrono's ParseErrorKind is now public (#17201)

Thank you to all our contributors for making this release possible!
@IvanIsCoding, @JamesCE2001, @MarcoGorelli, @SeanTater, @adamreeve, @alexander-beedie, @coastalwhite, @datapythonista, @flisky, @itamarst, @jqnatividad, @lukeshingles, @mcrumiller, @nameexhaustion, @orlp, @ritchie46, @stinodego and @wence-

Python Polars 1.0.0

01 Jul 10:40

This is the first major release for Python Polars. Please check out the upgrade guide for help navigating the breaking changes when upgrading to this version.

💥 Breaking changes

  • Change default engine for read_excel to "calamine" (#17263)
  • Implement binary serialization of LazyFrame/DataFrame/Expr and set it as the default format (#17223)
  • Streamline optional dependency definitions in pyproject.toml (#17168)
  • Update read/scan_parquet to disable Hive partitioning by default for file inputs (#17106)
  • Split replace functionality into two separate methods (#16921); see the sketch after this list
  • Default to writing binview data to IPC, mark compression argument as keyword-only (#17084)
  • Remove re-export of type aliases (#17032)
  • Rename ModuleUpgradeRequired and PolarsPanicError error, remove InvalidAssert error (#17033)
  • Change data orientation inference logic for DataFrame construction and warn when row orientation is inferred (#16976)
  • Properly apply strict parameter in Series constructor (#16939)
  • Remove supertype definition of List and non-List types (#16918)
  • Consistently convert to given time zone in Series constructor (#16828)
  • Update reshape to return Array types instead of List types (#16825)
  • Default to raising on out-of-bounds indices in all get/gather operations (#16841)
  • Native selector XOR set operation, guarantee consistent selector column-order (#16833)
  • Set infer_schema_length as keyword-only argument in str.json_decode (#16835)
  • Update set_sorted to only accept a single column (#16800)
  • Remove deprecated parameters in Series.cut/qcut and update struct field names (#16741)
  • Expedited removal of certain deprecated functionality (#16754)
  • Update some error types to more appropriate variants (#15030)
  • Scheduled removal of deprecated functionality (#16715)
  • Change default offset in group_by_dynamic from 'negative every' to 'zero' (#16658)
  • Constrain access to globals from DataFrame.sql in favor of top-level pl.sql (#16598)
  • Read 2D NumPy arrays as Array type instead of List (#16710)
  • Update clip to no longer propagate nulls in the given bounds (#14413)
  • Change str.to_datetime to default to microsecond precision for format specifiers "%f" and "%.f" (#13597)
  • Update resulting column names in pivot when pivoting by multiple values (#16439)
  • Preserve nulls in ewm_mean, ewm_std, and ewm_var (#15503)
  • Restrict casting for temporal data types (#14142)
  • Support Decimal types by default when converting from Arrow (#15324)
  • Remove serde functionality from pl.read_json and DataFrame.write_json (#16550)
  • Update function signature of nth to allow positional input of indices, remove columns parameter (#16510)
  • Rename struct fields of rle output to len/value and update data type of len field (#15249)
  • Remove class variables from some DataTypes (#16524)
  • Add check_names parameter to Series.equals and default to False (#16610)
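
A minimal sketch of the split replace API mentioned above: replace keeps unmatched values, while replace_strict requires a full mapping or a default and may change the dtype.

```python
import polars as pl

df = pl.DataFrame({"code": [1, 2, 3]})

# `replace` leaves unmatched values (and the original dtype) untouched.
partial = df.select(pl.col("code").replace({1: 10}))

# `replace_strict` maps every value (or falls back to `default`) and may change the dtype.
strict = df.select(pl.col("code").replace_strict({1: "a", 2: "b"}, default="other"))
```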

⚠️ Deprecations

  • Deprecate LazyFrame.fetch (#17278)
  • Deprecate size parameter in parametric testing strategies in favor of min_size/max_size (#17128)
  • Split replace functionality into two separate methods (#16921)
  • Rename DataFrame.melt to unpivot and make parameters consistent with pivot (#17095); see the sketch after this list
  • Remove re-export of exceptions at top-level (#17059)
  • Deprecate dt.mean/dt.median in favor of mean/median (#16888)
  • Deprecate LazyFrame.with_context in favor of horizontal concatenation (#16860)
  • Rename parameter descending to reverse in top_k methods (#16817)
  • Rename str.concat to str.join and update default delimiter (#16790)
  • Deprecate arctan2d in favor of arctan2(...).degrees() (#16786)
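
A minimal sketch of the renamed APIs (melt to unpivot, str.concat to str.join); the column names are illustrative:

```python
import polars as pl

df = pl.DataFrame({"id": [1, 2], "a": [3, 4], "b": [5, 6]})

# unpivot replaces melt: `on` selects the value columns, `index` the identifiers.
long = df.unpivot(index="id", on=["a", "b"])

# str.join replaces str.concat (the default delimiter is now the empty string).
joined = pl.DataFrame({"s": ["x", "y", "z"]}).select(pl.col("s").str.join("-"))
```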

🚀 Performance improvements

  • Rechunk before group_by iteration (#17302)
  • Improve unique performance by adding RangedUniqueKernel for primitive arrays (#17166)
  • Improve unique performance by creating UniqueKernel and improve bool implementation (#17160)
  • Default to writing binview data to IPC, mark compression argument as keyword-only (#17084)
  • Parallelize arrow conversion if binview -> large_bin (#17083)
  • Garbage collect buffers in if-then-else view kernel (#16993)
  • Desugar AND filter into multiple nodes (#16992)
  • Optimize generic arg_sort of row-encoding (#16894)
  • Improve rle_id iteration performance and set sorted flags (#16893)
  • Optimize sort for String and Binary types (#16871)
  • Use split_at in split (#16865)
  • Use split_at instead of double slice in chunk splits. (#16856)
  • Don't rechunk in align_ if arrays are aligned (#16850)
  • Don't create small chunks in parallel collect. (#16845)
  • Add dedicated no-null branch in arg_sort (#16808)
  • Speed up dt.offset_by 2x for constant durations (#16728)
  • Toggle coalesce in join if non-coalesced key isn't projected (#16677)
  • Make dt.truncate 1.5x faster when every is just a single duration (and not an expression) (#16666)
  • Always prune unused columns in semi/anti join (#16665)

✨ Enhancements

  • Add SQL support for NATURAL joins and the COLUMNS function (#17295)
  • Add str.extract_many expression (#17304)
  • Change default engine for read_excel to "calamine" (#17263)
  • Deprecate LazyFrame.fetch (#17278)
  • Support '%' in pathnames for async scan (#17271)
  • Support SQL Struct/JSON field access operators (#17226)
  • Exclude directories from glob expansion result (#17174)
  • Support SQL ORDER BY ALL syntax (#17212)
  • Support PostgreSQL ^@ ("starts with"), and ~~,~~*,!~~,!~~* ("like", "ilike") string-matching operators (#17251)
  • Support SQL SELECT * ILIKE wildcard syntax (#17169)
  • Support SQL temporal functions STRFTIME and STRPTIME, and typed literal syntax (#17245)
  • Support date/datetime for hive parts (#17256)
  • Implement binary serialization of LazyFrame/DataFrame/Expr and set it as the default format (#17223)
  • Allow no-op round/ceil/floor on integer types (#17241)
  • Support loading from datasets where the hive columns are also stored in the file (#17203)
  • Implement serde for Null columns (#17218)
  • Support Decimal types in write_csv/write_json (#14209)
  • Add optional "default" to get_column DataFrame method (#17176)
  • Improve SQL support for array indexing, increase test coverage (#16972)
  • Support reading byte stream split encoded floats and doubles in parquet (#17099)
  • Add float_scientific option to write_csv/sink_csv (#17111)
  • Support Struct field selection in the SQL engine, RENAME and REPLACE select wildcard options (#17109)
  • Update DataFrame.pivot to allow index=None when values is set (#17126)
  • Update read/scan_parquet to disable Hive partitioning by default for file inputs (#17106)
  • Improve ipython autocomplete for LazyFrame and DataFrame (#17091)
  • Split replace functionality into two separate methods (#16921)
  • Improve schema inference for hive partitions (#17079)
  • Rename DataFrame.melt to unpivot and make parameters consistent with pivot (#17095)
  • Print row index in explain and show_graph (#17074)
  • Support top-level pl.col autocompletion for iPython (#17080)
  • Remove re-export of exceptions at top-level (#17059)
  • Implement predicate and projection pushdown for read_ndjson (#17068)
  • Allow (non-)coalescing in join_asof (#17066)
  • Turn off coalescing and fix mutation of join-on expressions (#17061)
  • Expand NDJson glob into one SCAN (#17063)
  • Do not parse hive partitions from user provided base directory path (#17055)
  • Support directory paths in scans for Parquet, IPC and CSV (#17017)
  • Implement general array equality checks (#17043)
  • Add strict parameter to DataFrame/LazyFrame.drop and fix behavior to default to True (#17044)
  • Rename ModuleUpgradeRequired and PolarsPanicError error, remove InvalidAssert error (#17033)
  • Add rechunk parameter to read_delta (#16991)
  • Allow experimental metadata use on release (#17005)
  • Add simple version of json_normalize (#17015)
  • Change data orientation inference logic for DataFrame construction and warn when row orientation is inferred (#16976)
  • Desugar AND filter into multiple nodes (#16992)
  • Handle textio even if not correct (#16971)
  • Properly apply strict parameter in Series constructor (#16939)
  • Add SQL support for INTERSECT and EXCEPT ops (#16960)
  • Add PerformanceWarning to LazyFrame properties (#16964)
  • Add collect_schema method to LazyFrame and DataFrame (#16929); see the sketch at the end of this list
  • Allow setting file cache TTL on a per-file basis (#16891)
  • Support Decimal inputs for lit (#16950)
  • Implement multiply and division for lhs duration (#16948)
  • Raise on invalid temporal arithmetic (#16934)
  • Always end with an in-memory sink on collect (#16928)
  • Add DataFrame.style namespace (#16809)
  • Add Schema class (#16873)
  • Normalize value_counts (#16917)
  • Implement equality for more Array types (#16902)
  • Set up some of the infrastructure for new streaming engine (#16900)
  • Cache downloaded cloud IPC files (#16892)
  • Consistently convert to given time zone in Series constructor (#16828)
  • Improve read_csv SQL table reading function defaults (better date handling) (#16866)
  • Support SQL VALUES clause and inline renaming of columns in CTE & derived table definitions (#16851)
  • Support Python Enum values in lit (#16858)
  • Convert to given time zone in .str.to_datetime when values are offset-aware (#16742)
  • Update reshape to return Array types instead of List types (#16825)
  • Default to raising on out-of-bounds indices in all get/gather operations (#16841)
  • Support SQL "SELECT" with no tables, optimise registration of globals (#16836)
  • Native selector XOR set operation, guarantee consistent selector column-order (#16833)
  • Extend recognised EXTRACT and DATE_PART SQL part abbreviations (#167...
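
A minimal sketch of the new schema APIs from the (truncated) list above, assuming Schema compares like a mapping of column names to data types:

```python
import polars as pl

lf = pl.LazyFrame({"a": [1, 2], "b": ["x", "y"]})

# collect_schema resolves the plan's schema without materializing any data.
schema = lf.collect_schema()

# The new Schema class can also be constructed directly for comparisons.
expected = pl.Schema({"a": pl.Int64, "b": pl.String})
# `schema` and `expected` should describe the same columns and dtypes.
```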