
BUG: pd.read_parquet with first batch full of Nones leads to exception #55731

Open
2 of 3 tasks
Meehai opened this issue Oct 27, 2023 · 3 comments
Labels
Bug IO Parquet parquet, feather

Comments


Meehai commented Oct 27, 2023

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd
from pathlib import Path
import shutil
import pyarrow.parquet as pq
from pyarrow import unify_schemas
from pyarrow.lib import ArrowNotImplementedError

data = {"funny_col": [None] * 200_000}
data["funny_col"][100_200] = "funny"  # one single non-null value, in the 2nd batch
df = pd.DataFrame(data)

df_split = [df.iloc[0:100_000], df.iloc[100_000:200_000]]
shutil.rmtree("funny.parquet", ignore_errors=True)
Path("funny.parquet").mkdir(exist_ok=False)
df_split[0].to_parquet("funny.parquet/funny-00.parquet")
df_split[1].to_parquet("funny.parquet/funny-01.parquet")

try:
    df2 = pd.read_parquet("funny.parquet")
except ArrowNotImplementedError:
    pass

# workaround
schemas = [pq.ParquetFile(f).schema_arrow for f in Path("funny.parquet").iterdir()]
schemas = unify_schemas(schemas)
df3 = pq.ParquetDataset("funny.parquet", schema=schemas).read().to_pandas()

assert df.equals(df3)

Issue Description

I think this is the reason:

If you have a batched dataset (in this case two files) where the first batch is entirely None (null) and the second is not, pandas does not call 'unify_schemas'; instead it fixes the dtypes from the first file's schema.

When it then reads the second file, it tries to cast the string column to null and raises the exception.

Expected Behavior

I would expect pd.read_parquet to unify the per-file schemas itself and return the same result as df3. The workaround above handles this, but it requires calling pyarrow's unify_schemas manually.

Installed Versions

INSTALLED VERSIONS

commit : 965ceca
python : 3.10.12.final.0
python-bits : 64
OS : Linux
OS-release : 6.2.0-34-generic
Version : #34~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Sep 7 13:12:03 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 2.0.2
numpy : 1.24.1
pytz : 2023.3
dateutil : 2.8.2
setuptools : 67.6.0
pip : 23.1.2
Cython : None
pytest : 7.2.2
hypothesis : None
sphinx : 6.2.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : 1.1
pymysql : None
psycopg2 : 2.9.9
jinja2 : 3.1.2
IPython : 8.11.0
pandas_datareader: None
bs4 : 4.12.0
bottleneck : None
brotli : 1.0.9
fastparquet : None
fsspec : 2023.6.0
gcsfs : 2023.6.0
matplotlib : 3.7.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 11.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
snappy : None
sqlalchemy : 1.4.46
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None

@Meehai Meehai added Bug Needs Triage Issue that has not been reviewed by a pandas team member labels Oct 27, 2023
@paulreece paulreece added IO Parquet parquet, feather and removed Needs Triage Issue that has not been reviewed by a pandas team member labels Oct 28, 2023
@paulreece (Contributor)

Confirmed on main:

>>> df2 = pd.read_parquet("funny.parquet")
Traceback (most recent call last):
...
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from string to null using function cast_null

@lukemanley (Member)

Alternatively, avoiding object dtype should help. In the example, the DataFrame is split manually and each half is written to parquet. Because the column has object dtype, type inference runs per file to assign the proper type: the first split is all nulls, so it is assigned the null type, while the second split contains one string, so it is assigned the string type.

If you change this line:

df = pd.DataFrame(data)

to:

df = pd.DataFrame(data, dtype="string[pyarrow]")

then the unify_schemas workaround is not needed.


Meehai commented Oct 29, 2023

Agreed, if you know the dtype ahead of time. In my case, this bug was triggered by a process that produces many chunks/batches: the original data source knows the dtype, but I only receive the chunks and need a way to unify them on the fly.
