Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
import pandas as pd
from pathlib import Path
import shutil
import pyarrow.parquet as pq
from pyarrow import unify_schemas
from pyarrow.lib import ArrowNotImplementedError

data = {"funny_col": [None] * 200_000}
data["funny_col"][100200] = "funny"  # one single non-null value in the 2nd batch
df = pd.DataFrame(data)
df_split = [df.iloc[0:100_000], df.iloc[100_000:200_000]]

shutil.rmtree("funny.parquet", ignore_errors=True)
Path("funny.parquet").mkdir(exist_ok=False)
df_split[0].to_parquet("funny.parquet/funny-00.parquet")
df_split[1].to_parquet("funny.parquet/funny-01.parquet")

try:
    df2 = pd.read_parquet("funny.parquet")  # raises ArrowNotImplementedError
except ArrowNotImplementedError:
    pass

# workaround: unify the per-file schemas manually before reading
schemas = [pq.ParquetFile(f).schema_arrow for f in Path("funny.parquet").iterdir()]
schema = unify_schemas(schemas)
df3 = pq.ParquetDataset("funny.parquet", schema=schema).read().to_pandas()
assert df.equals(df3)
Issue Description
I think this is the cause: with a batched dataset (here, two files), where the first batch is entirely null and the second is not, pandas does not unify the schemas the way pyarrow's unify_schemas would. Instead it fixes the dtypes from the first batch, whose all-null column is inferred as the null type. When it then reads the second batch, it tries to cast the string data to null and raises ArrowNotImplementedError.
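You can see the mismatch directly by inspecting the Arrow schema of each file written by the example (a quick check against the funny.parquet directory from the repro above):

import pyarrow.parquet as pq

# The two files disagree: the all-null file infers Arrow's null type,
# while the file containing "funny" infers string.
print(pq.ParquetFile("funny.parquet/funny-00.parquet").schema_arrow)  # funny_col: null
print(pq.ParquetFile("funny.parquet/funny-01.parquet").schema_arrow)  # funny_col: string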
Expected Behavior
pd.read_parquet should unify the per-file schemas and return the DataFrame without manual intervention. I added a workaround (see the reproducible example above), but it requires calling pyarrow's unify_schemas by hand.
Alternatively, if you can avoid object dtype it should help. In the example, the DataFrame is split manually and written to parquet. Since the column has object dtype, type inference runs to assign a proper Arrow type: the first split is all nulls, so it is assigned the null type, while the second split contains one string, so it is assigned the string type.
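For example, casting to a concrete dtype before writing sidesteps the inference entirely. A minimal sketch against the repro above (the "string" dtype is an assumption about what the real data holds):

# Give the column an explicit dtype so even an all-null chunk carries a
# concrete Arrow type instead of the inferred null type.
df_split = [chunk.astype({"funny_col": "string"}) for chunk in df_split]
df_split[0].to_parquet("funny.parquet/funny-00.parquet")
df_split[1].to_parquet("funny.parquet/funny-01.parquet")
df2 = pd.read_parquet("funny.parquet")  # now reads without error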
Agreed, if you know the dtype ahead of time. In my case this bug was triggered by a process that yields many chunks/batches. The original data source knows the dtype, but I only receive the chunks, so I need a way to unify them on the fly.
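For instance (a sketch only; chunks stands in for the hypothetical stream of incoming DataFrames):

import pyarrow as pa
from pyarrow import unify_schemas

# Fold each incoming chunk's inferred schema into a running unified schema;
# unify_schemas promotes the null type to the concrete type once one shows up.
unified = None
for chunk in chunks:  # hypothetical stream of pandas DataFrames
    schema = pa.Schema.from_pandas(chunk)
    unified = schema if unified is None else unify_schemas([unified, schema])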
Installed Versions
INSTALLED VERSIONS
commit : 965ceca
python : 3.10.12.final.0
python-bits : 64
OS : Linux
OS-release : 6.2.0-34-generic
Version : #34~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Sep 7 13:12:03 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.0.2
numpy : 1.24.1
pytz : 2023.3
dateutil : 2.8.2
setuptools : 67.6.0
pip : 23.1.2
Cython : None
pytest : 7.2.2
hypothesis : None
sphinx : 6.2.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : 1.1
pymysql : None
psycopg2 : 2.9.9
jinja2 : 3.1.2
IPython : 8.11.0
pandas_datareader: None
bs4 : 4.12.0
bottleneck : None
brotli : 1.0.9
fastparquet : None
fsspec : 2023.6.0
gcsfs : 2023.6.0
matplotlib : 3.7.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 11.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
snappy : None
sqlalchemy : 1.4.46
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None