BUG: pyarrow's csv reader yields different results than the default csv reader on strings that look like floating-point numbers #53269
Comments
Thanks for this, @Meehai. This looks like an issue with the pyarrow csv reader, i.e. it cannot be fixed in pandas. You could raise the issue on the arrow issue tracker.
I've opened an issue on their tracker as well: apache/arrow#35661. Thanks for the response. From a user's point of view it's a pretty nasty bug, especially when you expect the same behavior.
This is actually a bug in pandas (see also apache/arrow#35661 (comment)). Currently, pandas/pandas/io/parsers/arrow_parser_wrapper.py (lines 150 to 155 in 1720520) applies the requested dtype only after pyarrow has read the data.
But ideally we pass this information to the pyarrow csv reader itself. So in this case the data is first read as floats, and afterwards cast to string, meaning we lose some of the original precision that was in the file. Instead we should directly read it as a string from the file (by instructing pyarrow to do so).
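For illustration, a rough sketch of the difference at the pyarrow level (the actual code in arrow_parser_wrapper.py is more involved; `column_types` on `pyarrow.csv.ConvertOptions` is the mechanism referred to in the comments below):

```python
import io

import pyarrow as pa
from pyarrow import csv

buf = io.BytesIO(b"col\n1225717802.1679841607\n")

# Current behaviour, roughly: let pyarrow infer the type, then cast afterwards.
table = csv.read_csv(buf)                      # "col" is inferred as double
lossy = table.column("col").cast(pa.string())  # precision is already gone

# Reading the column as a string from the start preserves the original text.
buf.seek(0)
table = csv.read_csv(
    buf,
    convert_options=csv.ConvertOptions(column_types={"col": pa.string()}),
)
exact = table.column("col")
```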
To add some context to why the original implementation is the way it is: there is an issue where the integer location of the column cannot be used in place of the column name in general in pyarrow. (With the dtype dict in pandas, you can do something like referring to a column by its integer position instead of its name.) Long term, though, pushing this down to pyarrow is the way to go.
But shorter term, we could start by deferring to pyarrow's column_types only when the dict keys are strings, and fall back to casting afterwards only in the case of integer keys (or even raise an error that this is not yet implemented for the arrow engine).
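As a sketch of that split (hypothetical helper, not actual pandas code): name-keyed entries could be forwarded to pyarrow, while integer-keyed entries keep the current post-read cast.

```python
# Hypothetical helper sketching the proposed split (not actual pandas code).
def split_dtype_mapping(dtype: dict) -> tuple[dict, dict]:
    """Separate name-keyed entries (pushed down to pyarrow) from
    position-keyed entries (still cast after reading)."""
    by_name = {k: v for k, v in dtype.items() if isinstance(k, str)}
    by_position = {k: v for k, v in dtype.items() if not isinstance(k, str)}
    return by_name, by_position


push_down, cast_after = split_dtype_mapping({"a": "string", 1: "float64"})
# push_down  -> could become ConvertOptions(column_types=...) for pyarrow
# cast_after -> would still be applied with DataFrame.astype after reading
```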
I forgot to mention that we still do processing of the dtype after reading. Assuming pushing that through pyarrow works (there were failures last time), we should be able to mostly let pyarrow handle dtypes itself (like you said).
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
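A minimal sketch of the comparison described below, assuming a single-column CSV containing a long float-like string, read with dtype=str under both the default and the pyarrow engines:

```python
import io

import pandas as pd

# A float-like string with more digits than float64 can represent exactly.
data = "col\n1225717802.1679841607\n"

df_c = pd.read_csv(io.StringIO(data), dtype=str)                     # default C engine
df_pa = pd.read_csv(io.StringIO(data), dtype=str, engine="pyarrow")  # pyarrow engine

# The C engine returns the text unchanged; with the pyarrow engine the value
# is first inferred as a float and only then cast to string, losing precision.
assert df_c.loc[0, "col"] == "1225717802.1679841607"
assert df_pa.loc[0, "col"] == df_c.loc[0, "col"]  # fails on affected versions
```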
Issue Description
It seems that pyarrow's csv engine applies the dtype after it does some internal automatic type detection, leading to
1225717802.1679841607
being first interpreted as a float, then truncated, then reinterpreted as a string.
Expected Behavior
The asserts shouldn't fail; both engines should return the same string values as written in the file.
Installed Versions
INSTALLED VERSIONS
commit : 37ea63d
python : 3.9.13.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.0-1030-gcp
Version : #37~20.04.1-Ubuntu SMP Mon Feb 20 04:30:57 UTC 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.0.1
numpy : 1.23.3
pytz : 2022.2.1
dateutil : 2.8.2
setuptools : 63.4.1
pip : 22.1.2
Cython : None
pytest : 7.2.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.5.0
pandas_datareader: None
bs4 : 4.11.1
bottleneck : None
brotli : None
fastparquet : 2023.2.0
fsspec : 2022.8.2
gcsfs : None
matplotlib : 3.6.0
numba : 0.56.4
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 12.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.9.1
snappy : None
sqlalchemy : 1.4.41
tables : None
tabulate : 0.8.10
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : 2.2.1
pyqt5 : None