API: timestamp resolution inference - default to one unit (if possible) instead of being data-dependent? #58989
Comments
cc @pandas-dev/pandas-core
This sounds reasonable, and I think it could help simplify the implementation.
I think this would be an improvement. It seems like a good idea that anyone working with human-scale times (say, down to second precision) within the modern era would get the same resolution for their timestamps.
I don't think it will simplify things in general, because we still need the current inference logic for when the default unit does not fit. But from looking into it a bit, I also don't think it should make the code much more complex.
take |
Based on discussions, I will update |
@jorisvandenbossche would updating the |
@Pranav-Wadhwa nanoseconds is what we used previously, so I don't think we want to go back to that. The OP suggests microseconds as a default resolution, although I'm not sure it's as simple as changing the to_datetime signature either. Before diving into the details, I think we should get some more agreement from the pandas core team. @jbrockmendel is our datetime guru, so let's see if he has any thoughts first.
I'm fine with the OP's suggestion as long as we are internally consistent, i.e. also in the Timestamp constructor.
@jbrockmendel what do you mean by the Timestamp constructor? If we set the default value of
I would prefer going even further: don't automatically fall back if
@Pranav-Wadhwa - I believe there are many places this would need to change in pandas to be consistent, beyond just
I believe the scope of this work is outside my knowledge, as this is my first issue with pandas. If it's helpful to future assignees, I found that the
After #55901, we now infer the best resolution, and so allow creating non-nanosecond data by default (instead of raising for out-of-bounds data).
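At the scalar level, this behavior can already be seen in released pandas 2.x, where the Timestamp constructor applies the same kind of inference (a small illustration only; the array-level change from #55901 itself is separate):

```python
import pandas as pd

# "2300-01-01" is outside the datetime64[ns] range
# (roughly [1677-09-21, 2262-04-11]), so it used to raise
# OutOfBoundsDatetime; with resolution inference it is simply
# stored at a coarser unit instead.
ts = pd.Timestamp("2300-01-01")
print(ts.unit)  # 's'
```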
To be clear, it is a very nice improvement to stop raising those OutOfBounds errors when the timestamp would fit perfectly well in another resolution. But I do think we could maybe reconsider the exact logic for determining the resolution.
With the latest changes you get the following:
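(The concrete output from the original report is not reproduced here. As an illustration of the value-dependent inference, using the scalar Timestamp constructor, which has behaved this way since pandas 2.0:)

```python
import pandas as pd

# The same input *type* (a string) yields different units depending
# on the precision of the value itself:
print(pd.Timestamp("2024-01-01").unit)                     # 's'
print(pd.Timestamp("2024-01-01 10:00:00.123").unit)        # 'ms'
print(pd.Timestamp("2024-01-01 10:00:00.123456789").unit)  # 'ns'
```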
The resulting dtype instance depends on the exact input value (not type). I do think this has some downsides:
The fact that pandas by default truncates the string repr of datetimes (i.e. we don't show the subsecond parts if they are all zero, regardless of the actual resolution), in contrast to numpy, also means that round-tripping through a text representation (e.g. CSV) will very often lead to a change in dtype.
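A small sketch of that round-trip problem (assuming pandas >= 2.0 for the non-nanosecond dtype support):

```python
import numpy as np
import pandas as pd

s = pd.Series(["2024-01-01 10:00:00"]).astype("datetime64[ms]")
print(s)
# The trailing ".000" milliseconds are not shown in the repr, so
# writing this column to CSV and reading it back would infer a
# second resolution, not the original milliseconds.

# numpy, by contrast, always renders the unit's full precision:
print(np.datetime64("2024-01-01T10:00:00", "ms"))
# 2024-01-01T10:00:00.000
```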
As a potential alternative, we could also decide to have a fixed default resolution (e.g. microseconds), and then the logic for inferring the resolution could be: try to use the default resolution, and only if that does not work (either out of bounds or too much precision, i.e. nanoseconds present), use the inferred resolution from the data.
That still gives some value-dependent behaviour, but I think it would make it a lot less common to see. And a resolution like microseconds is sufficient for the vast majority of use cases (in terms of the bounds it supports: [290301 BC, 294241 AD]).
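The proposed rule could be sketched roughly as follows (`choose_unit` and the nanosecond-offset representation are hypothetical, purely for illustration; a real implementation would live in pandas' datetime conversion code):

```python
DEFAULT_UNIT = "us"  # the proposed fixed default resolution

# int64 range of microsecond epoch offsets, i.e. the bounds of
# the datetime64[us] dtype.
US_MIN, US_MAX = -(2**63), 2**63 - 1

def choose_unit(epoch_ns, inferred_unit):
    """Hypothetical sketch: prefer the default resolution, and only
    fall back to the data-inferred unit when the default would lose
    precision or cannot represent the values.

    epoch_ns: epoch offsets in nanoseconds, as Python ints (so values
    outside the int64 nanosecond range remain representable).
    inferred_unit: what the current data-dependent inference picks.
    """
    # Too much precision: sub-microsecond components present.
    if any(v % 1000 for v in epoch_ns):
        return inferred_unit
    # Out of bounds for the default microsecond resolution.
    if any(not (US_MIN <= v // 1000 <= US_MAX) for v in epoch_ns):
        return inferred_unit
    return DEFAULT_UNIT

print(choose_unit([1_700_000_000_000_000_000], "s"))   # 'us': fits the default
print(choose_unit([1_700_000_000_000_000_123], "ns"))  # 'ns': sub-us precision
```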