GH-37118 [Java][arrow-jdbc] Support converting JDBC TIMESTAMP_WITH_TIMEZONE to Arrow #37088
Conversation
Is it possible to add a unit test? There are tests that use H2 (to avoid spinning up a full DB). That said, what I think Arrow needs to do is integration-test against actual databases. (I am considering just vendoring this code into ADBC, which is already set up to do that, and eliminating the Calendar jank here...)
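For illustration, a minimal H2-backed sketch might look roughly like this (the table, column, and class names are made up, and it assumes H2 reports the column as `java.sql.Types.TIMESTAMP_WITH_TIMEZONE`):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Calendar;
import java.util.TimeZone;

import org.apache.arrow.adapter.jdbc.JdbcToArrowUtils;
import org.apache.arrow.vector.types.pojo.Schema;

public class TimestampTzSchemaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:tz_test");
         Statement stmt = conn.createStatement()) {
      stmt.execute("CREATE TABLE t (ts TIMESTAMP WITH TIME ZONE)");
      stmt.execute("INSERT INTO t VALUES ('2023-08-10 12:00:00+02')");
      try (ResultSet rs = stmt.executeQuery("SELECT ts FROM t")) {
        // Derive the Arrow schema from the JDBC metadata; with this change the
        // calendar's timezone should show up on the TIMESTAMP_WITH_TIMEZONE column.
        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        Schema schema = JdbcToArrowUtils.jdbcToArrowSchema(rs.getMetaData(), utc);
        System.out.println(schema); // expect something like ts: Timestamp(MILLISECOND, UTC)
      }
    }
  }
}
```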
    }
    return new ArrowType.Timestamp(TimeUnit.MILLISECOND);
  case Types.TIMESTAMP_WITH_TIMEZONE:
    final String timezone = calendar == null ? null : calendar.getTimeZone().getID();
Hmm, there is still an underlying timezone, right? I guess we just don't know without writing database-specific code. It might be better to error in this case?
Yeah, this one is tough. As I understand it, TIMESTAMP_WITH_TIMEZONE means that each record includes its own timezone as a UTC offset (or at least, that seems to be the case with Snowflake). Based on the JDBC spec, `ResultSet.getTimestamp(int columnIndex, Calendar cal)` does the following: "This method uses the given calendar to construct an appropriate millisecond value for the timestamp if the underlying database does not store timezone information." This leads me to believe that it parses the offset out of the underlying DB and uses that to convert to UTC. If no offset is available, it assumes the timestamp is in whatever timezone is passed in the calendar and then converts it to UTC. Here's an interesting SO answer on the topic: https://stackoverflow.com/a/63078938/1815486
From my understanding, ArrowType.Timestamp does not support per record timezones and it does not do any conversions on its own. It seems like the timezone is associated with the Vector itself and all records are expected to be in that same timezone. This means we must convert TIMESTAMP_WITH_TIMEZONE values into one specific TZ before we add them to the vector. It's also not exactly clear what it means when the TZ is null (I'm assuming this means it's just a "wall clock" time).
This all leaves some questions:
- What is the TZ associated with `calendar` supposed to represent for these functions?
  - The TZ to assume the underlying values are in
  - The TZ to report on the ArrowType
  - The TZ that we should convert values to
There are some important implications here, because it depends on whether one is using the arrow-jdbc library to convert values server-side, where it's generally safe to use/assume UTC for a lot of things, or client-side, where they might want to express values in the user's timezone.
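To make the "one timezone per vector, not per record" point concrete, here's a small sketch (not code from this PR) of how a timezone-aware timestamp column is represented on the Arrow side:

```java
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.TimeStampMilliTZVector;

public class VectorLevelTimezoneSketch {
  public static void main(String[] args) {
    try (BufferAllocator allocator = new RootAllocator();
         // The timezone is part of the vector's type; every record shares it.
         TimeStampMilliTZVector vector =
             new TimeStampMilliTZVector("ts", allocator, "UTC")) {
      // Each value is just an epoch offset in milliseconds, so any per-record
      // offset coming from the database has to be folded in before writing here.
      vector.setSafe(0, 1_691_668_800_000L); // 2023-08-10T12:00:00Z
      vector.setValueCount(1);
      System.out.println(vector.getField().getType()); // Timestamp(MILLISECOND, UTC)
    }
  }
}
```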
In Arrow, the underlying representation for a timestamp with timezone is always UTC. So if the driver is giving us UTC and converting it, we should always bypass their conversion and construct a timestamp[ms, UTC].
And yes, if there's no timezone, then it's a wall-clock time.
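Concretely, just as a sketch of that idea (not the code in this diff), the type mapping could look something like:

```java
import java.sql.Types;

import org.apache.arrow.vector.types.TimeUnit;
import org.apache.arrow.vector.types.pojo.ArrowType;

public class TimestampTypeMappingSketch {
  // Sketch: always report UTC for timezone-aware JDBC timestamps, since the
  // Arrow representation of timestamp-with-timezone values is UTC-normalized.
  static ArrowType mapTimestamp(int jdbcType) {
    switch (jdbcType) {
      case Types.TIMESTAMP_WITH_TIMEZONE:
        return new ArrowType.Timestamp(TimeUnit.MILLISECOND, "UTC");
      case Types.TIMESTAMP:
        // Plain TIMESTAMP stays a wall-clock time: no timezone on the type.
        return new ArrowType.Timestamp(TimeUnit.MILLISECOND, null);
      default:
        throw new UnsupportedOperationException("not a timestamp type: " + jdbcType);
    }
  }

  public static void main(String[] args) {
    System.out.println(mapTimestamp(Types.TIMESTAMP_WITH_TIMEZONE));
    System.out.println(mapTimestamp(Types.TIMESTAMP));
  }
}
```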
Ok, so Snowflake:

> TIMESTAMP_LTZ internally stores UTC time with a specified precision. However, all operations are performed in the current session’s time zone, controlled by the TIMEZONE session parameter.
>
> TIMESTAMP_TZ internally stores UTC time together with an associated time zone offset. When a time zone is not provided, the session time zone offset is used. All operations are performed with the time zone offset specific to each record.

Postgres:

> All timezone-aware dates and times are stored internally in UTC.

So the underlying value should be UTC. I think you'll have to look at what their JDBC driver specifically does, though: it might localize the value for you. I think you may need database-specific converters?
Hmmm, maybe we can change the consumer to do something smarter than getTimestamp. This SO answer suggests `OffsetDateTime odt = myResultSet.getObject(…, OffsetDateTime.class);`.
So for TZ-aware timestamps we could use that, and if no offset is available assume UTC. I think that would work for most things, but it'd get tricky for something like TIMESTAMP_LTZ. I wonder how that even comes across through the JDBC driver? I imagine it's still of type TIMESTAMP_WITH_TIMEZONE, just like TIMESTAMP_TZ. We might want to add the value from `rsmd.getColumnTypeName` to the JdbcFieldInfo as well, so users can disambiguate between the two if they need to write a custom converter.
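A rough sketch of what such a consumer could look like (hypothetical helper; whether `getObject(…, OffsetDateTime.class)` works depends on the driver):

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.time.OffsetDateTime;

import org.apache.arrow.vector.TimeStampMilliTZVector;

public class OffsetDateTimeConsumerSketch {
  /**
   * Sketch of the idea: read the column as an OffsetDateTime (so the driver
   * reports the per-record offset when it has one) and normalize to UTC epoch
   * millis before writing into a timestamp[ms, UTC] vector.
   */
  static void consume(ResultSet rs, int columnIndex,
                      TimeStampMilliTZVector vector, int rowIndex) throws SQLException {
    OffsetDateTime odt = rs.getObject(columnIndex, OffsetDateTime.class);
    if (odt == null) {
      vector.setNull(rowIndex);
    } else {
      // toInstant() applies the record's own offset, so the stored value is UTC.
      vector.setSafe(rowIndex, odt.toInstant().toEpochMilli());
    }
  }
}
```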
I ran into the same issue: #35916
So agreed, we need more info than what JdbcFieldInfo has. (ADBC's Java adapter already has a workaround to add the extra fields.) And yeah, I think trying to get OffsetDateTime or similar would be better.
That's part of why I want to vendor this into the ADBC driver, iterate on it + test it against actual databases, and then maybe send the changes back...
Cool, makes sense. I'll try to modify the consumer to do something smarter and correctly convert values into UTC.
I'll make a separate PR to add more data into JdbcFieldInfo.
WRT "I want to vendor this into the ADBC driver... and then maybe send the changes back": you mean redesign the JDBC-to-Arrow conversion within ADBC and eventually pull that back out into the arrow-jdbc package?
Yeah. Though realistically, I don't have the time to do that right now so thank you for fixing up these things 😅
Rationale for this change

We want to support converting TIMESTAMP_WITH_TIMEZONE JDBC fields into Arrow.

What changes are included in this PR?

- Convert TIMESTAMP_WITH_TIMEZONE fields into Arrow, and include the provided timezone information.
- Ensure timezone information is not included in the Arrow object for regular TIMESTAMP fields.

Are there any user-facing changes?

Potentially, depending on how users are configuring the calendar.
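For example, wiring the calendar through the config-based entry point might look like this (the H2 setup and names are purely illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Calendar;
import java.util.TimeZone;

import org.apache.arrow.adapter.jdbc.ArrowVectorIterator;
import org.apache.arrow.adapter.jdbc.JdbcToArrow;
import org.apache.arrow.adapter.jdbc.JdbcToArrowConfig;
import org.apache.arrow.adapter.jdbc.JdbcToArrowConfigBuilder;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.VectorSchemaRoot;

public class CalendarConfigExample {
  public static void main(String[] args) throws Exception {
    try (RootAllocator allocator = new RootAllocator();
         Connection conn = DriverManager.getConnection("jdbc:h2:mem:example");
         Statement stmt = conn.createStatement()) {
      stmt.execute("CREATE TABLE t (ts TIMESTAMP WITH TIME ZONE)");
      stmt.execute("INSERT INTO t VALUES ('2023-08-10 12:00:00+02')");
      // The calendar is the user-facing knob: with this change its timezone is
      // reported on TIMESTAMP_WITH_TIMEZONE columns but no longer on plain TIMESTAMP.
      JdbcToArrowConfig config = new JdbcToArrowConfigBuilder(
          allocator, Calendar.getInstance(TimeZone.getTimeZone("UTC"))).build();
      try (ResultSet rs = stmt.executeQuery("SELECT ts FROM t");
           ArrowVectorIterator it = JdbcToArrow.sqlToArrowVectorIterator(rs, config)) {
        while (it.hasNext()) {
          try (VectorSchemaRoot root = it.next()) {
            System.out.println(root.getSchema());
          }
        }
      }
    }
  }
}
```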
This might also be related to #36519, since we need to ensure handling of timestamps is consistent when converting either JDBC -> Arrow or Arrow -> JDBC.