Flink Support for TIMESTAMP_NANOS #11348
Conversation
Do the schema conversions handle the nano ts correctly? Do we get a …
(force-pushed from 0fb23b1 to d54daa9)
👍🏻
case TIMESTAMP_NANO:
  Types.TimestampNanoType timestampNano = (Types.TimestampNanoType) primitive;
  if (timestampNano.shouldAdjustToUTC()) {
    // NANOS
Why is this comment here?
To be consistent with the comments already present for MICROS, i.e.:
if (timestamp.shouldAdjustToUTC()) {
  // MICROS
  return new LocalZonedTimestampType(6);
} else {
  // MICROS
  return new TimestampType(6);
}
    // NANOS
    return new LocalZonedTimestampType(9);
  } else {
    // NANOS
Please remove the comment.
Please see above.
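Reading the diff fragments together with the MICROS pattern above, the full TIMESTAMP_NANO branch would presumably read as follows once this feedback is applied. This is a reconstruction, not verbatim from the PR; the else-branch return is inferred from the MICROS handling:

case TIMESTAMP_NANO:
  Types.TimestampNanoType timestampNano = (Types.TimestampNanoType) primitive;
  if (timestampNano.shouldAdjustToUTC()) {
    // NANOS
    return new LocalZonedTimestampType(9);
  } else {
    // NANOS (inferred: mirrors the MICROS else-branch at nanosecond precision)
    return new TimestampType(9);
  }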
@Override
public TimestampData read(TimestampData ignored) {
  long value = readLong();
  return TimestampData.fromInstant(Instant.ofEpochSecond(0, value));
How could we cross-check that Spark write/Flink read and Flink write/Spark read work correctly?
Are we sure that this is the expected data format?
Maybe the generic Java writer tests?
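As a minimal sketch of the symmetry in question, assuming the on-disk value is a single long counting nanoseconds since the epoch (the writer-side encoding here is illustrative, not the PR's writer code):

import java.time.Instant;
import org.apache.flink.table.data.TimestampData;

// Writer side (illustrative): encode an Instant as nanos since the epoch.
Instant original = Instant.parse("2024-01-15T10:30:00.123456789Z");
long nanosSinceEpoch =
    Math.addExact(
        Math.multiplyExact(original.getEpochSecond(), 1_000_000_000L), original.getNano());

// Reader side: mirrors the read() implementation above.
TimestampData decoded = TimestampData.fromInstant(Instant.ofEpochSecond(0, nanosSinceEpoch));

// Full nanosecond precision should survive the round trip.
assert decoded.toInstant().equals(original);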
LocalDateTime localDateTime = (LocalDateTime) r.get(0);
minValues.merge(
    "timestamp_column", localDateTime.toInstant(ZoneOffset.UTC).toEpochMilli(), Math::min);
This is only comparing millis; shall we test with nanos? There are several tests which are using …
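One way to exercise nanosecond precision in this assertion, sketched under the assumption that the min/max stats can be tracked as nanos-since-epoch longs (the conversion arithmetic below is illustrative, not from the PR):

LocalDateTime localDateTime = (LocalDateTime) r.get(0);
// Track at nanosecond rather than millisecond precision.
long nanos =
    Math.addExact(
        Math.multiplyExact(localDateTime.toEpochSecond(ZoneOffset.UTC), 1_000_000_000L),
        localDateTime.getNano());
minValues.merge("timestamp_column", nanos, Math::min);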
This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that's incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the dev@iceberg.apache.org list. Thank you for your contributions.
This pull request has been closed due to lack of activity. This is not a judgement on the merit of the PR in any way. It is just a way of keeping the PR queue manageable. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.
Adds support for the new TIMESTAMP_NANOS type (v3 format) in Flink, including watermark generation.
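Note on the watermark path: Flink's Watermark API is millisecond-based, so a nanosecond event-time column presumably has to be truncated before it drives watermark generation. A minimal sketch of that truncation (toWatermarkMillis is a hypothetical helper, not a name from this PR):

// Hypothetical helper: truncate a nanos-since-epoch value to the epoch
// milliseconds that Flink's Watermark API expects.
static long toWatermarkMillis(long nanosSinceEpoch) {
  return Math.floorDiv(nanosSinceEpoch, 1_000_000L);
}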