Spark3 structured streaming micro_batch read support #2660

Merged
40 commits merged on Jun 25, 2021

Changes from 37 commits
Commits (40)
41041f3
Spark3 structured streaming micro_batch read support
SreeramGarlapati Jun 2, 2021
51c9e87
integrate with spark3 checkpointing.
SreeramGarlapati Jun 4, 2021
1b7dbb0
remove guava dependency
SreeramGarlapati Jun 7, 2021
5a1020f
Merge branch 'master' of https://github.com/apache/iceberg into spark…
SreeramGarlapati Jun 7, 2021
6f57b55
fix NPE in SnapshotUtil
SreeramGarlapati Jun 8, 2021
6fe5657
rename OffsetLog to InitialOffsetStore
SreeramGarlapati Jun 8, 2021
15efe95
optimization: use snapshot summary to get number of added files inste…
SreeramGarlapati Jun 8, 2021
9c9b4de
handle the case when read stream on iceberg table source restarts fro…
SreeramGarlapati Jun 8, 2021
b4acade
refactor test code
SreeramGarlapati Jun 8, 2021
cab843b
mark initialOffset final
SreeramGarlapati Jun 8, 2021
5e59082
Merge branch 'master' of https://github.com/apache/iceberg into spark…
SreeramGarlapati Jun 10, 2021
fa85859
refresh table metadata before computing offsets.
SreeramGarlapati Jun 10, 2021
96aaa22
remove dependency on HDFSMetadataLog for checkpointing
SreeramGarlapati Jun 10, 2021
f286cee
reduce SparkScan.ReaderFactory visibility from public to package-private
SreeramGarlapati Jun 11, 2021
2ac1269
fix latestOffset value when latestOffset is the startingOffset and st…
SreeramGarlapati Jun 11, 2021
7ed9781
rename getFilesScanTasks to calculateFilesScanTasks
SreeramGarlapati Jun 11, 2021
6ee958c
remove unused instance variable - spark
SreeramGarlapati Jun 11, 2021
c101a9e
refactor SparkMicroBatchStream constructor
SreeramGarlapati Jun 11, 2021
ce16a76
remove unused log variable.
SreeramGarlapati Jun 11, 2021
baa03d6
remove unused imports.
SreeramGarlapati Jun 11, 2021
78e5bdb
replace the optimization to use snapshot.summary with data file enume…
SreeramGarlapati Jun 12, 2021
00fe477
checkstyle: remove unused imports
SreeramGarlapati Jun 12, 2021
cb4f200
confirm to codebase error message fmt's
SreeramGarlapati Jun 16, 2021
33bf3c5
refactor initialOffsetStore
SreeramGarlapati Jun 16, 2021
353f274
remove dependency on Hadoop's path - use SLASH instead
SreeramGarlapati Jun 16, 2021
f22a587
change the default behavior scanning the first snapshot of spark3 str…
SreeramGarlapati Jun 21, 2021
75e8430
Merge branch 'master' of https://github.com/apache/iceberg into spark…
SreeramGarlapati Jun 21, 2021
3c60ef8
rename variable in SnapshotUtil.snapshotAfter from pointer to current
SreeramGarlapati Jun 21, 2021
d671d5e
fix the javadoc of methods added to SnapshotUtil
SreeramGarlapati Jun 21, 2021
344ed1f
fix the javadoc of methods added to SnapshotUtil
SreeramGarlapati Jun 21, 2021
f33b26e
fix the javadoc of methods added to SnapshotUtil
SreeramGarlapati Jun 21, 2021
9b2fd4e
refactor SnapshotUtil.snapshotAfter
SreeramGarlapati Jun 22, 2021
cc7c887
Add streaming read tests for Catalog based tables
daksha121 Jun 24, 2021
da2528b
checkstyle
SreeramGarlapati Jun 24, 2021
fadd15c
replace usage of Streams with Iterables
SreeramGarlapati Jun 24, 2021
8bb4048
add comment to explain how streaming stop and resume is simulated
SreeramGarlapati Jun 24, 2021
12c675a
revert the usage of 'delete from' statement to the original appraoach…
SreeramGarlapati Jun 24, 2021
507f7d8
remove redundant creation of sparkSession object
SreeramGarlapati Jun 25, 2021
2baebb1
Remove unnecessary rules from tests
rdblue Jun 25, 2021
7467c0c
Remove unused imports.
rdblue Jun 25, 2021
38 changes: 38 additions & 0 deletions core/src/main/java/org/apache/iceberg/util/SnapshotUtil.java
@@ -25,6 +25,7 @@
import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;
import org.apache.iceberg.exceptions.ValidationException;
import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
import org.apache.iceberg.relocated.com.google.common.collect.Iterables;
import org.apache.iceberg.relocated.com.google.common.collect.Lists;

@@ -63,6 +64,19 @@ public static List<Long> currentAncestors(Table table) {
return ancestorIds(table.currentSnapshot(), table::snapshot);
}

/**
 * Traverses the history of the table's current snapshot and finds the oldest ancestor snapshot.
 * @return null if the table has no current snapshot, otherwise the oldest snapshot in the current history
 */
public static Snapshot oldestSnapshot(Table table) {
Snapshot current = table.currentSnapshot();
while (current != null && current.parentId() != null) {
current = table.snapshot(current.parentId());
}

return current;
}

/**
* Returns list of snapshot ids in the range - (fromSnapshotId, toSnapshotId]
* <p>
@@ -107,4 +121,28 @@ public static List<DataFile> newFiles(Long baseSnapshotId, long latestSnapshotId

return newFiles;
}

/**
 * Traverses the history of the table's current snapshot and finds the snapshot that has the given snapshot id
 * as its parent.
 * @return the snapshot for which the given snapshot is the parent
 * @throws IllegalArgumentException when the given snapshotId is not found in the table
 * @throws IllegalStateException when the given snapshotId is not an ancestor of the current table state
 */
public static Snapshot snapshotAfter(Table table, long snapshotId) {
Preconditions.checkArgument(table.snapshot(snapshotId) != null, "Cannot find parent snapshot: %s", snapshotId);

Snapshot current = table.currentSnapshot();
while (current != null && current.parentId() != null) {
if (current.parentId() == snapshotId) {
return current;
}

current = table.snapshot(current.parentId());
}

throw new IllegalStateException(
String.format("Cannot find snapshot after %s: not an ancestor of table's current snapshot", snapshotId));
}
}
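
A minimal sketch of how these two helpers compose, assuming a table with a linear snapshot history; `replay` is a hypothetical callback, not part of this PR:

// Walks the table's history oldest-first using the helpers above.
Snapshot snapshot = SnapshotUtil.oldestSnapshot(table);
while (snapshot != null) {
  replay(snapshot); // hypothetical per-snapshot processing
  if (snapshot.snapshotId() == table.currentSnapshot().snapshotId()) {
    break; // nothing exists after the current snapshot
  }
  snapshot = SnapshotUtil.snapshotAfter(table, snapshot.snapshotId());
}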
@@ -52,6 +52,7 @@
import org.apache.spark.sql.connector.read.Scan;
import org.apache.spark.sql.connector.read.Statistics;
import org.apache.spark.sql.connector.read.SupportsReportStatistics;
import org.apache.spark.sql.connector.read.streaming.MicroBatchStream;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;
import org.apache.spark.sql.vectorized.ColumnarBatch;
@@ -108,6 +109,12 @@ public Batch toBatch() {
return this;
}

@Override
public MicroBatchStream toMicroBatchStream(String checkpointLocation) {
return new SparkMicroBatchStream(
sparkContext, table, caseSensitive, expectedSchema, options, checkpointLocation);
}

@Override
public StructType readSchema() {
if (readSchema == null) {
@@ -213,10 +220,10 @@ public String description() {
return String.format("%s [filters=%s]", table, filters);
}

-private static class ReaderFactory implements PartitionReaderFactory {
+static class ReaderFactory implements PartitionReaderFactory {
private final int batchSize;

-private ReaderFactory(int batchSize) {
+ReaderFactory(int batchSize) {
this.batchSize = batchSize;
}

@@ -256,7 +263,7 @@ private static class BatchReader extends BatchDataReader implements PartitionRea
}
}

-private static class ReadTask implements InputPartition, Serializable {
+static class ReadTask implements InputPartition, Serializable {
private final CombinedScanTask task;
private final Broadcast<Table> tableBroadcast;
private final String expectedSchemaString;
@@ -0,0 +1,251 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

package org.apache.iceberg.spark.source;

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.apache.iceberg.CombinedScanTask;
import org.apache.iceberg.DataOperations;
import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.MicroBatches;
import org.apache.iceberg.MicroBatches.MicroBatch;
import org.apache.iceberg.Schema;
import org.apache.iceberg.SchemaParser;
import org.apache.iceberg.SerializableTable;
import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;
import org.apache.iceberg.io.CloseableIterable;
import org.apache.iceberg.io.FileIO;
import org.apache.iceberg.io.InputFile;
import org.apache.iceberg.io.OutputFile;
import org.apache.iceberg.relocated.com.google.common.base.Joiner;
import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
import org.apache.iceberg.relocated.com.google.common.collect.Iterables;
import org.apache.iceberg.relocated.com.google.common.collect.Lists;
import org.apache.iceberg.spark.Spark3Util;
import org.apache.iceberg.spark.SparkReadOptions;
import org.apache.iceberg.spark.source.SparkBatchScan.ReadTask;
import org.apache.iceberg.spark.source.SparkBatchScan.ReaderFactory;
import org.apache.iceberg.util.PropertyUtil;
import org.apache.iceberg.util.SnapshotUtil;
import org.apache.iceberg.util.TableScanUtil;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;
import org.apache.spark.sql.connector.read.InputPartition;
import org.apache.spark.sql.connector.read.PartitionReaderFactory;
import org.apache.spark.sql.connector.read.streaming.MicroBatchStream;
import org.apache.spark.sql.connector.read.streaming.Offset;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

import static org.apache.iceberg.TableProperties.SPLIT_LOOKBACK;
import static org.apache.iceberg.TableProperties.SPLIT_LOOKBACK_DEFAULT;
import static org.apache.iceberg.TableProperties.SPLIT_OPEN_FILE_COST;
import static org.apache.iceberg.TableProperties.SPLIT_OPEN_FILE_COST_DEFAULT;
import static org.apache.iceberg.TableProperties.SPLIT_SIZE;
import static org.apache.iceberg.TableProperties.SPLIT_SIZE_DEFAULT;

public class SparkMicroBatchStream implements MicroBatchStream {
private static final Joiner SLASH = Joiner.on("/");

private final Table table;
private final boolean caseSensitive;
private final String expectedSchema;
private final Broadcast<Table> tableBroadcast;
private final Long splitSize;
private final Integer splitLookback;
private final Long splitOpenFileCost;
private final boolean localityPreferred;
private final StreamingOffset initialOffset;

SparkMicroBatchStream(JavaSparkContext sparkContext, Table table, boolean caseSensitive,
Schema expectedSchema, CaseInsensitiveStringMap options, String checkpointLocation) {
this.table = table;
this.caseSensitive = caseSensitive;
this.expectedSchema = SchemaParser.toJson(expectedSchema);
this.localityPreferred = Spark3Util.isLocalityEnabled(table.io(), table.location(), options);
this.tableBroadcast = sparkContext.broadcast(SerializableTable.copyOf(table));

long tableSplitSize = PropertyUtil.propertyAsLong(table.properties(), SPLIT_SIZE, SPLIT_SIZE_DEFAULT);
this.splitSize = Spark3Util.propertyAsLong(options, SparkReadOptions.SPLIT_SIZE, tableSplitSize);

int tableSplitLookback = PropertyUtil.propertyAsInt(table.properties(), SPLIT_LOOKBACK, SPLIT_LOOKBACK_DEFAULT);
this.splitLookback = Spark3Util.propertyAsInt(options, SparkReadOptions.LOOKBACK, tableSplitLookback);

long tableSplitOpenFileCost = PropertyUtil.propertyAsLong(
table.properties(), SPLIT_OPEN_FILE_COST, SPLIT_OPEN_FILE_COST_DEFAULT);
this.splitOpenFileCost = Spark3Util.propertyAsLong(options, SparkReadOptions.FILE_OPEN_COST, tableSplitOpenFileCost);

InitialOffsetStore initialOffsetStore = new InitialOffsetStore(table, checkpointLocation);
this.initialOffset = initialOffsetStore.initialOffset();
}

@Override
public Offset latestOffset() {
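// Refresh to pick up snapshots committed since the last trigger; an empty table has no offset yet.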
table.refresh();
Snapshot latestSnapshot = table.currentSnapshot();
if (latestSnapshot == null) {
return StreamingOffset.START_OFFSET;
}

return new StreamingOffset(latestSnapshot.snapshotId(), Iterables.size(latestSnapshot.addedFiles()), false);
}

@Override
public InputPartition[] planInputPartitions(Offset start, Offset end) {
Preconditions.checkArgument(end instanceof StreamingOffset, "Invalid end offset: %s is not a StreamingOffset", end);
Preconditions.checkArgument(
start instanceof StreamingOffset, "Invalid start offset: %s is not a StreamingOffset", start);

if (end.equals(StreamingOffset.START_OFFSET)) {
return new InputPartition[0];
}

StreamingOffset endOffset = (StreamingOffset) end;
StreamingOffset startOffset = (StreamingOffset) start;

List<FileScanTask> fileScanTasks = planFiles(startOffset, endOffset);

CloseableIterable<FileScanTask> splitTasks = TableScanUtil.splitFiles(
CloseableIterable.withNoopClose(fileScanTasks),
splitSize);
List<CombinedScanTask> combinedScanTasks = Lists.newArrayList(
TableScanUtil.planTasks(splitTasks, splitSize, splitLookback, splitOpenFileCost));
InputPartition[] readTasks = new InputPartition[combinedScanTasks.size()];

for (int i = 0; i < combinedScanTasks.size(); i++) {
readTasks[i] = new ReadTask(
combinedScanTasks.get(i), tableBroadcast, expectedSchema,
caseSensitive, localityPreferred);
}

return readTasks;
}

@Override
public PartitionReaderFactory createReaderFactory() {
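// A batch size of 0 selects the row-based reader; vectorized (columnar) reads are not used for streaming here.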
return new ReaderFactory(0);
}

@Override
public Offset initialOffset() {
return initialOffset;
}

@Override
public Offset deserializeOffset(String json) {
return StreamingOffset.fromJson(json);
}

@Override
public void commit(Offset end) {
}

@Override
public void stop() {
}

private List<FileScanTask> planFiles(StreamingOffset startOffset, StreamingOffset endOffset) {
List<FileScanTask> fileScanTasks = Lists.newArrayList();
MicroBatch latestMicroBatch = null;
StreamingOffset batchStartOffset = StreamingOffset.START_OFFSET.equals(startOffset) ?
new StreamingOffset(SnapshotUtil.oldestSnapshot(table).snapshotId(), 0, false) :
startOffset;

do {
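// Each iteration generates the micro-batch for one snapshot, starting at batchStartOffset and
// advancing to the snapshot after the last fully consumed one, until the end offset's snapshot is read.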
StreamingOffset currentOffset =
latestMicroBatch != null && latestMicroBatch.lastIndexOfSnapshot() ?
new StreamingOffset(snapshotAfter(latestMicroBatch.snapshotId()), 0L, false) :
batchStartOffset;

latestMicroBatch = MicroBatches.from(table.snapshot(currentOffset.snapshotId()), table.io())
.caseSensitive(caseSensitive)
.specsById(table.specs())
.generate(currentOffset.position(), Long.MAX_VALUE, currentOffset.shouldScanAllFiles());

fileScanTasks.addAll(latestMicroBatch.tasks());
} while (latestMicroBatch.snapshotId() != endOffset.snapshotId());

return fileScanTasks;
}

private long snapshotAfter(long snapshotId) {
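// Streaming reads only support append snapshots; any other operation (overwrite, delete, replace) fails fast.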
Snapshot snapshotAfter = SnapshotUtil.snapshotAfter(table, snapshotId);

Preconditions.checkState(snapshotAfter.operation().equals(DataOperations.APPEND),
"Invalid Snapshot operation: %s, only APPEND is allowed.", snapshotAfter.operation());

return snapshotAfter.snapshotId();
}

private static class InitialOffsetStore {
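// Persists the stream's starting offset under <checkpointLocation>/offsets/0 so that a restarted
// query resumes from the same initial offset instead of recomputing it from the table's current state.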
private final Table table;
private final FileIO io;
private final String initialOffsetLocation;

InitialOffsetStore(Table table, String checkpointLocation) {
this.table = table;
this.io = table.io();
this.initialOffsetLocation = SLASH.join(checkpointLocation, "offsets/0");
}

public StreamingOffset initialOffset() {
InputFile inputFile = io.newInputFile(initialOffsetLocation);
if (inputFile.exists()) {
return readOffset(inputFile);
}

table.refresh();
StreamingOffset offset = table.currentSnapshot() == null ?
StreamingOffset.START_OFFSET :
new StreamingOffset(SnapshotUtil.oldestSnapshot(table).snapshotId(), 0, false);

OutputFile outputFile = io.newOutputFile(initialOffsetLocation);
writeOffset(offset, outputFile);

return offset;
}

private void writeOffset(StreamingOffset offset, OutputFile file) {
try (OutputStream outputStream = file.create()) {
BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(outputStream, StandardCharsets.UTF_8));
writer.write(offset.json());
writer.flush();
} catch (IOException ioException) {
throw new UncheckedIOException(
String.format("Failed writing offset to: %s", initialOffsetLocation), ioException);
}
}

private StreamingOffset readOffset(InputFile file) {
try (InputStream in = file.newStream()) {
return StreamingOffset.fromJson(in);
} catch (IOException ioException) {
throw new UncheckedIOException(
String.format("Failed reading offset from: %s", initialOffsetLocation), ioException);
}
}
}
}
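
For context, consuming an Iceberg table through this MicroBatchStream looks roughly like the following on the Spark side — a minimal sketch, assuming an existing SparkSession `spark`; the table name and checkpoint path are placeholders:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.streaming.StreamingQuery;

// Stream newly appended rows from an Iceberg table to the console.
Dataset<Row> events = spark.readStream()
    .format("iceberg")
    .load("db.events"); // placeholder table identifier

StreamingQuery query = events.writeStream()
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/db.events") // placeholder path
    .start(); // declares TimeoutException in Spark 3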
@@ -70,6 +70,7 @@ public class SparkTable implements org.apache.spark.sql.connector.catalog.Table,
private static final Set<TableCapability> CAPABILITIES = ImmutableSet.of(
TableCapability.BATCH_READ,
TableCapability.BATCH_WRITE,
TableCapability.MICRO_BATCH_READ,
TableCapability.STREAMING_WRITE,
TableCapability.OVERWRITE_BY_FILTER,
TableCapability.OVERWRITE_DYNAMIC);