Closed
Description
Unlike the Spark catalog table (where the Table object is only required on the client/driver side), Flink needs to obtain the Table object in the Job Manager or in a Task:
- For the writer (Flink: Add the iceberg files committer to collect data files and commit to iceberg table. #1185): Flink needs to obtain the Table in the committer task for appending files (see the committer sketch after this list).
- For the reader (Flink: Implement Flink InputFormat and integrate it to FlinkCatalog #1293): Flink needs to obtain the Table in the Job Manager for planning tasks.
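
To make the writer case concrete, here is a minimal sketch of a committer function that ships only Serializable state to the cluster and loads the Table inside open(), using the CatalogLoader interface proposed below. The class name IcebergFileCommitter, the per-record commit, and the default Hadoop conf are hypothetical simplifications, not the actual committer design:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.iceberg.DataFile;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;

// Hypothetical committer function: only Serializable state (the loader and the
// table name) is shipped to the cluster; the Table itself is loaded in open(),
// which runs on the Task Manager.
public class IcebergFileCommitter extends RichSinkFunction<DataFile> {

  private final CatalogLoader catalogLoader;  // the interface proposed below
  private final String tableName;             // e.g. "db.table"
  private transient Table table;

  public IcebergFileCommitter(CatalogLoader catalogLoader, String tableName) {
    this.catalogLoader = catalogLoader;
    this.tableName = tableName;
  }

  @Override
  public void open(Configuration parameters) {
    // Simplification: build a default Hadoop conf on the task side; in practice
    // the Hadoop configuration would also be shipped or discovered on the cluster.
    org.apache.hadoop.conf.Configuration hadoopConf = new org.apache.hadoop.conf.Configuration();
    this.table = catalogLoader.loadCatalog(hadoopConf).loadTable(TableIdentifier.parse(tableName));
  }

  @Override
  public void invoke(DataFile file, Context context) {
    // Simplified: a real committer would collect data files and commit them on
    // checkpoint completion rather than once per record.
    table.newAppend().appendFile(file).commit();
  }
}
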
So we can introduce a CatalogLoader for the reader and writer, and users can define a custom catalog loader in FlinkCatalogFactory:
public interface CatalogLoader extends Serializable {
  Catalog loadCatalog(Configuration hadoopConf);
}
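
As an example, a minimal HadoopCatalog-backed implementation could look like the sketch below. HadoopCatalogLoader is a hypothetical name used only for illustration; the point is that the loader carries nothing but Serializable state (the warehouse path), and the catalog is re-created wherever loadCatalog runs:

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.hadoop.HadoopCatalog;

// Hypothetical implementation: only the warehouse path (a plain String) is
// serialized with the job; the HadoopCatalog is re-created on the cluster side.
public class HadoopCatalogLoader implements CatalogLoader {
  private final String warehouseLocation;

  public HadoopCatalogLoader(String warehouseLocation) {
    this.warehouseLocation = warehouseLocation;
  }

  @Override
  public Catalog loadCatalog(Configuration hadoopConf) {
    return new HadoopCatalog(hadoopConf, warehouseLocation);
  }
}

The reader and writer would then hold only this Serializable loader plus a table identifier, and call loadCatalog(hadoopConf).loadTable(identifier) once they are running in the Job Manager or in a task.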