Closed as not planned
Description
While updating the iceberg-flink-runtime version from 0.11.0 to 0.12.1, I am getting the following error:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: schemas must exist in format v2
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
Caused by: java.lang.IllegalArgumentException: schemas must exist in format v2
at org.apache.iceberg.relocated.com.google.common.base.Preconditions.checkArgument(Preconditions.java:413)
at org.apache.iceberg.TableMetadataParser.fromJson(TableMetadataParser.java:310)
at org.apache.iceberg.TableMetadataParser.read(TableMetadataParser.java:258)
at org.apache.iceberg.TableMetadataParser.read(TableMetadataParser.java:252)
at org.apache.iceberg.BaseMetastoreTableOperations.lambda$refreshFromMetadataLocation$0(BaseMetastoreTableOperations.java:179)
at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:405)
at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:214)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:198)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:190)
at org.apache.iceberg.BaseMetastoreTableOperations.refreshFromMetadataLocation(BaseMetastoreTableOperations.java:178)
at org.apache.iceberg.BaseMetastoreTableOperations.refreshFromMetadataLocation(BaseMetastoreTableOperations.java:160)
at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:200)
at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:94)
at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:77)
at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:93)
at org.apache.iceberg.catalog.Catalog.tableExists(Catalog.java:270)
at cdc.FlinkDebezium.getIcebergV2Table(FlinkDebezium.java:171)
at cdc.FlinkDebezium.run(FlinkDebezium.java:119)
at cdc.FlinkDebezium.build(FlinkDebezium.java:83)
at cdc.FlinkDebezium.main(FlinkDebezium.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
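For context, the exception originates in the v2 metadata validation inside `TableMetadataParser.fromJson`: when a metadata file declares format version 2, the parser requires a top-level `"schemas"` list. The stdlib-only Java sketch below is a deliberate simplification of that precondition (the real parser uses Jackson and Guava's `Preconditions.checkArgument`; this string-based check is only illustrative of why metadata upgraded by an older runtime can fail the 0.12.1 parser):

```java
// Simplified sketch of the v2 metadata precondition that produces
// "schemas must exist in format v2". Illustrative only: the real
// TableMetadataParser parses the JSON with Jackson.
public class MetadataCheck {
    static void validate(String metadataJson) {
        boolean isV2 = metadataJson.contains("\"format-version\":2")
                || metadataJson.contains("\"format-version\" : 2");
        boolean hasSchemas = metadataJson.contains("\"schemas\"");
        if (isV2 && !hasSchemas) {
            throw new IllegalArgumentException("schemas must exist in format v2");
        }
    }

    public static void main(String[] args) {
        // v2 metadata with a legacy single "schema" field but no "schemas" list
        String legacy = "{\"format-version\":2,\"schema\":{}}";
        try {
            validate(legacy);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints: schemas must exist in format v2
        }
    }
}
```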
My Iceberg tables were created in v2 format using the following snippet:
org.apache.iceberg.Table icebergTable = catalogLoader.loadCatalog()
    .createTable(tableIdentifier, schema, partitionSpec, tableProperties);

// Need to upgrade the format version to 2, otherwise writes fail with
// 'java.lang.IllegalArgumentException: Cannot write delete files in a v1 table'
TableOperations tableOperations = ((BaseTable) icebergTable).operations();
TableMetadata metadata = tableOperations.current();
tableOperations.commit(metadata, metadata.upgradeToFormatVersion(2));
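One alternative worth trying (assuming a sufficiently recent Iceberg release, where `TableProperties.FORMAT_VERSION` is honored at creation time): pass `format-version=2` in the table properties when the table is created, instead of committing a separate `upgradeToFormatVersion(2)` afterwards. A minimal sketch of building such a property map; only the map below is stdlib code, and the `createTable` call in the comment reuses `catalogLoader`, `tableIdentifier`, `schema`, and `partitionSpec` from the snippet above:

```java
import java.util.HashMap;
import java.util.Map;

public class V2TableProps {
    // Copy the caller's table properties and request format v2 at creation,
    // so no post-creation upgrade commit is needed. "format-version" is the
    // property key Iceberg reads when building new table metadata.
    static Map<String, String> v2TableProperties(Map<String, String> base) {
        Map<String, String> props = new HashMap<>(base);
        props.put("format-version", "2");
        return props;
    }

    public static void main(String[] args) {
        Map<String, String> props = v2TableProperties(new HashMap<>());
        System.out.println(props.get("format-version")); // prints: 2
        // Then, as in the snippet above:
        // catalogLoader.loadCatalog().createTable(tableIdentifier, schema,
        //     partitionSpec, props);
    }
}
```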