Description
Feature Request / Improvement
Currently, if the INSERT clause specifies fewer columns than the target table has, the following exception is thrown:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot find column 'col_1' of the target table among the INSERT columns: col_2, col_3. INSERT clauses must provide values for all columns of the target table.
For a wide table with 1000 columns, the user has to list every column and supply NULL for the unused ones to avoid this exception. Can we support partial inserts in the MERGE INTO command (defaulting unspecified columns to NULL) so developers can keep their SQL statements clean? A sketch of the desired behavior is shown below.
For example, the Delta Lake MERGE command already supports this:
https://docs.databricks.com/sql/language-manual/delta-merge-into.html
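A minimal sketch of the requested behavior (table and column names here are hypothetical, chosen to match the exception above): today the statement below fails with the AnalysisException because col_1 is missing from the INSERT column list; with partial insert support, col_1 would simply default to NULL.

-- target has columns (col_1, col_2, col_3, ...); only a subset is inserted
MERGE INTO target t
USING source s
ON t.col_2 = s.col_2
WHEN NOT MATCHED THEN
  -- col_1 (and any other omitted columns) would default to NULL
  INSERT (col_2, col_3) VALUES (s.col_2, s.col_3)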
Query engine
Spark