This destination syncs data to Delta Lake on Databricks Lakehouse. It does so in two steps:
- Persist source data as staging files: Parquet files in S3, or CSV files in Azure Blob Storage.
- Create a Delta table from the staging files.
See this link for the nuances of the connector.
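The two-step flow above can be sketched in Python. This is an illustrative outline only, not the connector's actual implementation: the bucket layout, table name, and helper functions are hypothetical, and the `COPY INTO` statement shows the general shape of loading staged Parquet files into a Delta table.

```python
# Hypothetical sketch of the two-step load. The path layout and names are
# assumptions for illustration; only the COPY INTO SQL shape follows the
# documented Databricks syntax.

def staging_uri(bucket: str, stream: str, batch_id: int) -> str:
    """Step 1: build the S3 path where a batch of records is staged as Parquet."""
    return f"s3://{bucket}/staging/{stream}/batch_{batch_id}.parquet"

def copy_into_sql(table: str, uri: str) -> str:
    """Step 2: build the SQL that loads the staged files into a Delta table."""
    return (
        f"COPY INTO {table} "
        f"FROM '{uri}' "
        "FILEFORMAT = PARQUET"
    )

uri = staging_uri("my-bucket", "users", 1)
sql = copy_into_sql("lakehouse.users", uri)
print(uri)  # s3://my-bucket/staging/users/batch_1.parquet
print(sql)
```

In the real connector these two steps run against live S3 (or Azure Blob Storage) and a Databricks SQL endpoint; the sketch only shows how the staging location feeds the table-creation step.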