
[Feature] support create iceberg table #21378

Merged 3 commits into StarRocks:main on Apr 12, 2023

Conversation

stephen-shelby (Contributor) commented Apr 11, 2023

What type of PR is this:

  • BugFix
  • Feature
  • Enhancement
  • Refactor
  • UT
  • Doc
  • Tool

Which issues does this PR fix:

Fixes #
Supports creating an Iceberg table under an Iceberg catalog. The table is synced to the user's metadata service, so Spark and Hive can also query it.
usage:

  • create external table iceberg_catalog.iceberg_db.iceberg_table (c1 int, c2 int);
  • create external table iceberg_db.iceberg_table (c1 int, c2 int); -- the current context catalog is an Iceberg catalog
  • create external table iceberg_table (c1 int, c2 int); -- the current context catalog and database are both Iceberg
  • create external table iceberg_table (c1 int, c2 int) engine = iceberg;
  • create external table iceberg_table (c1 int, c2 int, c3 int) engine = iceberg partition by (c2, c3);
  • create external table iceberg_table (c1 int, c2 int) partition by (c2) properties ("location"="hdfs://hadoop:9000/user/warehouse/iceberg_db.db/iceberg_table", "file_format"="parquet");
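Putting the pieces above together, a minimal end-to-end sketch might look like the following. The catalog, database, and table names are illustrative, and it assumes an Iceberg catalog named `iceberg_catalog` with database `iceberg_db` is already configured in StarRocks:

```sql
-- Illustrative names; assumes the Iceberg catalog "iceberg_catalog" and
-- database "iceberg_db" already exist.
CREATE EXTERNAL TABLE iceberg_catalog.iceberg_db.sales (
    id     INT,
    region INT,
    dt     INT
)
ENGINE = iceberg
-- identity partitioning on two columns, per the new grammar
PARTITION BY (region, dt)
PROPERTIES (
    "location"    = "hdfs://hadoop:9000/user/warehouse/iceberg_db.db/sales",
    "file_format" = "parquet"
);
```

Because the table is synced to the metastore, it should then also be visible to Spark or Hive (e.g. `SELECT * FROM iceberg_db.sales` from spark-sql).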

#21502

Problem Summary (Required):

Checklist:

  • I have added test cases for my bug fix or my new feature
  • This PR will affect users' behaviors
  • This PR needs user documentation (for new or modified features or behaviors)
    • I have added documentation for my new feature or new function

Bugfix cherry-pick branch check:

  • I have checked the version labels for the target branches to which this PR will be auto-backported
    • 3.0
    • 2.5
    • 2.4
    • 2.3

Signed-off-by: stephen <stephen5217@163.com>
@@ -1966,6 +1966,7 @@ optimizerTrace
partitionDesc
: PARTITION BY RANGE identifierList '(' (rangePartitionDesc (',' rangePartitionDesc)*)? ')'
| PARTITION BY LIST identifierList '(' (listPartitionDesc (',' listPartitionDesc)*)? ')'
| PARTITION BY identifierList
stephen-shelby (Contributor, Author) commented:

This grammar covers table creation for Hive, Hudi, Iceberg, and Delta Lake.

@sonarqubecloud

SonarCloud Quality Gate failed.

  • Bugs: 6 (rating C)
  • Vulnerabilities: 0 (rating A)
  • Security Hotspots: 0 (rating A)
  • Code Smells: 18 (rating A)
  • Coverage: 0.0%
  • Duplication: 0.0%

@stephen-shelby stephen-shelby merged commit c2b8c4f into StarRocks:main Apr 12, 2023
@stephen-shelby stephen-shelby deleted the create_iceberg_table branch May 10, 2023 05:58
numbernumberone pushed a commit to numbernumberone/starrocks that referenced this pull request May 31, 2023
abc982627271 pushed a commit to abc982627271/starrocks that referenced this pull request Jun 5, 2023
southernriver pushed a commit to southernriver/starrocks that referenced this pull request Nov 28, 2023
4 participants