
[Fix](multi-catalog) Fix hadoop viewfs issues. #24507

Merged · 1 commit · Sep 18, 2023

Conversation

kaka11chen (Contributor)

Proposed changes

Error message:

Caused by: org.apache.doris.datasource.CacheException: failed to get input splits for FileCacheKey{location='viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table', inputFormat='org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'} in catalog test_viewfs_hive
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.loadFiles(HiveMetaStoreCache.java:466) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.access$400(HiveMetaStoreCache.java:112) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:210) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.common.util.CacheBulkLoader.lambda$null$0(CacheBulkLoader.java:42) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
        ... 3 more
Caused by: org.apache.doris.common.UserException: errCode = 2, detailMessage = Failed to list located status for path: viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table
        at org.apache.doris.fs.remote.RemoteFileSystem.listLocatedFiles(RemoteFileSystem.java:54) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.getFileCache(HiveMetaStoreCache.java:381) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.loadFiles(HiveMetaStoreCache.java:432) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.access$400(HiveMetaStoreCache.java:112) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:210) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.common.util.CacheBulkLoader.lambda$null$0(CacheBulkLoader.java:42) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
        ... 3 more
Caused by: java.nio.file.AccessDeniedException: viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Invoker.onceInTheFuture(Invoker.java:190) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing$ObjectListingIterator.next(Listing.java:651) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing$FileStatusListingIterator.requestNextBatch(Listing.java:430) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing$FileStatusListingIterator.<init>(Listing.java:372) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing.createFileStatusListingIterator(Listing.java:143) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing.getListFilesAssumingDir(Listing.java:211) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListFiles(S3AFileSystem.java:4898) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listFiles$38(S3AFileSystem.java:4840) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2480) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2499) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.listFiles(S3AFileSystem.java:4839) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.doris.fs.remote.RemoteFileSystem.listLocatedFiles(RemoteFileSystem.java:50) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.getFileCache(HiveMetaStoreCache.java:381) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.loadFiles(HiveMetaStoreCache.java:432) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.access$400(HiveMetaStoreCache.java:112) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:210) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.common.util.CacheBulkLoader.lambda$null$0(CacheBulkLoader.java:42) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
        ... 3 more
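
The trace shows the root of the problem: a `viewfs://` location is being listed through `S3AFileSystem`, which then fails because no AWS credentials are configured. For background, ViewFS paths are resolved on the client side through a mount table and delegated to the underlying (typically HDFS) file system. A minimal sketch using Hadoop's public `Configuration`/`FileSystem` API is below; the cluster name matches the failing location in the trace, but the mounted target namespace is a made-up example, not taken from this PR:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewFsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side ViewFS mount table; "my-cluster" is the cluster name
        // from the failing location above.
        conf.set("fs.defaultFS", "viewfs://my-cluster");
        // Hypothetical target: mount /ns1 onto an HDFS nameservice.
        conf.set("fs.viewfs.mounttable.my-cluster.link./ns1",
                 "hdfs://nameservice1/");

        FileSystem fs = FileSystem.get(URI.create("viewfs://my-cluster/"), conf);
        // Listing a viewfs path resolves through the mount table, so the
        // underlying file system here is HDFS, not S3.
        for (FileStatus status : fs.listStatus(new Path("/ns1/usr/hive/warehouse"))) {
            System.out.println(status.getPath());
        }
    }
}
```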

Solution:

Fix the Hadoop ViewFS issue introduced by #24242; as the stack trace shows, the `viewfs://` location was being handled by `S3AFileSystem` rather than an HDFS-backed file system.
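
A minimal sketch of the kind of scheme-based routing this fix implies; the enum and method names below are illustrative assumptions, not the actual Doris code:

```java
import java.net.URI;

// Hypothetical names for illustration; the real Doris classes differ.
enum FileSystemType { DFS, S3 }

final class LocationDispatcher {
    /** Choose a remote file system implementation from the location's URI scheme. */
    static FileSystemType fromLocation(String location) {
        String scheme = URI.create(location).getScheme();
        if (scheme == null) {
            return FileSystemType.DFS;
        }
        switch (scheme.toLowerCase()) {
            case "hdfs":
            case "viewfs": // viewfs must be treated as an HDFS-style file system
                return FileSystemType.DFS;
            case "s3":
            case "s3a":
            case "s3n":
                return FileSystemType.S3;
            default:
                return FileSystemType.DFS;
        }
    }
}
```

With a mapping like this, `fromLocation("viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table")` resolves to `DFS`, so listing the table's files no longer goes through the S3 credential chain shown in the stack trace.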


@kaka11chen (Contributor, Author)

run buildall

@doris-robot

(From new machine) TeamCity pipeline, ClickBench performance test result:
the sum of best hot time: 47.71 seconds
stream load tsv: 618 seconds loaded 74807831229 Bytes, about 115 MB/s
stream load json: 20 seconds loaded 2358488459 Bytes, about 112 MB/s
stream load orc: 64 seconds loaded 1101869774 Bytes, about 16 MB/s
stream load parquet: 31 seconds loaded 861443392 Bytes, about 26 MB/s
insert into select: 29.1 seconds inserted 10000000 Rows, about 343K ops/s
storage size: 17162378635 Bytes

@xxiao2018 (Contributor) left a comment


LGTM

@github-actions (bot)

PR approved by anyone and no changes requested.

@morningman (Contributor) left a comment


LGTM

@github-actions bot added the approved label (Indicates a PR has been approved by one committer.) on Sep 18, 2023
@github-actions (bot)

PR approved by at least one committer and no changes requested.

@yiguolei merged commit a07f59d into apache:master on Sep 18, 2023
morningman pushed a commit that referenced this pull request Sep 18, 2023
Labels: approved, dev/2.0.2-merged, reviewed
6 participants