HDFS-16930. Update the wrapper for fuse-dfs #5449

Open
chaohengstudent wants to merge 2 commits into trunk

Conversation

chaohengstudent

Description of PR

Update the libhdfs library path using LIBHDFS_PATH.
Add the JVM library directory to LD_LIBRARY_PATH in a way that is also compatible with Java 11.
Edit the CLASSPATH construction. (A sketch of the first two changes follows below.)
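
A minimal sketch of what the LIBHDFS_PATH and LD_LIBRARY_PATH changes could look like in fuse_dfs_wrapper.sh; the default library directory and the Java 8 fallback path here are assumptions, not the exact patch:

# Sketch only: default LIBHDFS_PATH to the native-library directory if unset.
export LIBHDFS_PATH="${LIBHDFS_PATH:-$HADOOP_HOME/lib/native}"

# Java 11 ships libjvm.so under $JAVA_HOME/lib/server, whereas Java 8 used
# $JAVA_HOME/jre/lib/<arch>/server, so probe for whichever layout exists.
if [ -d "$JAVA_HOME/lib/server" ]; then
  JVM_LIB_DIR="$JAVA_HOME/lib/server"           # Java 11+ layout
else
  JVM_LIB_DIR="$JAVA_HOME/jre/lib/amd64/server" # Java 8 layout (arch assumed)
fi

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIBHDFS_PATH:$JVM_LIB_DIR"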

How was this patch tested?

manual test

root@6b76602de66c:/opt/hadoop# ./fuse_dfs_wrapper.sh -d hdfs://192.168.103.44:14370 /mnt/hdfs
INFO /home/chaoheng/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_options.c:115 Ignoring option -d
INFO /home/chaoheng/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_options.c:164 Adding FUSE arg /mnt/hdfs
FUSE library version: 2.9.9
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 2, opcode: INIT (26), nodeid: 0, insize: 104, pid: 0
INIT: 7.36
flags=0x73fffffb
max_readahead=0x00020000
INFO /home/chaoheng/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_init.c:98 Mounting with options: [ protected=(NULL), nn_uri=hdfs://192.168.103.44:14370, nn_port=0, debug=0, read_only=0, initchecks=0, no_permissions=0, usetrash=0, entry_timeout=60, attribute_timeout=60, rdbuffer_size=10485760, direct_io=0 ]
fuseConnectInit: initialized with timer period 5, expiry period 300
   INIT: 7.19
   flags=0x00000039
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
   unique: 2, success, outsize: 40


For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

@hadoop-yetus

💔 -1 overall

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:----------|--------:|:--------|:--------|
| +0 🆗 | reexec | 12m 53s | | Docker mode activated. |
| _ Prechecks _ | | | | |
| +1 💚 | dupname | 0m 0s | | No case conflicting files found. |
| +0 🆗 | codespell | 0m 0s | | codespell was not available. |
| +0 🆗 | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 🆗 | shelldocs | 0m 1s | | Shelldocs was not available. |
| +1 💚 | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 ❌ | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ | | | | |
| -1 ❌ | mvninstall | 37m 41s | /branch-mvninstall-root.txt | root in trunk failed. |
| +1 💚 | mvnsite | 0m 28s | | trunk passed |
| +1 💚 | shadedclient | 19m 31s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ | | | | |
| +1 💚 | mvninstall | 0m 15s | | the patch passed |
| +1 💚 | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 💚 | mvnsite | 0m 17s | | the patch passed |
| +1 💚 | shellcheck | 0m 0s | | No new issues. |
| +1 💚 | shadedclient | 19m 1s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ | | | | |
| +1 💚 | unit | 3m 38s | | hadoop-hdfs-native-client in the patch passed. |
| +1 💚 | asflicense | 0m 39s | | The patch does not generate ASF License warnings. |
| | | 97m 29s | | |
| Subsystem | Report/Notes |
|:----------|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5449/1/artifact/out/Dockerfile |
| GITHUB PR | #5449 |
| Optional Tests | dupname asflicense mvnsite unit codespell detsecrets shellcheck shelldocs |
| uname | Linux a0da5c06db93 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 6856251 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5449/1/testReport/ |
| Max. process+thread count | 558 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5449/1/console |
| versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

Contributor

@jojochuang jojochuang left a comment

Thanks for the PR. Has this been tested on different machines, such as x86 Linux or Mac?

while IFS= read -r -d '' file
do
  export CLASSPATH=$CLASSPATH:$file
done < <(find "$HADOOP_HOME/hadoop-hdfs-project" -name "*.jar" -print0)
done < <(find "$HADOOP_HOME/hadoop-tools/hadoop-distcp" -name "*.jar" -print0)
Contributor

It's been a while but I don't think fuse-dfs relies on distcp jars
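
If the distcp jars really are unnecessary for fuse-dfs, the wrapper could build its CLASSPATH from the HDFS project jars alone; a sketch under that assumption, not the actual patch:

# Sketch: keep only the hadoop-hdfs-project loop and drop the hadoop-distcp one.
while IFS= read -r -d '' file
do
  export CLASSPATH="$CLASSPATH:$file"
done < <(find "$HADOOP_HOME/hadoop-hdfs-project" -name "*.jar" -print0)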

@lfrancke
Member

If some of these things are not clear, we could split out the bits that can definitely be fixed, e.g. the LIBHDFS_PATH fix.

I've stumbled across this as well tonight.
