
[FLINK-37247][FileSystems][Tests] Implement common Hadoop file system integration tests for GCS. #26102


Status: Open. Wants to merge 1 commit into base: master.

Conversation

@cnauroth (Contributor) commented Feb 3, 2025

What is the purpose of the change

Increase test coverage for GCS integration by providing integration tests derived from the common flink-hadoop-fs test suites. These optional tests run only if the environment is configured for access to GCS.

Brief change log

  • Create JUnit extension RequireGCSConfiguration. The extension checks for the environment variables GOOGLE_APPLICATION_CREDENTIALS and GCS_BASE_PATH. If either is not defined, the tests are skipped.
  • Create GCS subclasses of the suites defined in flink-hadoop-fs.
  • Two tests are overridden in the subclasses and skipped. One is not meaningful for GCS and triggers a false failure. The other would require much more intrusive changes to work on GCS, because it assumes state is held in exactly one file, which is not true for the GCS implementation.
  • Allow RecoverableWriter tests to return null from getLocalTmpDir() to indicate no allocation/release of local storage is required.
  • A new ArchUnit exception is added, because these integration tests can't meaningfully use the mini-cluster extensions. This is similar to how it's handled for other file systems.
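
The gating in the first bullet can be sketched as follows. This is a simplified, self-contained illustration rather than the actual extension code: in the PR the check is a JUnit 5 extension named RequireGCSConfiguration, while the class GcsConfigCheck and method gcsConfigured below are hypothetical stand-ins.

```java
// Hedged sketch of the environment-variable gate described in the change
// log. The real RequireGCSConfiguration is a JUnit 5 extension; the names
// used here are hypothetical and for illustration only.
import java.util.Map;

public class GcsConfigCheck {

    /** Returns true only when both required variables are set and non-empty. */
    static boolean gcsConfigured(Map<String, String> env) {
        String credentials = env.get("GOOGLE_APPLICATION_CREDENTIALS");
        String basePath = env.get("GCS_BASE_PATH");
        return credentials != null && !credentials.isEmpty()
                && basePath != null && !basePath.isEmpty();
    }

    public static void main(String[] args) {
        // With neither variable set, the GCS tests would be skipped.
        if (gcsConfigured(Map.of())) {
            throw new AssertionError("expected tests to be skipped");
        }
        // With both variables set, the GCS tests would run.
        if (!gcsConfigured(Map.of(
                "GOOGLE_APPLICATION_CREDENTIALS", "/tmp/credentials.json",
                "GCS_BASE_PATH", "gs://bucket/flink-tests"))) {
            throw new AssertionError("expected tests to run");
        }
        System.out.println("ok"); // prints: ok
    }
}
```

In the real extension, a failed check translates into disabled (skipped) tests through JUnit's conditional test execution rather than a plain boolean.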

Verifying this change

This change added tests and can be verified as follows:

New integration tests are skipped when run without this configuration, so there is no impact on existing developer workflows:

mvn -Pfast -pl flink-filesystems/flink-gs-fs-hadoop clean verify
...
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
...

With configuration in place, the tests pass (with the exception of the two that are skipped):

export GOOGLE_APPLICATION_CREDENTIALS=/tmp/credentials.json
export GCS_BASE_PATH=gs://<BUCKET>/flink-tests
mvn -Pfast -pl flink-filesystems/flink-gs-fs-hadoop clean verify
...
[WARNING] Tests run: 35, Failures: 0, Errors: 0, Skipped: 2
...

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): no
  • The public API, i.e., is any changed class annotated with @Public(Evolving): no
  • The serializers: no
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? no
  • If yes, how is the feature documented? not applicable

@flinkbot (Collaborator) commented Feb 3, 2025

CI report:

Bot commands: the @flinkbot bot supports the following commands:
  • @flinkbot run azure re-run the last Azure build

@cnauroth (Contributor) commented Feb 4, 2025

I pushed up a change, running mvn spotless:apply to fix the style violations.

The test failures appear to be unrelated.

@cnauroth (Contributor) commented:

Hello @MartijnVisser . Are you available for a code review?

I'm planning on sending another pull request later for another GCS connector version upgrade. It would be nice to have these new tests in place in a known working state first.

Thanks!

@cnauroth (Contributor) commented:

@MartijnVisser, a gentle reminder requesting code review of #26102, which adds integration test coverage for the GCS file system path. After that, I'm also aiming for a GCS dependency upgrade: #26160.

I'll also try @dannycranmer and @xintongsong in case Martijn isn't available.

Thanks everyone!


This PR is being marked as stale since it has not had any activity in the last 90 days.
If you would like to keep this PR alive, please leave a comment asking for a review.
If the PR has merge conflicts, update it with the latest from the base branch.

If you are having difficulty finding a reviewer, please reach out to the community; contact details can be found here: https://flink.apache.org/what-is-flink/community/

If this PR is no longer valid or desired, please feel free to close it.
If no activity occurs in the next 30 days, it will be automatically closed.

@github-actions github-actions bot added the stale label May 27, 2025
@cnauroth (Contributor) commented:

I would still like to contribute this PR. I'll reach out to dev@flink.apache.org to ask for help with review.

@github-actions github-actions bot removed the stale label May 28, 2025