Fix treatment of "#" in S3Hook.parse_s3_url() #41796
Conversation
Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contributors' Guide (https://github.com/apache/airflow/blob/main/contributing-docs/README.rst)
vincbeck left a comment:
Love it!
Static checks are failing; running pre-commit should auto-resolve them.
Force-pushed from 8fa89da to ca4aa16
Static tests are failing.
The current implementation of `parse_s3_url` will truncate a key if it contains an octothorpe character. By passing the `allow_fragments=False` argument to `urlsplit`, keys will be correctly parsed.
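For illustration, a minimal sketch of the `urlsplit` behavior this description refers to, using the URL from the linked issue:

```python
from urllib.parse import urlsplit

url = "s3://my-bucket/path/to/key/email campaign - PO# 123456_REPORT.csv"

# Default: urlsplit treats everything after "#" as a URL fragment,
# so the path (and therefore the S3 key) is truncated at the octothorpe.
default = urlsplit(url)
print(default.path)      # /path/to/key/email campaign - PO
print(default.fragment)  # " 123456_REPORT.csv" -- the lost part of the key

# With allow_fragments=False, the "#" stays a literal path character.
fixed = urlsplit(url, allow_fragments=False)
print(fixed.path)        # /path/to/key/email campaign - PO# 123456_REPORT.csv
```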
Awesome work, congrats on your first merged pull request! You are invited to check our Issue Tracker for additional contributions.
Apache Airflow version
2.8.4 in my environment, but the issue is still present in main
What happened
A client submitted an S3 file to my workflow with an octothorpe in the filename, essentially `s3://my-bucket/path/to/key/email campaign - PO# 123456_REPORT.csv`. When my Airflow DAG tried to parse this URL, part of the filename was lost: everything from the `#` onward was dropped from the key.

What you think should happen instead
The key should not be truncated. The result of the above example should be `('my-bucket', 'path/to/key/email campaign - PO# 123456_REPORT.csv')`.

How to reproduce
Call `S3Hook.parse_s3_url()` with a `#` character in the S3 URL. Everything after the `#` is lost, because `urllib.parse.urlsplit()` is currently called with the default option `allow_fragments=True`. A minimal reproduction is sketched below.
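A reproduction sketch, assuming the Amazon provider package is installed; `parse_s3_url()` is a static method, so no AWS connection is needed:

```python
from airflow.providers.amazon.aws.hooks.s3 import S3Hook

url = "s3://my-bucket/path/to/key/email campaign - PO# 123456_REPORT.csv"
bucket, key = S3Hook.parse_s3_url(url)

print(bucket)  # my-bucket
# Before this fix, the key is truncated at the "#":
print(key)     # path/to/key/email campaign - PO
# With the fix, the full key is returned:
# path/to/key/email campaign - PO# 123456_REPORT.csv
```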
This PR passes `allow_fragments=False` to `urlsplit` to prevent this error. As far as I can tell, there are no valid cases of a `#` in an S3 key being treated as a fragment, and no existing GitHub issue covers this.

Operating System
Ubuntu 22.04
Versions of Apache Airflow Providers
Deployment
This may be reproduced without deploying
Are you willing to submit PR?
Code of Conduct