
Add --upload.file_regex parameter #293


Merged


dschneller
Contributor

We have seen cases where, due to aborts in earlier steps,
the upload stage would eagerly upload to S3 lots of files
that were supposed to go into an archive.
This new optional parameter allows specifying a regular
expression for the file names that should actually be uploaded,
regardless of what else might be in the source directory.

For example, specify --upload.file_regex='.+\.tar$' to ensure that only
tar archives are ever sent.
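As an illustration of the filter's effect, here is a standalone sketch (the file names are made up for illustration; this is not the tool's own code) that keeps only the names matching the regex:

```bash
# Candidate files as they might sit in the source directory after an abort.
file_regex='.+\.tar$'
candidates='dump.tar
dump.tar.partial
notes.txt'

# Keep only the names matching the regex, as the upload stage would.
to_upload=$(printf '%s\n' "${candidates}" | grep -E "${file_regex}")
printf '%s\n' "${to_upload}"   # prints only "dump.tar"
```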

There was already a switch in place for the Python executable, but both the
readlink and cp commands use flags not present in the default macOS binaries.

This commit adds an upfront check and aborts with a message explaining that
the coreutils package from Homebrew is needed to get the GNU variants of
both commands.
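Such a check can be sketched as follows (a hypothetical standalone version, not the PR's exact code). It relies on the fact that the GNU variants answer --version with exit status 0, while the stock BSD binaries shipped with macOS reject the flag:

```bash
# Hypothetical upfront check for GNU readlink and cp.
check_gnu_tools() {
  for cmd in readlink cp; do
    if ! "${cmd}" --version >/dev/null 2>&1; then
      echo "${cmd} is not the GNU variant; run 'brew install coreutils'" >&2
      return 1
    fi
  done
}

# On a system with GNU coreutils this prints the confirmation.
check_gnu_tools && echo "GNU coreutils found"
```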

This change also allows specifying a custom location for the "tar" command.
In addition, the flags passed to "tar" are now sent individually (`tar -cf`
becomes `tar -c -f`).

This allows easily customizing how the archiving is performed without having to add
lots of new options. For example, you could encrypt backup data via a simple shell script
and specify it for --archive.tar.binary:

```bash
#!/bin/bash
# Wrapper around "tar" that pipes the archive straight into GPG instead of
# writing it to disk unencrypted. It intercepts "-f <file>", remembers the
# intended output file, and tells tar to write to stdout instead.
gpg_pubkey_id=XXXXXXX
new_args=()

while [ "${#}" -gt 0 ]; do
  case "$1" in
    -f)
      shift
      original_output_file="$1"
      shift
      new_args+=(--to-stdout)
      ;;
    *)
      new_args+=("$1")
      shift
      ;;
  esac
done

tar "${new_args[@]}" | gpg --always-trust --encrypt --recipient "${gpg_pubkey_id}" -z 0 --output "${original_output_file}"
```

This has several advantages:

* Backups are never written to disk unencrypted.
* Encryption happens in one go, instead of incurring the potentially heavy
  additional I/O of a separate encryption step.
* It's transparent for the upload stages, so you can still benefit from the integrated
  S3 (or other) uploads.
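Incidentally, sending the flags individually is what makes intercepting -f in such a wrapper reliable: the case statement only matches -f when it arrives as its own argument. A minimal sketch with hypothetical arguments:

```bash
# Simulate what the tool now passes to the wrapper: separate flags.
set -- -c -f /tmp/out.tar data/

found_f=no
for arg in "$@"; do
  # With the combined form "-cf" this match would never fire.
  if [ "${arg}" = "-f" ]; then
    found_f=yes
  fi
done
echo "${found_f}"   # prints "yes"
```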

The S3 uploader fails if bucket permissions are restricted to allow access
only to certain prefixes within a bucket. The default behavior of boto's
get_bucket() is to "validate" the bucket by accessing its root, needlessly
breaking the uploader even though all necessary permissions might be present.

This patch adds a new command line switch, --upload.s3.skip_bucket_validation,
to disable this behavior.
@dschneller changed the title from "Add ==upload.file_regex parameter" to "Add --upload.file_regex parameter" on Dec 6, 2018
@timvaillancourt timvaillancourt left a comment

@timvaillancourt timvaillancourt merged commit 30c5704 into Percona-Lab:master Dec 7, 2018
@dschneller dschneller deleted the upload-file-regex-filter branch December 13, 2018 12:17