* Ignore IntelliJ Project File
* Make build.sh compatible with macOS
There was already a switch in place for the Python executable, but both the
readlink and cp commands use flags that are not available in the default macOS binaries.
This commit adds an upfront check that aborts with a message explaining that the
coreutils package from Homebrew is needed to get the GNU variants of both commands.
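As a rough illustration of the idea behind such a check (written in Python here for consistency with the rest of this page, while build.sh itself is a shell script, and with hypothetical messages): GNU coreutils binaries answer `--version` with a line containing "GNU coreutils", while the BSD tools shipped with macOS reject the flag entirely.
```
import shutil
import subprocess
import sys

# Illustrative sketch only: abort unless readlink and cp are the GNU variants.
for tool in ("readlink", "cp"):
    path = shutil.which(tool)
    if path is None:
        sys.exit(f"{tool} not found on PATH")
    result = subprocess.run([path, "--version"], capture_output=True, text=True)
    if "GNU coreutils" not in result.stdout:
        sys.exit(f"{tool} is not the GNU variant; install it with 'brew install coreutils'")
```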
* Add --archive.tar.binary parameter
Allows specifying a custom location for the "tar" command.
Also, the flags passed to "tar" are now passed individually (`tar -cf` becomes `tar -c -f`).
This makes it easy to customize how the archiving is performed without having to add
lots of new options. For example, you could encrypt backup data via a simple shell script
and specify it for --archive.tar.binary:
```
#!/bin/bash
# Wrapper around tar that encrypts the archive on the fly: it intercepts the
# "-f <file>" option, has tar write the archive to stdout instead, and pipes
# it through gpg, which writes the encrypted result to the original file.
gpg_pubkey_id=XXXXXXX

new_args=()
while [ "${#}" -gt 0 ]; do
    case "$1" in
        -f)
            # Remember the intended output file and redirect tar to stdout
            # ("-f -" makes tar write the archive to standard output).
            shift
            original_output_file="$1"
            shift
            new_args+=(-f -)
            ;;
        *)
            new_args+=("$1")
            shift
            ;;
    esac
done

tar "${new_args[@]}" | gpg --always-trust --encrypt --recipient "${gpg_pubkey_id}" -z 0 --output "${original_output_file}"
```
This has several advantages:
* Backups are never written to disk unencrypted
* Encryption happens in one pass, instead of incurring the potentially heavy additional
I/O of a separate encryption step.
* It's transparent to the upload stages, so you can still benefit from the integrated
S3 (or other) uploads.
* Option to fix "S3ResponseError: 403 Forbidden"
The S3 uploader fails if bucket permissions are restricted to only allow
access to certain prefixes within a bucket. By default, boto's
"get_bucket()" "validates" the bucket by accessing its root, needlessly
breaking the uploader even though all necessary permissions may be present.
This patch adds a new command-line switch, --upload.s3.skip_bucket_validation,
to disable this behavior.
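For context, a minimal sketch of the boto2 calls involved (the bucket name is hypothetical); the new switch maps to the `validate` argument of `get_bucket()`:
```
import boto

conn = boto.connect_s3()  # credentials come from the environment/config

# Default behavior: boto fetches the bucket root to "validate" the bucket,
# which fails with 403 Forbidden if only certain prefixes are accessible.
bucket = conn.get_bucket("my-backup-bucket")

# With --upload.s3.skip_bucket_validation, the uploader skips that request:
bucket = conn.get_bucket("my-backup-bucket", validate=False)
```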
* Related: Fix flake8: Make regex a raw string
* Related: Fix flake8: Make regex a raw string
* Related: Fix flake8: Make regex a raw string
* Fix indentation
* Add --upload.s3.target_mb_per_second parameter
Boto2 unfortunately does not provide a bandwidth limiter for
S3 uploads. Instead, it uploads a completed backup as quickly
as possible, potentially consuming all available network bandwidth
and thereby impacting other applications.
This patch adds a very basic throttling mechanism for S3 uploads
by optionally hooking into the upload progress callback and measuring
the current bandwidth. If it exceeds the designated maximum, the
upload thread pauses for a suitable amount of time (capped
at 3 seconds) before resuming.
While this is far from ideal, it is an easy-to-understand and
(in my experience) good-enough method to protect other network
users from starvation.
Notice: the calculation happens per thread, so with multiple upload threads the aggregate bandwidth can exceed the configured limit!
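A minimal sketch of the idea, assuming boto2's upload progress callback (the `cb=` argument to e.g. `Key.set_contents_from_filename`) as the hook; the class and variable names here are illustrative, not the actual implementation:
```
import time

class UploadThrottle:
    """Illustrative per-upload throttle driven by boto2's progress callback."""

    MAX_SLEEP_SECS = 3  # cap on a single pause, as described above

    def __init__(self, target_mb_per_second):
        self.target_bytes_per_sec = target_mb_per_second * 1024 * 1024
        self.start_time = time.time()

    def __call__(self, bytes_transmitted, bytes_total):
        # boto2 invokes the callback periodically during the upload.
        elapsed = max(time.time() - self.start_time, 0.001)
        if bytes_transmitted / elapsed > self.target_bytes_per_sec:
            # Sleep just long enough to bring the average rate back down to
            # the target, but never longer than MAX_SLEEP_SECS.
            needed = bytes_transmitted / self.target_bytes_per_sec - elapsed
            time.sleep(min(needed, self.MAX_SLEEP_SECS))

# Usage sketch:
#   key.set_contents_from_filename(path, cb=UploadThrottle(10), num_cb=100)
```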
* Fix handling of unspecified bandwidth limit
* Add S3 upload bandwidth limit to example config
* Related: Add tar.binary to example config file
* Related: Add skip_bucket_validation to example config