Configure
Open the s3backup.conf file for editing. All configurable settings are in this file.
Any line beginning with “#” is a comment. Interpolation is supported, so you can use the value of one option in the value of another within the same section by writing “%(variable_name)s”.
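For example (illustrative values; the same technique appears later for log_path):
base_directory = /home/rjf/backup
log_path = %(base_directory)s/backup.log
# log_path becomes /home/rjf/backup/backup.log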
First set the company variable. This is used as a prefix for the logger; for example, company = name will result in log entries like:
2012-05-12 13:44:55,961: name.s3backup (171)- key: <Key: mybucketname,mypcname/daily/20120512.zip>
If you wish to change the log format, the code is in log.py.
company = simplify
Next come the AWS settings. I recommend creating a different AWS user for each client/location being backed up for greater security.
Set keypath to the location of your AWS keys. It can be an absolute path or a path relative to the current directory. The keys file should have the access key on the first line and the secret key on the next. All other lines are ignored, so you can add random text if you wish.
keypath = keys.s3
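For example, a keys.s3 file might contain (placeholder values taken from the AWS documentation, not real credentials):
AKIAIOSFODNN7EXAMPLE
wJalrXVUtnFIEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
anything after the first two lines is ignored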
The bucket_name allows for grouping by organization (or building, or galaxy, etc.). For example, if backing up computers at multiple locations, each location gets its own bucket. Note that bucket names must be unique across all of AWS.
Also set the machine_name; each computer gets a name to uniquely identify it.
bucket_name = simplify_main_office
machine_name = rjf_laptop
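With these settings and the key format shown in the log example above, a daily backup should appear in the simplify_main_office bucket under a key like:
rjf_laptop/daily/20120512.zip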
Now we configure the directory settings. Use base_directory if the backup lists will all be in the same location; otherwise, set base_directory to None and use absolute paths for the other directory variables, as in the second example below.
base_directory = /some/dir
daily_list = daily.s3
weekly_list = weekly.s3
monthly_list = monthly.s3
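If the lists live in different places, the equivalent configuration would be (illustrative paths):
base_directory = None
daily_list = /home/rjf/lists/daily.s3
weekly_list = /home/rjf/lists/weekly.s3
monthly_list = /home/rjf/lists/monthly.s3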
destination determines where to create the archive before uploading to AWS, and is the default download destination when restoring. log_path sets the log file to use. Neither uses the base directory automatically, but interpolation lets you build on it, as log_path does below:
destination = /tmp/backup
log_path = %(base_directory)s/backup.log
Set the directory in which to store the hash files (used for creating incremental backups). We upload the hash file created for each backup, but we can also keep a local copy. If you don’t want to keep them and you don’t keep local copies of the backups, you can set hash_file_path = destination.
hash_file_path = %(base_directory)s/hash_files
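To show why the hash files matter, here is a minimal sketch of how per-file hashes can drive an incremental backup. It only illustrates the idea; it is not s3-backup's actual code, and the function names and the use of MD5 are assumptions.

import hashlib
import os

def file_md5(path):
    # Hash a file in chunks so large files don't have to fit in memory.
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            md5.update(chunk)
    return md5.hexdigest()

def changed_files(directory, previous_hashes):
    # Yield (path, digest) for files whose hash differs from the last backup's hash file.
    for root, _, names in os.walk(directory):
        for name in names:
            path = os.path.join(root, name)
            digest = file_md5(path)
            if previous_hashes.get(path) != digest:
                yield path, digest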
When testing, keep raise_log_errors at True, but set it to False for production.
raise_log_errors = False
If you don’t want to keep the archive after making a backup, s3-backup can delete it automatically. If delete_archive is set to True, the entire archive directory (the destination set previously) will be deleted.
delete_archive = True
Set use_encryption to True if you want encryption, and set the encryption password. You need this password to restore encrypted backups, so don’t lose it. The password is hashed, then that hash is used to encrypt the data. For greater security and control, all encryption is handled prior to transferring to AWS rather than relying on server-side encryption.
Also set passwd_hash_type here. If you’re using pycrypto 2.4+, you can use MD5, SHA, SHA256, or SHA512; previous versions of pycrypto are limited to MD5 and SHA.
use_encryption = True
encryption_password = 'Some text to be hashed'
passwd_hash_type = SHA512
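The hash-then-encrypt scheme described above can be illustrated with pycrypto roughly as follows. This is only a sketch of the general approach, not s3-backup's actual implementation; the choice of SHA256 for the key, the cipher mode, and the IV handling are all assumptions.

import os
from Crypto.Cipher import AES
from Crypto.Hash import SHA256

def encrypt_file(password, in_path, out_path):
    # Hash the password; the 32-byte SHA256 digest is used as an AES-256 key (assumption).
    key = SHA256.new(password.encode('utf-8')).digest()
    iv = os.urandom(16)  # random initialisation vector, stored alongside the ciphertext
    cipher = AES.new(key, AES.MODE_CFB, iv)
    with open(in_path, 'rb') as src, open(out_path, 'wb') as dst:
        dst.write(iv)
        dst.write(cipher.encrypt(src.read()))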
Finally, set the archive compression method. Valid types are “none” (an uncompressed tar archive; note that this is the string “none”, not None), “gz”, “bz2”, and “zip”.
compression = bz2
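As a rough illustration of what these settings could map to in Python's standard library (a sketch only; it assumes “none”, “gz”, and “bz2” produce tar archives and “zip” produces a zip archive, which may not match s3-backup's actual code):

import tarfile
import zipfile

def make_archive(compression, archive_path, file_paths):
    # "zip" builds a zip archive; the other settings build a tar archive (assumed mapping).
    if compression == 'zip':
        with zipfile.ZipFile(archive_path, 'w', zipfile.ZIP_DEFLATED) as zf:
            for path in file_paths:
                zf.write(path)
    else:
        mode = 'w' if compression == 'none' else 'w:' + compression
        with tarfile.open(archive_path, mode) as tf:
            for path in file_paths:
                tf.add(path)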
Now we’re done! Next you need to create the backup lists, then test the system.