Description
As per the examples in the post on this blog, the time needed to compress files into a gzipped tarball can be reduced by handing the compression off to an external compressor. This would be ideal for multi-core Pis, other embedded ARM boards, and any other systems that may run your scripts; I'm personally using it on a low-end quad-core amd64 thin client with the network checks commented out, and it works quite well.
By changing the backup line in the script to read as follows...
tar -I pigz --exclude='./backups' --exclude='./cache' --exclude='./logs' --exclude='./jre' --exclude='./paperclip.jar' -pvcf backups/$(date +%Y.%m.%d.%H.%M.%S).tar.gz ./*
...it will use all available cores for the backup instead of only one, which makes it roughly 2-3 times faster. That helps both with a faster startup for a small server instance and with a long-running or pre-generated world that has many chunks. It might also be worth adding a sync command after the backup to make sure the data is fully flushed to the disk/SD card before the java instance starts; it doesn't hurt anything.
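As a rough sketch of how the relevant part of the startup script could end up looking (the gzip fallback, the COMPRESSOR variable name, and the java line are my own placeholders, not something taken from the existing script):

# Use pigz when it is installed, otherwise fall back to single-threaded gzip.
if command -v pigz >/dev/null 2>&1; then
    COMPRESSOR="pigz"
else
    COMPRESSOR="gzip"
fi

# Create the backup, excluding the usual directories, compressing via pigz on all cores.
tar -I "$COMPRESSOR" --exclude='./backups' --exclude='./cache' --exclude='./logs' \
    --exclude='./jre' --exclude='./paperclip.jar' \
    -pvcf backups/$(date +%Y.%m.%d.%H.%M.%S).tar.gz ./*

# Flush writes to the disk/SD card before launching the server.
sync

# Placeholder for whatever launch command the script already uses.
java -jar paperclip.jar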
I've already tested this in my own startup script and it works splendidly. While I don't have a modern Pi to test with, it should be safe to use there as well, as long as tar recognizes the -I pigz argument together with -pvcf (without the z flag) at the end instead.
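If someone does want to sanity-check a Pi before switching over, something like this should confirm that pigz is present and that the installed tar accepts -I (a rough check on my part, not something from the script):

command -v pigz >/dev/null 2>&1 && echo "pigz found" || echo "pigz missing"
# -T /dev/null creates an empty archive, so this only exercises the -I option.
tar -I pigz -cf /dev/null -T /dev/null && echo "tar accepts -I pigz"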