Hello,
the target group for this backup solution is experienced Linux administrators. If you do not "speak" fluent shell script, you will not be happy with it.
The idea is to run a daily/weekly cron job in the background that syncs local data with a remote rclone target in a secure (encrypted), robust, fast and flexible way.
Before it starts, the script checks:
- Is our backup infrastructure online?
- Is the upload speed high enough to complete the backup within 24 h? (the tricky one)
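The 24 h requirement can be turned into a concrete minimum speed: divide the expected amount of changed data by the backup window. A quick sketch (the 9 GB figure is a made-up example, not a measurement):

```shell
# All figures are example assumptions.
DATA_GB=9             # expected changed data per run, in GB
SECONDS_PER_DAY=86400 # the 24 h backup window
# GB -> kB (factor 1024*1024), divided by the window in seconds:
MINSPEED=$(( DATA_GB * 1024 * 1024 / SECONDS_PER_DAY ))
echo "$MINSPEED kb/s"   # prints "109 kb/s"
```

With roughly 9 GB of daily churn you land right at the Cable default of 110 used below.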
The recommended place for the script and its support files is ~/Backups/.
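A matching user crontab entry might look like this (the 03:30 daily schedule and the path are examples; install it with crontab -e):

```
# m  h  dom mon dow  command
30   3  *   *   *    $HOME/Backups/ACloudbackup.sh
```

cron hands the command field to sh, so $HOME is expanded at run time.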
The main cronjob script (~/Backups/ACloudbackup.sh):
#!/bin/bash
#
# encrypted backup to cloud - ACloudbackup
#
# uses:
# -----
# rclone - Cloud backup engine
# dd tail awk cut grep date - GNU standard tools
# ifstat2 - https://aur.archlinux.org/packages/ifstat/ - For a reliable, fast, repeatable online upload speed test
# ntpdig - https://aur.archlinux.org/packages/ntpsec/ - For setting the exact system time (preferred).
# or ntpclient - https://aur.archlinux.org/packages/ntpclient/ - For setting the exact system time.
#
# Add rclone and ntpdig for the user who is supposed to run this script to /etc/sudoers, without a password.
# (Plus chown + chmod with the arguments below, until sudo rclone stops changing the permissions of rclone.conf.)
# Like:
# ## allow user 'user' to execute all commands in alias ACLOUDBACKUP without password
# Cmnd_Alias ACLOUDBACKUP = /usr/bin/rclone, /usr/bin/ntpdig, /usr/bin/chown -f --reference=/tmp/rclone_conf_perm.tmp /home/user/.config/rclone/rclone.conf, /usr/bin/chmod -f --reference=/tmp/rclone_conf_perm.tmp /home/user/.config/rclone/rclone.conf
# user ALL=(ALL) SETENV:NOPASSWD: ACLOUDBACKUP
#
# RESTORING data
# ==============
# I recommend mounting your rclone target and using a *reliable* file manager like Midnight Commander
# to restore your data interactively.
# Things you need to do to mount your rclone targets:
# Install: https://archlinux.org/packages/extra/x86_64/fuse2/
# and https://github.com/Linux-project/rclone-wiki/blob/master/rclone-fstab-mount-helper-script.md
# Put the rclone-fstab-mount-helper-script at /usr/local/bin/rclonefs
# /etc/fstab:
# rclonefs#mega: /mnt/mega fuse noauto,user,config=/home/user/.config/rclone/rclone.conf,allow-other,default-permissions,vfs-cache-max-size=500M,max-read-ahead=16M,vfs-cache-mode=full 0 3
# rclonefs#megacrypt: /mnt/megacrypt fuse noauto,user,config=/home/user/.config/rclone/rclone.conf,allow-other,default-permissions,vfs-cache-max-size=500M,max-read-ahead=16M,vfs-cache-mode=full 0 3
# Make sure you enabled "user_allow_other" in /etc/fuse.conf
# Create empty mount points: sudo mkdir -p /mnt/mega /mnt/megacrypt
# with the right permissions: sudo chown user:users /mnt/mega* ; sudo chmod 0755 /mnt/mega*
# mount /mnt/megacrypt or umount /mnt/megacrypt
#
# CRCLONECLOUD='yandexcrypt' # Every rclone target (in rclone.conf) should work. Don't put your data unencrypted online
CRCLONECLOUD='megacrypt'
# PINGHOST='webdav.yandex.com' # each rclone remote target has a different ping host to assess functionality
PINGHOST='bt1.api.mega.co.nz'
BACKUPSDIR="$HOME/Backups"
RCLONECONF="$HOME/.config/rclone/rclone.conf"
# Minimum (single-channel) upload speed in kB/s, as reported by ifstat. Disable the speed test with =0
# 110~Cable 63~DSL 12~LTE/Mobile
BACKUPMINSPEED=110
MYHOSTNAME=$(hostname)
#
LOGFILE="$BACKUPSDIR/$(basename "$0").log" # basename: $0 contains the full path when started from cron
NTPPOOL='pool.ntp.org'
NTPCLIENT="/usr/bin/ntpdig -S"
# NTPCLIENT="/usr/bin/ntpclient -c1 -s -h"
#
################################################################################################################################
PINGOUT=$(ping -c 1 -W 5 -q $PINGHOST 2>&1)
echo "$PINGOUT" >$LOGFILE # Overwrite the old log file
if (echo "$PINGOUT" | grep -qse "^1 packets transmitted, 1 received"); then
# $PINGHOST is up
# Online upload speed test ?
if [ $BACKUPMINSPEED -gt 0 ]; then
dd if=/dev/urandom of=/tmp/speed.dummy bs=1M count=1 status=none # 1 MiB of incompressible test data; /dev/urandom does not block
/usr/bin/rclone -q copy /tmp/speed.dummy $CRCLONECLOUD:/ >>$LOGFILE 2>&1 &
RCLONEPID=$!
IFSTATOUT=$(/usr/bin/ifstat2 -z 10 1) # Network interface statistics for 10 sec, 1 sample, non-zero counters only
kill $RCLONEPID >/dev/null 2>&1 || kill -9 $RCLONEPID >/dev/null 2>&1 # Kill background speed test rclone
rm -f /tmp/speed.dummy >>$LOGFILE 2>&1 ; /usr/bin/rclone delete $CRCLONECLOUD:/speed.dummy >>$LOGFILE 2>&1 # cleanup
UPSPEED=$(echo "$IFSTATOUT" | tail -n 1 | awk '{ print $2 }' | cut -f1 -d'.') # Column 2 = upload rate, decimals stripped
echo "" >>$LOGFILE
if [ "${UPSPEED:-0}" -lt $BACKUPMINSPEED ]; then # Default to 0 if the parse above came up empty
# It makes no sense to start a cloud backup if the upload speed is too low to finish it in time
echo "Upload speed $UPSPEED kb/s is too low. Minimum configured upload speed is $BACKUPMINSPEED kb/s" >>$LOGFILE 2>&1
exit 3
else
echo "Upload speed = $UPSPEED kb/s (1 channel, random data. Expect up to 4 times this speed with sync)" >>$LOGFILE 2>&1
fi
fi
echo "" >>$LOGFILE 2>&1
if [ "$NTPCLIENT" != "" ] ; then
echo -n "ntpclient: " >>$LOGFILE 2>&1
sudo $NTPCLIENT $NTPPOOL >/dev/null 2>>$LOGFILE # Set our system time as accurate as possible
else
echo -n "No ntpclient available " >>$LOGFILE 2>&1
fi
date >>$LOGFILE 2>&1
TMPDIRSAVE="$TMPDIR" # Save the old $TMPDIR; modern Linux systems usually have a RAM-backed $TMPDIR
export TMPDIR="/var/tmp" # A RAM $TMPDIR is too small for big files, so point $TMPDIR at a mass-storage medium
/usr/bin/touch /tmp/rclone_conf_perm.tmp # Save the user's rclone.conf permissions because sudo rclone changes them (rclone bug?)
/usr/bin/chown -f --reference=$RCLONECONF /tmp/rclone_conf_perm.tmp >>$LOGFILE 2>&1
/usr/bin/chmod -f --reference=$RCLONECONF /tmp/rclone_conf_perm.tmp >>$LOGFILE 2>&1
echo "" >>$LOGFILE 2>&1
/usr/bin/rclone about $CRCLONECLOUD: >>$LOGFILE 2>&1 # start with general info about our rclone target
echo "---------------------------------------------root----------------------------------------------------" >>$LOGFILE 2>&1
sudo --preserve-env /usr/bin/rclone sync /root/ $CRCLONECLOUD:/$MYHOSTNAME/root/ -P --stats 1s --skip-links \
--delete-before --fast-list --create-empty-src-dirs --filter-from $BACKUPSDIR/ToBackupROOT.txt \
--tpslimit 10 --tpslimit-burst 20 --delete-excluded --retries 9 --retries-sleep=10s >>$LOGFILE 2>&1
# Fine tuned parameter for the rclone target. Adjust them to fit your rclone target
echo "---------------------------------------------usr/local-----------------------------------------------" >>$LOGFILE 2>&1
sudo --preserve-env /usr/bin/rclone sync /usr/local/ $CRCLONECLOUD:/$MYHOSTNAME/usr/local/ -P --stats 1s --skip-links \
--delete-before --fast-list --create-empty-src-dirs --filter-from $BACKUPSDIR/ToBackupUSR.txt \
--tpslimit 10 --tpslimit-burst 20 --delete-excluded --retries 9 --retries-sleep=10s >>$LOGFILE 2>&1
echo "---------------------------------------------etc-----------------------------------------------------" >>$LOGFILE 2>&1
sudo --preserve-env /usr/bin/rclone sync /etc/ $CRCLONECLOUD:/$MYHOSTNAME/etc/ -P --stats 1s --skip-links \
--delete-before --fast-list --create-empty-src-dirs --filter-from $BACKUPSDIR/ToBackupETC.txt \
--tpslimit 10 --tpslimit-burst 20 --delete-excluded --retries 9 --retries-sleep=10s >>$LOGFILE 2>&1
echo "---------------------------------------------home----------------------------------------------------" >>$LOGFILE 2>&1
/usr/bin/pacman -Q >$BACKUPSDIR/pacman_installed.txt 2>>$LOGFILE
# Save an up-to-date list of all installed software packages. Adjust to your distro.
# sudo --preserve-env /usr/bin/rclone sync /home/ $CRCLONECLOUD:/$MYHOSTNAME/home/ -P --stats 1s --skip-links \
#--delete-before --fast-list --create-empty-src-dirs --filter-from $BACKUPSDIR/ToBackupHOME.txt \
#--tpslimit 10 --tpslimit-burst 20 --delete-excluded --retries 9 --retries-sleep=10s >>$LOGFILE 2>&1
echo "-----------------------------------------------------------------------------------------------------" >>$LOGFILE 2>&1
/usr/bin/rclone about $CRCLONECLOUD: >>$LOGFILE 2>&1 # Shows the new target storage usage
# /usr/bin/rclone tree $CRCLONECLOUD: >>$LOGFILE 2>&1 # Takes a long time...not sure the info is worth the time
date >>$LOGFILE 2>&1
TMPDIR="$TMPDIRSAVE" # Restore the saved $TMPDIR
export TMPDIR
# Restore the saved rclone user config file permissions. Hopefully fixed in rclone soon.
sudo --preserve-env /usr/bin/chown -f --reference=/tmp/rclone_conf_perm.tmp "$RCLONECONF" >>$LOGFILE 2>&1
sudo --preserve-env /usr/bin/chmod -f --reference=/tmp/rclone_conf_perm.tmp "$RCLONECONF" >>$LOGFILE 2>&1
# Note: /etc/sudoers must list these commands with the exact expanded path of $RCLONECONF
rm -f /tmp/rclone_conf_perm.tmp >>$LOGFILE 2>&1 # Cleanup
else
exit 2 # Ping failed
fi
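The script exits with distinct codes (2 = ping failed, 3 = upload speed too low), so a wrapper or cron mail hook can report what happened. A sketch with a stand-in function in place of the real script call:

```shell
# run_backup is a stand-in for ~/Backups/ACloudbackup.sh; here it
# simulates the "upload speed too low" case by returning 3.
run_backup() { return 3; }

run_backup
case $? in
  0) STATUS="backup finished" ;;
  2) STATUS="backup host unreachable" ;;
  3) STATUS="upload speed below minimum" ;;
  *) STATUS="unexpected error" ;;
esac
echo "$STATUS"   # prints "upload speed below minimum"
```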
Some working example include/exclude (filter) files:
~/Backups/ToBackupETC.txt
# Backup include (+) exclude (-) file
# Dirs
- /pacman.d/gnupg/S.**
+ /**
~/Backups/ToBackupROOT.txt
# Backup include (+) exclude (-) file
# Dirs
- /.local/share/Trash/**
- /.cache/**
- *.avi
- *.iso
- *.img
- *.mp4
- *.mkv
- *.ts
+ /**
~/Backups/ToBackupUSR.txt
# Backup include (+) exclude (-) file
# Dirs
+ /local/**
~/Backups/ToBackupHOME.txt
# Backup include (+) exclude (-) file
# Dirs
- /user/.VBox/**.vdi
- /user/Backups/ACloudbackup.sh.log
- /user/Backups/Restore/**
- /user/.cache/**
- /user/.config/**/GPUCache/**
- /user/.config/**/GrShaderCache/**
- /user/.config/Keybase/Cache/**
- /user/.config/BraveSoftware/Brave-Browser/BraveWallet/**.png
- /user/.config/BraveSoftware/Brave-Browser/component_crx_cache/**
- /user/.config/chromium/**.log
- /user/.config/chromium/**.LOG
- /user/.config/Wire/Cache/**
- /user/.config/Wire/Partitions/**
- /user/.thunderbird/*.default/ImapMail/**.msf
- /user/.thunderbird/*.default/global-messages-db.sqlite
- /user/.local/share/Trash/**
- /user/.openjfx/cache/**
- /user/.xsession-error*
- /user/.npm/**
- /user/Downloads/**.crdownload
- /user/Downloads/**.rar
- /user/Downloads/**.zip
- /user/Downloads/**.mp3
- /user/Downloads/**.m4a
- /user/Downloads/**.m4b
- /user/Downloads/**.!qB
- /user/Downloads/**.tar.gz
- /user/Downloads/**.tar.xz
- *.avi
- *.iso
- *.img
- *.mp4
- *.mkv
- *.ts
+ /**
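Before trusting a new filter file, a cheap sanity check helps: every non-blank line should start with '+', '-' or '#'. A sketch (it writes a small demo file to /tmp rather than touching your real ones):

```shell
# Write a throwaway demo filter file and check its syntax.
FILTER=/tmp/ToBackupDEMO.txt
printf '%s\n' '# Dirs' '- /pacman.d/gnupg/S.**' '+ /**' > "$FILTER"

# grep -v selects lines NOT starting with -, + or # and not empty
if grep -qvE '^([-+#]|$)' "$FILTER"; then
    RESULT="suspicious lines in $FILTER"
else
    RESULT="filter file looks OK"
fi
echo "$RESULT"   # prints "filter file looks OK"
```

The authoritative test is rclone itself: run the sync with --dry-run and the --filter-from file and inspect what it would transfer.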
While testing the script, you can follow its progress in the log with:
tail -f ~/Backups/ACloudbackup.sh.log
Have fun.