Encryption problem with rclone

Someone recommended rclone to me, and I am using the rclone crypt remote to store encrypted backup files (in .bin format).

The problem is that these encrypted files, as far as I understand, can only be decrypted using rclone.

Is it somehow possible to use rclone to upload backups to the cloud encrypted with GPG, or something else with tools readily available on every OS, so the backup can be downloaded on any OS and decrypted locally to view the contents?

That is correct…

However, rclone is available for Windows, macOS, Linux, FreeBSD, etc. There is also an Android version. I think the only major platform missing is iOS.

You can encrypt your backups with gpg or some other tool before uploading.
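
For example, something along these lines works on any system with GnuPG installed (the remote name, file name, and key are just placeholders):

# encrypt to your own public key before uploading
gpg --encrypt --recipient you@example.com --output backup.tar.gz.gpg backup.tar.gz
rclone copy backup.tar.gz.gpg remote:backups

# later, on any machine that holds your private key
rclone copy remote:backups/backup.tar.gz.gpg .
gpg --decrypt --output backup.tar.gz backup.tar.gz.gpg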

I am not sure anyone can help me with this here, but I will ask anyway.
My requirement is a script that uses rclone to back up selected folders and files, plus an SQL dump, to Azure or AWS cloud storage via a daily cron job.

A coder gave me the script below, but I don't know whether the code is correct (or missing something), or how to add GPG encryption to it. Can someone help me here?

#!/usr/bin/env bash
set -e

container=backup1                                                  # container path on Azure Blob
archive_file=/tmp/backup_$(date '+%Y-%m-%d_%H%M%S').tar.gz   # temporary archive file
backend="azure:${container}"
sql_dump_file=/root/dump.sql

export RCLONE_CONFIG_AZURE_TYPE=azureblob
export RCLONE_CONFIG_AZURE_ACCOUNT=name
export RCLONE_CONFIG_AZURE_KEY=key

mysqldump --quick --single-transaction --all-databases > "${sql_dump_file}"

tar -czf "${archive_file}" -C / var/www root   # files/dirs (relative to /) to include in the backup
rclone copy "${archive_file}" "${backend}"     # upload the archive into the container
rm "${archive_file}"
rm "${sql_dump_file}" # rclone ...

What you want to do is encrypt the file with the public key so only you can decrypt it with the private key.

I found some blog posts with more info
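
To make that concrete for the script you posted: you could encrypt the archive right after creating it and upload the .gpg file instead of the plain one, roughly like this (YOUR_KEY_ID is a placeholder for your own public key; untested against your exact setup):

# encrypt the archive to your public key; only the matching private key can decrypt it
gpg --encrypt --recipient YOUR_KEY_ID --output "${archive_file}.gpg" "${archive_file}"

# upload the encrypted archive instead of the plain one
rclone copy "${archive_file}.gpg" "${backend}"

# remove both local copies once the upload succeeds
rm "${archive_file}" "${archive_file}.gpg"

For the daily run, a crontab entry along these lines should do (the script path is just an example):

0 2 * * * /usr/local/bin/backup.sh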

I am unable to write in bash. Does this community have any freelance bash script coders?

I know this is a long time after @goliath asked this question, but I just joined the forum and found it after searching for “gpg encrypt”.

I ran into the same issue and started developing a script for encrypted backups with Backblaze B2. Pretty much the only part that should change is how you sync your backup afterwards, so I’ll throw it up here, both for comment and in the hope that it helps you.

#!/usr/bin/env bash

gpgopts=( "--no-encrypt-to" "--yes" "--quiet" "--compress-algo=none" )
gpgids=()
datadir="${XDG_DATA_DIR:-${HOME}/.local/share}/bkup"
cachedir="${XDG_CACHE_DIR:-${HOME}/.cache/bkup}"
dirpath=""
salt=""
backupbucket="<bucket name here if you want to sync automatically>"
compress=1

shopt -s nullglob
set -o pipefail

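# Generate a random 32-character hex salt from /dev/urandom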
gen_salt()
{
    tr -cd 'a-f0-9' < /dev/urandom | head -c 32
}

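# SHA-1 of the data directory path (relative to $HOME) plus the salt;
# used to name the archive pieces and the remote folder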
get_checksum()
{
    sha1sum <<< "$(realpath --relative-to="${HOME}" "${datadir}")${salt}" | cut -d' ' -f1
}

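# Encrypt stdin to the given file for every recipient key (helper, not used below)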
encrypt()
{
    gpg -e "${gpgids[@]/#/-r }" -o "$1"
}

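# Create the data and cache directories, then either generate a new salt and
# store it (and the data directory path) encrypted as .salt.gpg / .name.gpg,
# or decrypt the existing salt on later runs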
make_repository()
{
    mkdir -p "${datadir}"
    mkdir -p "${cachedir}"
    if [ ! -e "${datadir}/.salt.gpg" ]
    then
	salt="$(gen_salt)"
	gpg -e "${gpgids[@]/#/-r }" -o "${datadir}/.salt.gpg" "${gpgopts[@]}" <<< "${salt}"
	ln "${datadir}/.salt.gpg" "${cachedir}/.salt.gpg"
    else
	salt=$(gpg -d "${gpgopts[@]}" "${datadir}/.salt.gpg")
    fi
    if [ ! -e "${datadir}/.name.gpg" ]
    then
	gpg -e "${gpgids[@]/#/-r }" -o "${datadir}/.name.gpg" "${gpgopts[@]}" <<< "$(realpath --relative-to="${HOME}" "${datadir}")"
	ln "${datadir}/.name.gpg" "${cachedir}/.name.gpg"
    fi
}

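# First argument: the directory to back up; remaining arguments: GPG recipient key IDs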
bkup_init()
{
    local dir="$1"
    shift
    for i in "$@"
    do
	gpgids+=("$i")
    done
    datadir="${datadir}/$(realpath --relative-to="${HOME}" "$dir")"
    cachedir="${cachedir}/$(realpath --relative-to="${HOME}" "$dir")"
    dirpath="$dir"
    make_repository
}

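# Tar the directory, optionally compress with zstd, and split the stream into
# chunks, encrypting each chunk with gpg via split's --filter; the options and
# recipients are flattened into strings and exported so the filter subshell can expand them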
bkup_backup()
{
    filename="$(get_checksum)"
    gpgidss="${gpgids[@]/#/-r }"
    gpgoptss="${gpgopts[@]}"
    export -- gpgidss
    export -- filename
    export -- gpgoptss
    if [ "$compress" == 1 ]
    then
	tar -cp "${dirpath}" | zstd -T8 -19 | split -a 6 -b 50M -d --filter='gpg ${gpgoptss} -e ${gpgidss} -o "${FILE}".gpg' - "${cachedir}/${filename}".tar.zst.
    else
	tar -cp "${dirpath}" | split -a 6 -b 100M -d --filter='gpg ${gpgoptss} -e ${gpgidss} -o "${FILE}".gpg' - "${cachedir}/${filename}".tar.
    fi
}

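# Sync the cache directory to the B2 bucket under the checksum-derived folder name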
bkup_sync()
{
    rclone sync -P "${cachedir}" B2:"${backupbucket}/$(get_checksum)"
}

bkup_init "$@"
bkup_backup
# bkup_sync
echo "Backup done!"

I have an example bkup_sync function, but as you might be able to tell, I've commented out the call to it for now, since syncing can take a long time. .salt.gpg stores the unique salt, and .name.gpg stores the path (relative to $HOME) of the data directory (so ~/.local/share/bkup/<whatever> in the default case).

To give you an example of how this is called, I might run something like:

bkup .config '3DF33DB92735EDAFA847FF74EA24DF493F2BDC3C!' '906662B4055AFB85DC797614D04E3D0A14252E37!'

which would do the following:

  • Create a directory ~/.local/share/bkup/.config/ with .salt.gpg and .name.gpg.
  • Create a directory ~/.cache/bkup/.config/ with .salt.gpg, .name.gpg, and <sha1sum>.tar.zst.000000.gpg through <sha1sum>.tar.zst.nnnnnn.gpg.
  • Encrypt all of the aforementioned GPG files with the (sub)keys 3DF33DB92735EDAFA847FF74EA24DF493F2BDC3C and 906662B4055AFB85DC797614D04E3D0A14252E37.

Running rclone sync ~/.cache/bkup/.config/ <backend>:/path/to/folder should do the trick for uploading.
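
And for completeness, restoring on another machine would look roughly like this for the compressed case (bucket name and checksum are placeholders; you need your private key and zstd available):

# download the encrypted pieces for one backup
rclone copy B2:"<bucket>/<sha1sum>" .

# decrypt each piece in order, reassemble the zstd stream, and unpack the tar
for f in <sha1sum>.tar.zst.*.gpg; do gpg -d "$f"; done | zstd -d | tar -xp

# .name.gpg tells you which data directory the pieces belong to
gpg -d .name.gpg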

Hope this helps, and let me know if something’s confusing here!

Please don’t necro bump old topics.

Thanks.