I got nervous about storing all of my credentials and encryption keys on a remote server and decided to encrypt my rclone config file. At boot time I interactively mount an encfs volume at /home, so if the machine ever loses power everything is, in theory, safe; still, as an exercise in paranoia I wanted to protect against someone with physical access to the server.
Encrypting the config file created a bit of a problem, because I need rclone to be able to run from scripts and have systemd keep an endpoint mounted, but storing the password on the server would defeat the purpose of encrypting the config file.
The server is running Ubuntu, so one option I considered was writing the password to /dev/shm, but I would imagine that someone with physical access to the machine could easily read that file and then have free run of my encrypted files stored in the cloud. I already use gpg-agent for SSH, but I have found it to be a royal PITA to get working properly in scripts, so I don't consider that a viable option. Ditto for PAM-based solutions. Those are also difficult to roll up into the simple deployment scripts I use when I migrate servers.
The solution I landed on was to have the scripts set the RCLONE_CONFIG_PASS environment variable by fetching the password from my home server over an HTTPS connection that only accepts requests from the remote server's IP address.
In response to any suspicious activity on the remote server, I can then easily shut off that URL, denying it access to the password and, in theory, protecting the contents of my config file.
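For reference, the home end of this doesn't need much. Assuming something like nginx (illustrative only; any web server with IP allow-listing would work, and 203.0.113.10 stands in for the remote server's address), the allow-list is just a few lines:

```nginx
# Illustrative sketch -- server software, path, and IP are placeholders.
location = /rclone-pass {
    allow 203.0.113.10;   # the remote server's IP
    deny  all;            # everyone else gets 403

    # Serve the password as plain text; keep tight permissions on the
    # file itself as well.
    default_type text/plain;
    alias /etc/nginx/secrets/rclone-pass.txt;
}
```

Shutting off access is then just a matter of removing the `allow` line (or the whole location) and reloading.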
This solution has worked very well so far. It is relatively simple and portable, but it is also a bit cumbersome because I have to wrap everything rclone-related in a shell script that fetches the password and sets the variable.
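The wrapper is essentially the following (the URL is a placeholder for my home endpoint; in my real script I abort outright when the fetch fails):

```shell
#!/bin/sh
# Sketch of the wrapper script. The URL is a placeholder for the
# home-server endpoint, which only answers this machine's IP.
RCLONE_CONFIG_PASS="$(curl -fsS "https://home.example.com/rclone-pass")" || {
    # In the real script I bail out here; an empty password simply
    # means rclone cannot decrypt the config and fails.
    echo "warning: could not fetch rclone config password" >&2
    RCLONE_CONFIG_PASS=""
}
export RCLONE_CONFIG_PASS
```

The script then execs the real command, e.g. `exec rclone "$@"`, so the password lives in the environment only for that one process.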
Would it be possible to add a switch like `--rclone-config-pass-url` that lets rclone fetch the password directly? Or is there, perhaps, an easier way of separating the config password from the server for non-interactive tasks?