What's the recommended way of mounting in Linux (Desktop and/or Server)

So I seriously love rclone and I'm using it on my self-hosted servers.
Until now I've used rclone mounts as systemd services, and that worked perfectly fine.

But now I want to configure an encrypted rclone mount on a Linux desktop system (my parents' notebook) for backups (probably with PikaBackup installed via Flatpak).

The storage could be mounted on demand (i.e. when PikaBackup tries to access it) or all the time. But it should be user-writable and available without any manual steps when the backup runs.

Therefore I'm curious what kind of mount variant is suggested in that case.
It should automatically reconnect if necessary (e.g. after connection issues or when the notebook wakes up from suspend).

I saw that there are possibilities to use fstab entries now, and some other users are using automount or this wrapper script. Or maybe there's even a way to make it visible via GVfs or so?
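(For context: the fstab variant I mean uses rclone's mount helper mode, which as far as I understand from the docs needs a one-time symlink so mount(8) can find rclone. Roughly like this, with placeholder paths and remote names:)

```
# one-time setup: let mount(8) find rclone as a mount helper
sudo ln -s /usr/bin/rclone /sbin/mount.rclone

# /etc/fstab entry (mount-helper options use underscores instead of dashes)
cloud_name:remote/path  /some/local/path/cloud_name  rclone  rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/path/to/rclone.conf,cache_dir=/var/cache/rclone  0  0
```

With noauto plus x-systemd.automount, systemd should only mount it the first time something touches the path, which sounds close to the on-demand behavior I'm after.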

So I'm curious about your suggestions.
Thank you in advance!

PS: this is how my service file looks on my self-hosted server (I think it was based on the state of this wiki page back then). I guess I should add --bwlimit for the desktop system though, to avoid running into bandwidth issues.

[Unit]
Description=RClone Mount Service (cloud_name)

[Service]
Type=notify
# or simple, dbus
ExecStart=/some/bin/directory/rclone mount \
        --config=/path/to/configuration \
        --no-modtime \
        --allow-non-empty \
        --log-file /some/log/directory/rclone_cloud_name.log \
        --log-level DEBUG \
        cloud_name:remote/path /some/local/path/cloud_name
ExecStop=/bin/fusermount -u /some/local/path/cloud_name
Restart=always
# or on-failure

# hardening

I cannot give you any Linux-specific advice (I am primarily on Windows), but I would also consider how to best protect the backup from a ransomware attack.

Where would the ransomware come from?
All applications on the desktop system are installed as containerized applications (simple installations from the official Flatpak and Snapcraft repositories), and the desktop system never leaves the home WiFi behind a NAT.

And the network storage is from a professional, paid, and trustworthy hoster.

No mail, no browser, nobody clicking/doing something wrong by accident?
(I can't speak for Linux, but these are the most common sources on Windows)

Well, luckily they don't execute downloaded binaries or anything like that.
All applications come either from the Flatpak or Snapcraft package managers (as do their updates, obviously).

And a while ago the GNOME folks removed the ability to execute binary files via Nautilus (the file manager in GNOME).
So it's not like on Windows, where you could double-click an executable and something weird happens.

So to execute malware, a flaw in an application would have to be exploited. And the damage is pretty limited, since applications run within restricted containers (Flatpak and Snap).
The browsers have access to the Downloads directory (and I think the email client does as well), but it would be totally fine if the Downloads directory got messed up, and maybe the respective app data too.

But other than that, it is probably not trivial to break out of the containers and damage the whole system. At least that's my understanding of containerized applications.

So I'm probably missing something but I guess chances of catching ransomware are pretty low.
But I'm very open if you have some recommendations.

And for now I'm curious what's the recommended way of having an rclone mount on a Linux desktop.

It all depends on how it's being used.

rclone mount remote:

Is probably the best place to start and change things based on the use case.

Most folks use systemd services.
Some folks use screen/leave it running.
Some folks use a --daemon script.


Sure looks like you are doing a good job of keeping the risk to a minimum, but nobody/nothing is perfect.

My point is that there is an inherent risk that somebody (malicious) gets user or root permissions on your box, so make it difficult/impossible for them to access/find your backup - especially for an automated ransomware script.

Things I would consider in your situation:

  • Only mount while the backup is running
  • Make it impossible to access your remotes just by doing a simple scan for rclone.conf - make it encrypted and use a script/command to unlock it (encryption password in a keychain or similar)
  • Advanced: make it impossible to modify/delete the backup even if rclone.conf is compromised.
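For the second bullet, a sketch of what that could look like on a GNOME desktop (assuming libsecret's secret-tool is available; remote and attribute names are made up):

```
# encrypt the config once via the interactive menu
# (rclone config -> "s) Set configuration password")
rclone config

# store the chosen password in the desktop keyring;
# secret-tool reads the secret from stdin
secret-tool store --label="rclone config password" service rclone key config-pass

# every rclone call can now unlock the config on demand,
# so the password never sits in a plain file
rclone mount cloud_name: /some/local/path \
    --password-command "secret-tool lookup service rclone key config-pass"
```

That way a simple scan for the config file only finds an encrypted blob, and unlocking it requires the logged-in user's keyring.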

hi, welcome to the forum,

what i do is very simple, and works on windows and linux.
when creating backups, it's best to create the rclone mount and net shares on-the-fly.
less chance of ransomware and other issues such as human error.

any good backup program will allow for the execution of scripts before and after the backup runs.
and if not, just write a simple script:

  1. start the rclone mount, either hard code it into the script, or via systemd.
  2. run the pikabackup.
  3. kill the mount
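the three steps above could look something like this (remote name, mountpoint and backup command are placeholders, and i have not checked whether pikabackup can be driven from the command line):

```shell
#!/bin/sh
# sketch of the mount -> backup -> unmount wrapper.
# DRY_RUN=1 (the default here) only prints the commands;
# set DRY_RUN=0 to actually run them.
set -eu

REMOTE="cloud_name:remote/path"      # placeholder: your crypt remote
MOUNTPOINT="${HOME}/backup-mount"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

mkdir -p "$MOUNTPOINT"

# 1. start the rclone mount; --daemon returns once the mount is ready
run rclone mount "$REMOTE" "$MOUNTPOINT" --daemon --vfs-cache-mode writes

# 2. run the backup (replace with however pikabackup is started)
run flatpak run org.gnome.World.PikaBackup

# 3. kill the mount
run fusermount -u "$MOUNTPOINT"
```

note that rclone mount with --daemon only returns after the mount is ready, so the backup step does not race against the mount coming up.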

with backup files, i do not copy directly to the cloud via rclone mount.
instead, i backup to a net share on a server, or to a local dest.
and then rclone copy/move the backup files to the cloud.

also, a lot of backup programs that create incremental backups need access to older backup files and config files.
if using an rclone mount, and pikabackup needs to access older files, rclone would have to download them to the local vfs file cache.
might want to test for that.
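if you do end up keeping a mount for the backup, flags along these lines keep that vfs cache from growing without bound (the values are just a starting point, tune for the notebook):

```
rclone mount cloud_name:remote/path /some/local/path/cloud_name \
    --vfs-cache-mode full \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 24h
```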

hope that is helpful.

The hoster that I'm using has integrated file versioning, so we could roll back even if malware encrypted the backup.
This really isn't the main concern here.

Maybe I didn't explain it properly but I'm rather curious what's the recommended type of mount.

So for instance: what are possible advantages or disadvantages of mounting with systemd, fstab or automount (or maybe even other alternatives)?

also, a lot of backup programs that create incremental backups need access to older backup files and config files.
if using an rclone mount, and pikabackup needs to access older files, rclone would have to download them to the local vfs file cache.
might want to test for that.

That's an interesting point.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.