How to keep vfs cache on disk indefinitely

I'm using rclone to mount a Wasabi S3 bucket for offsite backups with Veeam; the rclone mount serves as the Veeam repository. The disk I'm using for caching is 27TB and can be expanded if needed.

I'd like to always keep a full cache locally on the repository server and upload any changes to Wasabi, similar to how the Google Drive client works: local changes stay local and are mirrored to the remote. This should keep Veeam fast, since it won't have to download files from the cloud to read them.

The wasabi remote is mounted at: /data/wasabi/bucket/veeam-backups/repo

Below is my systemd unit file:

[Unit]
Description=Wasabi S3 (rclone)
AssertPathIsDirectory=/data/wasabi/bucket/veeam-backups/repo
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount \
        --config /root/.config/rclone/rclone.conf \
        --allow-other \
        --cache-tmp-upload-path /data/wasabi/scratch/upload/veeam-backups \
        --cache-dir /data/wasabi/scratch/cache/veeam-backups \
        --cache-chunk-path /data/wasabi/scratch/chunks/veeam-backups \
        --cache-db-path /data/wasabi/scratch/veeam-backups-cache-db \
        --cache-workers 8 \
        --vfs-cache-mode full \
        --checkers 16 \
        -vv \
        wasabi:veeam-backups/repo /data/wasabi/bucket/veeam-backups/repo

ExecStop=/bin/fusermount -u /data/wasabi/bucket/veeam-backups/repo
Restart=always
RestartSec=10

[Install]
WantedBy=default.target

I was wondering if I could get the desired result through "--vfs-cache-max-age" or some other method?

Thanks in advance for looking at this.

There isn't an "infinite" setting for --vfs-cache-max-age, but you could use --vfs-cache-max-age 1000000h or something like that.
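
For example (untested sketch, the value is arbitrary), adding the flag to the ExecStart in your unit would look something like:

        --vfs-cache-max-age 1000000h \

1000000h is a bit over 100 years, which is effectively "never expire".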


Thanks, I set the cache age to 10 years. If I still have that server running in 10 years, I have other issues.


If you set the VFS cache max age and max size to arbitrarily large values, then files should never expire from the cache.
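
As a concrete sketch (the exact values here are assumptions of mine, not something I've tested against your setup), the relevant flags in the mount command would be:

        --vfs-cache-max-age 876000h \
        --vfs-cache-max-size 25T \

876000h is roughly 100 years, and 25T assumes you want to leave a bit of headroom on the 27TB cache disk. If I remember right, --vfs-cache-max-size defaults to off (unlimited), so you may only need to set it if you want to cap cache usage.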

That said, if you are going for a system where you want all the files locally and just a synced backup in the cloud, I would forgo the mount completely, as it is not necessary and just adds complication and overhead.

I recommend you just run a sync script on a timer (cron or Task Scheduler) that syncs the local files to the cloud every hour (or nightly, or whatever seems appropriate). That will ensure your backup is always 1:1 with your local files, and your local files will not have to deal with any of the drawbacks or complications of a mount.

I would also recommend using the --track-renames flag on the sync command so rclone can deal intelligently with files getting moved around on the local storage. This lets rclone server-side move or rename files that were merely reorganized rather than re-uploading every file whose path changed.

Example command:
rclone sync C:\MySyncableFiles\ MyWasabiRemote: --fast-list --track-renames
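
Since you're already driving the mount with systemd, a systemd timer is another way to run this on a schedule. Rough, untested sketch below; the unit names, the local path /data/veeam-repo, and the log file location are placeholders I'm assuming, so adjust them to wherever your local Veeam repo actually lives:

# /etc/systemd/system/rclone-sync.service  (hypothetical name)
[Unit]
Description=Sync local Veeam repo to Wasabi (rclone)

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone sync /data/veeam-repo wasabi:veeam-backups/repo \
        --config /root/.config/rclone/rclone.conf \
        --fast-list \
        --track-renames \
        --log-file /var/log/rclone-sync.log

# /etc/systemd/system/rclone-sync.timer  (hypothetical name)
[Unit]
Description=Hourly rclone sync to Wasabi

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

Enable it with: systemctl enable --now rclone-sync.timer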
