Limit VFS cache directory while uploading

What is the problem you are having with rclone?

I run a local WebDAV server with rclone serve webdav, backed by a pcloud remote, inside a Docker container.
The container has the rclone.conf file mounted read-only at /config/rclone/rclone.conf and the /root/.cache/rclone directory mounted read-write to a local folder.

The environment variables in use are:

RCLONE_VFS_CACHE_MAX_SIZE=2G
RCLONE_VFS_CACHE_MODE	
RCLONE_LOG_LEVEL=DEBUG

(I have removed the env variables regarding web auth.)
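
For context, the container is started roughly along these lines (the host paths, image tag and port below are placeholders rather than my exact setup, and "writes" only stands in for whatever cache mode the variable above is actually set to):

docker run -d \
  -v /path/on/nas/rclone.conf:/config/rclone/rclone.conf:ro \
  -v /path/on/nas/rclone-cache:/root/.cache/rclone \
  -e RCLONE_VFS_CACHE_MAX_SIZE=2G \
  -e RCLONE_VFS_CACHE_MODE=writes \
  -e RCLONE_LOG_LEVEL=DEBUG \
  -p 8080:8080 \
  rclone/rclone serve webdav pcloud:/ --addr :8080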

I am uploading backups using Hyper Backup running on a Synology NAS.
Due to space concerns, I want to limit the maximum size of the mounted /root/.cache/rclone directory.

Is there an option in rclone which sets a maximum "virtual" size on the directory in question, essentially limiting the files rclone can put/read there?

My internet is terribly slow, and I fear that a backup compresses the data and writes it to the cache dir "too fast" while the upload still takes a good while, and that the database gets corrupted that way.

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.2
- os/version: alpine 3.19.0 (64 bit)
- os/kernel: 4.4.302+ (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.6
- go/linking: static
- go/tags: none
2024/03/11 20:16:53 DEBUG : rclone: Version "v1.65.2" finishing with parameters ["rclone" "version"]

Which cloud storage system are you using? (eg Google Drive)

pcloud EU

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone serve webdav pcloud:/

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

2024/03/11 20:18:18 DEBUG : Using config file from "/config/rclone/rclone.conf"

[pcloud1]
type = pcloud
hostname = eapi.pcloud.com
token = XXX

[pcloud2]
type = pcloud
hostname = eapi.pcloud.com
token = XXX
### Double check the config for sensitive info before posting publicly
2024/03/11 20:18:18 DEBUG : rclone: Version "v1.65.2" finishing with parameters ["rclone" "config" "redacted"]

welcome to the forum,

how about this flag

--vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
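
for example, add it to your serve command (10G is just an example value):

rclone serve webdav pcloud:/ --vfs-cache-min-free-space 10G

or, since you already use environment variables, set RCLONE_VFS_CACHE_MIN_FREE_SPACE=10G instead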

Hello and thanks!

I read that as "minimum" free space, not as a "maximum" cache size. Is this a wording issue?

But I will test this option the next time I have a chance.

well, if you reserve enough free space, that should prevent a corrupted database.

if the synbox has 100TiB of storage and you want rclone to max out at 80TiB, then set
--vfs-cache-min-free-space=20T

another option: synology supports quotas.

with your setup, once a file is uploaded, it stays in the cache for 1 hour.
you should reduce that using something like --vfs-cache-max-age=5m
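
for example (5m is just a suggestion, tune it to your upload speed):

rclone serve webdav pcloud:/ --vfs-cache-max-age 5m

or RCLONE_VFS_CACHE_MAX_AGE=5m next to your other env variables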

Yes but with caveats:

If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

So it won't be a hard limit. If you set the limit to 1GiB but write 10GiB to the cache, it will have to use 10GiB.
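
If you keep the cache anyway, you can at least have that check run more often by shortening the poll interval (30s is just an example; open files still cannot be evicted):

rclone serve webdav pcloud:/ --vfs-cache-poll-interval 30s

or RCLONE_VFS_CACHE_POLL_INTERVAL=30s as an env variable.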

IMO it would be much better not to try to fake anything with the rclone cache as a middle layer if you cannot afford the disk space. For Hyper Backup, try serving your remote without any cache, or use different software like rustic/kopia for your backups.
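
Serving without any cache simply means not setting RCLONE_VFS_CACHE_MODE at all, or setting it to off explicitly, which is the default:

rclone serve webdav pcloud:/ --vfs-cache-mode off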

Hello,

I tried serving without any cache at first, and this is now the second try, with a cache.
That first attempt eventually corrupted the .hbk database and wouldn't allow any more backups,
thus "destroying" a big backup already (well, it was still good for restoration purposes).

I'll see if I can maybe Linux-mount a folder on my NAS volumes and make it only 100GB - that way rclone cannot exceed it and "sees" that limit while reading/writing/accepting files via WebDAV - in theory? (rough sketch at the end of this post)

(The only thing to test beforehand with a smaller size is how Hyper Backup handles that: "Oh, WebDAV doesn't accept any more files - do I wait until it does again, or stop right here and now?")

(Hyper Backup splits files, after compression, into 50MB chunks, plus some extra metadata that grows bigger and bigger as well.)
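
Something like this is what I have in mind, untested so far (the paths are placeholders):

truncate -s 100G /volume1/docker/rclone-cache.img
mkfs.ext4 -F /volume1/docker/rclone-cache.img
mkdir -p /volume1/docker/rclone-cache
mount -o loop /volume1/docker/rclone-cache.img /volume1/docker/rclone-cache

and then bind-mount /volume1/docker/rclone-cache into the container at /root/.cache/rclone, so rclone hits a real "disk full" instead of filling the whole volume.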
