Rclone cache set unlimited time?

I use the following settings:

--attr-timeout=1h \

I have enough disk space to cache 90-100% of the backend so I would rather never expire things from cache, unless disk space is needed.

I have seen rclone evict things from the cache even when the disk is 50% free, just because of old age.

Are objects evicted from cache based on age regardless of last access time? Like it will expire at 2190h even if it was used 10 minutes ago?

I also aim for maximum read speeds and low latency. Is there anything I could do differently?


post a debug log that demonstrates the problem and shows which file has it.

That is the time of last use.

You can write --vfs-cache-max-age=100y if you really want!

y is not a valid unit suffix.

--- as per rclone docs,
Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

--- from a debug log
Fatal error: invalid argument "100y" for "--vfs-cache-max-age" flag: time: unknown unit "y" in duration "100y"


Just playing with only "9"s, I could set "999999h" (~114 years) but not "9999999h". But I think 114 years should be enough!

funny, in the past, i played around and ended up with the same value.

Can you tell me how rclone would behave with such large cache times?

I would like to know the exact behavior: what will it evict from the cache first if disk space is needed, and anything else you can share.

Also, what is the best way to evict an entire folder from the cache?

Stopping rclone and deleting the folder? Which folders should I delete?

--- rclone will only purge a file in the vfs file cache after --vfs-cache-max-age expires.

--- if the total size of all files in the vfs file cache exceeds --vfs-cache-max-size, then rclone will purge files.
to quote ncw,
"When the cache fills up rclone make a list of all the files, sorts them by last accessed then deletes the least recently used until there is enough storage space."

note: rclone will never purge an in-use file from the vfs file cache

How can I evict a big folder from the cache, with rclone running?

as far as i know, that is not possible.

tho, imho, if you did try to delete the files,
then rclone should be robust enough to handle that.

so, as a test, i would delete a dir from the vfs dir cache and see what happens in the debug log.

Deleting large directories is not fast, and if I want to do it with rclone still running, it's likely I would delete files that are in use.

I'm trying to have no downtime here, as even restarting rclone with a big cache is costly (sometimes 5min+). I also have systemd set to reboot if rclone crashes, so this could trigger a reboot... that could be disabled of course, but I don't know if just systemctl daemon-reload is enough for that.

Rclone should have a way to purge folders from the cache, using async deletes with the full path to each file/folder to make it quick. I searched for an rclone rc command for this but it seems there is none.

@ncw what is really the recommended way to delete a very large folder from the cache quickly with rclone running, with minimal chance of causing rclone problems?

I know how to fix this now but haven't had the chance to implement yet!

An RC call for this would be ideal.

If you delete files from the cache, even in-use ones, rclone should cope. Just don't delete files that haven't been uploaded yet.

Maybe you'd like to sponsor development of one or other of those features?


ok i'll send a message

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.