Evict big files in cache instead of old ones

What is the problem you are having with rclone?

I have a mounted remote with a lot of small files and some large ones. Is it possible to keep the small files cached and discard the large ones first? Or to have a different --vfs-cache-max-age depending on file size?
Or to change the eviction behavior of --vfs-cache-max-size or --vfs-cache-min-free-space? The default is that "rclone will start with files that haven't been accessed for the longest". If a large file is written to the cache, all the small files get deleted.
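To make the difference concrete, here is a minimal sketch (not rclone's actual code; the Item struct and file names are made up) contrasting the default "least recently accessed first" order with the size-first order I am asking about:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// Item is a hypothetical cache entry used only to illustrate eviction order.
type Item struct {
	Name     string
	Size     int64     // bytes on disk
	LastUsed time.Time // last access time
}

func main() {
	now := time.Now()
	items := []Item{
		{"small-old.txt", 4 << 10, now.Add(-48 * time.Hour)},
		{"small-new.txt", 8 << 10, now.Add(-1 * time.Hour)},
		{"huge-new.iso", 8 << 30, now.Add(-2 * time.Hour)},
	}

	// Default behavior described in the docs: evict the least recently
	// accessed items first, regardless of size.
	byAge := append([]Item(nil), items...)
	sort.Slice(byAge, func(i, j int) bool { return byAge[i].LastUsed.Before(byAge[j].LastUsed) })
	fmt.Println("evict first (LRU):", byAge[0].Name) // small-old.txt

	// Desired behavior: evict the largest items first, so many small
	// files survive one big write.
	bySize := append([]Item(nil), items...)
	sort.Slice(bySize, func(i, j int) bool { return bySize[i].Size > bySize[j].Size })
	fmt.Println("evict first (size):", bySize[0].Name) // huge-new.iso
}
```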

Run the command 'rclone version' and share the full output of the command.

rclone v1.68.1

  • os/version: Microsoft Windows 11 Pro 23H2 (64 bit)
  • os/kernel: 10.0.22631.4317 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.23.1
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Microsoft OneDrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount Shares: S: --network-mode --rc --vfs-cache-mode full --cache-dir I:\rcloneCache --no-modtime --no-checksum --dir-cache-time 1w --poll-interval 1h --vfs-cache-max-age 2d --vfs-cache-poll-interval 1h --vfs-fast-fingerprint --vfs-cache-min-free-space 20Gi

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[Shares]
type = combine
upstreams = Prog=Prog: SAn=SAn:

[Prog]
type = onedrive
delta = true
client_id = XXX
client_secret = XXX
token = XXX
drive_id = XXX
drive_type = documentLibrary

[SAn]
type = onedrive
delta = true
client_id = XXX
client_secret = XXX
token = XXX
drive_id = XXX
drive_type = documentLibrary

Nope. It would require an optional, different cache-flushing strategy, and such a feature does not exist today.

@ncw did write a fix for VFS Cache: control caching of files by size · Issue #4110 · rclone/rclone · GitHub, but it never got merged: vfs: implement LRU-SP cache for more intelligent cache replacement - … · rclone/rclone@6f53463 · GitHub
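For context, LRU-SP ranks cache entries by a size- and popularity-adjusted score rather than pure recency. The sketch below is an assumption about the general LRU-SP scheme (score grows with size and idle time, shrinks with access count), not the code in that unmerged commit; the entry struct and field names are illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// entry is a hypothetical cache record; field names are illustrative only.
type entry struct {
	name     string
	size     int64     // bytes
	accesses int64     // how often the file has been read
	lastUsed time.Time // last access time
}

// lruSPScore returns an eviction score in the spirit of LRU-SP:
// large, rarely used, long-idle files score highest and go first.
func lruSPScore(e entry, now time.Time) float64 {
	idle := now.Sub(e.lastUsed).Seconds()
	if e.accesses < 1 {
		e.accesses = 1
	}
	return float64(e.size) * idle / float64(e.accesses)
}

func main() {
	now := time.Now()
	small := entry{"small.txt", 16 << 10, 10, now.Add(-24 * time.Hour)}
	big := entry{"big.iso", 8 << 30, 1, now.Add(-1 * time.Hour)}
	fmt.Printf("small: %.0f  big: %.0f\n", lruSPScore(small, now), lruSPScore(big, now))
	// The big, single-use file scores far higher and would be evicted
	// first, which is the behavior asked for in the question.
}
```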
