VFS mount cache not resetting when file system is full

The rclone VFS cache is not always resetting the cache directory when the file system runs out of space.

/usr/bin/rclone mount2 COS: /snapshots/mnt -o rw,nodev,noatime,suid --uid 64055 --gid 119 --allow-other --verbose --syslog --buffer-size=16M --vfs-read-chunk-size=4M --vfs-cache-mode=full --vfs-cache-max-size=3G --cache-read-retries 20 --dir-cache-time 30s --cache-dir=/rclone_cache

Note: /rclone_cache is a 4G tmpfs filesystem
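Since `--vfs-cache-max-size=3G` sits on a 4G tmpfs, the headroom between the two is tight; rclone only prunes the cache back down periodically, so transient usage can exceed the max size. A minimal sketch of the arithmetic (the sizes here are just the values from the mount command above):

```shell
# Assumed sizes from this setup: 4 GiB tmpfs, --vfs-cache-max-size=3G.
# Pruning is periodic, not instantaneous, so usage can briefly overshoot
# the max size; this shows how much headroom the tmpfs actually leaves.
tmpfs_kb=$((4 * 1024 * 1024))
cache_max_kb=$((3 * 1024 * 1024))
headroom_kb=$((tmpfs_kb - cache_max_kb))
echo "headroom: ${headroom_kb} KiB"   # prints "headroom: 1048576 KiB"
```

If file writes between prune passes can exceed that 1 GiB of headroom, the tmpfs fills before rclone trims the cache.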

rclone version
rclone v1.53.3-DEV

  • os/arch: linux/amd64
  • go version: go1.13.8

The rclone config contents with secrets removed.


region = us-south
location_constraint = us-standard
acl = private
endpoint =
env_auth = false
provider = COS
secret_access_key =
type = s3
access_key_id =

Right now we do not have much in the way of logging, as the problem is happening in our production environment where we do not have logging turned on.
Since this is a shared FUSE mount, it is also hard to isolate which access pattern is causing the cache not to flush.

The problem shows up as the mount returning I/O errors for any file access.

This is a read-only mount, so we are able to manually reset the cache with `rm -rf vfs vfsMeta`; until we manually clear the cache, the mount remains unusable.
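The manual workaround can be sketched as follows; it runs here against a scratch directory standing in for `/rclone_cache` (an assumption for illustration; in real use you would target the live cache dir, ideally while the mount is idle):

```shell
# Sketch of the manual cache reset described above. The scratch dir and
# the COS subfolder are assumptions; rclone keeps cached file data under
# vfs/ and its metadata under vfsMeta/ inside --cache-dir.
cache_dir="$(mktemp -d)"                     # stand-in for /rclone_cache
mkdir -p "$cache_dir/vfs/COS" "$cache_dir/vfsMeta/COS"

# The reset itself: remove both the data and metadata trees so rclone
# repopulates the cache from the remote on the next access.
rm -rf "$cache_dir/vfs" "$cache_dir/vfsMeta"

ls "$cache_dir" | wc -l                      # prints 0 after the reset
rm -rf "$cache_dir"                          # clean up the scratch dir
```

Removing only `vfs` without `vfsMeta` (or vice versa) can leave the cache inconsistent, which is why the workaround clears both.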

It's not a bug; it's a consequence of your settings / use case.

You didn't share a command/version so it's hard to guess.

rclone mount

That explains it there.