Rclone dir cache remains when mount drops

I have a rather unique issue and I'm not sure if it's intended behavior or not.

I have a mount with static, read-only content that doesn't change. To improve performance, I have rclone mount this to a new mount point with a high dir-cache time.

Rclone version 1.70.2 (this has happened with the last 3 or so versions as well).

```
rclone mount /mnt/src /mnt/dst \
  --uid=1000 --gid=1000 --umask=002 \
  --allow-other \
  --read-only \
  --timeout=1h \
  --dir-cache-time=120h \
  --attr-timeout 120h \
  --vfs-cache-max-age=4h \
  --vfs-cache-min-free-space=80G \
  --vfs-read-chunk-size-limit=128M \
  --vfs-read-chunk-size=8M \
  --vfs-fast-fingerprint \
  --vfs-cache-mode=full \
  --links \
  --cache-db-purge
```

This works great. Once in a blue moon, though, I need to restart the mount, or some network connectivity issue causes the mount to drop and, for whatever reason, it doesn't remount properly, so an `ls /mnt/dst` results in `Transport endpoint is not connected`.

To try and automate remediation of this, I have a script that fires every minute and looks for /mnt/dst/mounted.flag. If this file doesn't exist, it forces a `fusermount -uz /mnt/dst` and then remounts. To clarify, this is when the rclone mount at /mnt/dst drops, not the source.
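For reference, the watchdog is roughly this (a rough sketch; the systemd unit name is a placeholder, not something from my actual setup):

```shell
#!/bin/sh
# Rough sketch of the once-a-minute watchdog. check_mount only reports
# health so the logic is testable; remediate does the actual recovery.

check_mount() {
    # healthy if the flag file is visible through the mount
    if [ -e "$1/mounted.flag" ]; then
        echo "healthy"
    else
        echo "unhealthy"
    fi
}

remediate() {
    if [ "$(check_mount "$1")" = "unhealthy" ]; then
        fusermount -uz "$1"                      # lazy-unmount the dead mount
        systemctl restart rclone-mount.service   # placeholder unit name
    fi
}
```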

However, to my surprise, doing an `ls /mnt/dst/mounted.flag` returns the file listing even when an `ls` of the root, /mnt/dst, returns `Transport endpoint is not connected`. I'm guessing this is the dir cache staying alive?

I'm not sure how to reproduce this consistently (that is, make the mount fail while the rclone process is still running), otherwise I would provide those steps. I do know that if I run rclone as a systemd service with ExecStop doing `fusermount -uz /mnt/dst`, restarting the service sometimes works and sometimes yields this failed state.

So I guess my questions are twofold:

  1. Is it still returning the file because of dir-cache, and if so, is there a better way to do a health check against the mount?
  2. Is the systemd service restart failure a common/known issue and any known remedies?

Thanks in advance.

can you post the service file?


afaik, based on what you posted, that does nothing and can be removed?


can you post a rclone debug log?

Thank you for the reply. Based on this post:

I thought that flag would clear any VFS cache on the system when the mount starts. Is this not the case?

Wrt the debug log, I can try to capture one, but as I noted it's somewhat random and difficult to reproduce.

Systemd service:

```
[Unit]
Description=backup Daemon
After=multi-user.target
[Service]
Type=notify
ExecStart=/usr/bin/rclone mount /mnt/src /mnt/dst \
--uid=1000 --gid=1000 --umask=002 \
--allow-other \
--read-only \
--timeout=1h \
--dir-cache-time=120h \
--attr-timeout 120h \
--vfs-cache-max-age=4h \
--vfs-cache-min-free-space=80G \
--vfs-read-chunk-size-limit=128M \
--vfs-read-chunk-size=8M \
--vfs-fast-fingerprint \
--vfs-cache-mode=full \
--links \
--cache-db-purge
ExecStop=/bin/fusermount -uz /mnt/dst > /dev/null
TimeoutSec=120
Restart=always
KillMode=process
User=0
Group=0
[Install]
WantedBy=multi-user.target
```

Yes, but not for the VFS cache you are using. It is an old flag for the deprecated cache overlay remote.

the only way to clear files in the vfs file cache is to delete them using rm


well, there are two vfs caches, check out my summary of vfs caches

Got it. Thank you. But is it expected behavior for dir cache to remain and respond when the mount drops?

no. if you can show different, post the details...

Will do. What would be the best way to illustrate this, mount with debug logging to a file, then try to reproduce?

not sure what you mean by source and dest?

afaik, with rclone mount, there is not really a concept of source and dest.
rclone mounts a remote as a local directory.


not sure what you mean by drop?
as long as the rclone executable is running, so too are the caches.
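fwiw, if you want a health check that the caches cannot fool, probe the mount root instead of a flag file. as you saw, an ls of the root fails with Transport endpoint is not connected while a cached child entry can still resolve. a rough sketch, using your example mount point:

```shell
# Probe the mount root with a directory listing. A dead FUSE mount
# returns ENOTCONN ("Transport endpoint is not connected") here even
# when individual cached entries under it still resolve.
probe_mount() {
    ls "$1" >/dev/null 2>&1
}

if probe_mount /mnt/dst; then
    echo "mount ok"
else
    echo "mount dead, remount needed"
fi
```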