I have an unusual issue, and I'm not sure whether it's intended behavior.
I have a mount with static, read-only content. To improve performance, I use rclone to mount it at a second mount point with a high dir-cache time.
Rclone version 1.70.2 (this has happened with the last three or so versions as well):
```
rclone mount /mnt/src /mnt/dst \
  --uid=1000 --gid=1000 --umask=002 \
  --allow-other \
  --read-only \
  --timeout=1h \
  --dir-cache-time=120h \
  --attr-timeout=120h \
  --vfs-cache-max-age=4h \
  --vfs-cache-min-free-space=80G \
  --vfs-read-chunk-size-limit=128M \
  --vfs-read-chunk-size=8M \
  --vfs-fast-fingerprint \
  --vfs-cache-mode=full \
  --links \
  --cache-db-purge
```
This works great. Once in a blue moon, though, either I need to restart the mount or a network connectivity issue causes it to drop and, for whatever reason, it doesn't remount properly, so an ls /mnt/dst returns "Transport endpoint is not connected".
To automate remediation, I have a script that fires every minute and looks for /mnt/dst/mounted.flag. If that file doesn't exist, it forces a fusermount -uz /mnt/dst and then remounts. To clarify, this handles the rclone destination mount dropping, not the source.
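The script is essentially this (a sketch: the remount line is abridged to a few of the flags above, with --daemon added just to background it):

```bash
#!/usr/bin/env bash
# Fired by cron every minute. If the flag file has vanished, assume the
# mount is dead: lazy-unmount, then remount.
FLAG=/mnt/dst/mounted.flag

if ! ls "$FLAG" >/dev/null 2>&1; then
    fusermount -uz /mnt/dst
    # abridged: the real remount uses the full flag set above
    rclone mount /mnt/src /mnt/dst --read-only --allow-other --daemon
fi
```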
However, to my surprise, an ls /mnt/dst/mounted.flag returns the file listing even when an ls of the root, /mnt/dst, returns "Transport endpoint is not connected". I'm guessing this is the dir cache staying alive?
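If the dir cache is what's answering, then probing the root itself, rather than a path the kernel may still have cached, should catch the broken state. Swapping the test in the script above would look like this (again just a sketch):

```bash
# A dead FUSE mount errors with ENOTCONN ("Transport endpoint is not
# connected") on the mount point itself, even while cached child
# entries still resolve.
if ! ls /mnt/dst >/dev/null 2>&1; then
    fusermount -uz /mnt/dst
    # remount as above
fi
```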
I'm not sure how to reproduce this consistently (by that I mean making the mount fail while the rclone process is still running), otherwise I would provide those steps. I do know that if I run rclone as a systemd service, with ExecStop doing fusermount -uz /mnt/dst, restarting the service sometimes works and sometimes leaves it in this failed state.
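For reference, the unit looks roughly like this; the unit name, Type=notify, and the ordering targets are my own additions here, and the flag list is abridged:

```ini
# /etc/systemd/system/rclone-dst.service  (illustrative name)
[Unit]
Description=rclone read-only mirror of /mnt/src
Wants=network-online.target
After=network-online.target

[Service]
# rclone mount signals readiness to systemd, so Type=notify works here
Type=notify
ExecStart=/usr/bin/rclone mount /mnt/src /mnt/dst --read-only --allow-other --dir-cache-time=120h
ExecStop=/bin/fusermount -uz /mnt/dst
Restart=on-failure

[Install]
WantedBy=multi-user.target
```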
So I guess my questions are twofold:
- Is it still returning the file because of the dir cache, and if so, is there a better way to do a health check against the mount?
- Is the systemd service restart failure a common/known issue, and are there any known remedies?
Thanks in advance.