Didn’t want to post an issue on GitHub, but I’m having a dilemma with the new cache that I feel many will have once it’s released.
Currently, if you have a cache remote mounted (say, for Plex) and you’re actively getting new media (by whatever means), you can’t run an rclone copy or rclone move to copy/move those files to the cache remote (you’ll get an error that the DB is in use).
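For concreteness, here’s the kind of thing that fails. The remote names are made up for illustration: gcache: is a cache remote wrapping a plain gdrive: remote, and gcache: is mounted.

```shell
# Hypothetical setup: gcache: wraps gdrive:, and gcache: is currently mounted.
# While the mount holds the cache DB open, a second rclone process can't use it:
rclone copy /local/new-media gcache:media
# fails with an error that the cache DB is in use
```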
The dilemma comes from the info_age duration and handling writes. Sure, you could attempt to move the files directly to the mount, but if there are no retries, this could fail and you’d never know it.
If you move the files to another remote that uses the same cloud provider, the cache won’t pick up those files (since they’re modified outside of the cache).
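To make that concrete, a sketch with two hypothetical remotes, gdrive: and gcache:, both pointing at the same cloud storage:

```shell
# Upload through the plain remote (this works even while gcache: is mounted)...
rclone copy /local/new-media gdrive:media
# ...but the cache knows nothing about the change, so listings through it
# stay stale until info_age expires for the media directory:
rclone ls gcache:media
```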
I’ve noticed that applications using the cache mount that do a lot of file operations (Plex, Sonarr, Radarr) slow to a crawl (and drag the whole system down with them).
So the dilemma is… How do we get our new files added to a cloud remote in a robust way and have these appear in the cache?
- info_age too low (e.g. 5m) and you’re rebuilding your cache all the time, which will probably get you banned and/or keep your system slow more or less constantly.
- info_age too high, and any move/copy you do through a different remote on the same cloud provider won’t show up until the cache expires for the particular directories that were modified / had files uploaded to them.
- Add files directly to the mount and you risk something going wrong, the upload not retrying, and the file never making it up, leaving you in a state of limbo.
Would --vfs-cache-mode writes help in this regard? I’m not sure.
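For reference, here are the flags being weighed above in a hypothetical mount (the remote name and values are made up, and whether --vfs-cache-mode writes actually retries a failed upload is exactly the part I’m unsure about):

```shell
# --cache-info-age: too low = constant relisting (and possible bans); too high = stale listings.
# --vfs-cache-mode writes: buffers writes locally first; unclear to me if failed uploads retry.
rclone mount gcache:media /mnt/media \
  --cache-info-age 6h \
  --vfs-cache-mode writes
```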
I’m looking for ideas on how to get new media to my cloud provider in a robust way and have it appear in the mounted cache remote as soon as it’s uploaded.