I’m looking for some advice here. I have a fairly common setup with Plex, Radarr, and Sonarr all using the same Google Drive mount on the same VPS. I’ve tested and tuned an rclone cache mount and (separately) an rclone VFS mount. Both mount to the same location, and any relevant caches were cleared between tests.
With rclone cache: I’ve noticed slower start times for Plex streaming. I have uploads set to queue into a user-defined directory, “uploads,” and Radarr/Sonarr (hereafter, *arr) experience significant lag when multiple files are being moved at the same time from the local drive into the Google mount (via the upload cache). I also occasionally hit the Google 750 GB 24-hour upload limit; when this occurs, the pending uploads remain queued until the upload succeeds. No data is lost, though it may not appear on Google in the expected timeframe since Google has temporarily suspended my ability to upload.
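For reference, the cache mount was started roughly like this. This is a sketch, not my exact command: the remote name, mount point, and queue path are placeholders, and the flag values shown are typical defaults rather than my tuned settings.

```shell
# Hypothetical example: "gcache:" is a cache remote wrapping the Google Drive
# remote, and /home/user/uploads is the user-defined upload queue directory.
rclone mount gcache: /mnt/gdrive \
  --allow-other \
  --cache-tmp-upload-path /home/user/uploads \
  --cache-tmp-wait-time 15m \
  --cache-info-age 24h
```

With `--cache-tmp-upload-path` set, writes land in the queue directory first and are moved to the remote in the background, which is why a failed upload stays queued instead of being dropped.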
With rclone and VFS: Noticeably faster start and resume times, and better playback in Plex. I have vfs-cache-mode=writes and cache-dir set to /uploads (the uploads directory is completely removed and cleared when switching between the cache and VFS tests). The biggest issue with this configuration is that when I hit the Google 750 GB 24-hour limit, I lose data once max-retries is reached. That is, *arr may see that a file is being upgraded from 720p to 1080p and that the download completed. It deletes the old file and attempts to copy the new file, which fails. After it retries the maximum number of times and the VFS cache expires, the pending upload is removed (unlike the behavior with rclone cache), and the result is that both the old and new files are missing from Google.
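The VFS mount looks roughly like this. Again a sketch: only `--vfs-cache-mode writes` and `--cache-dir /uploads` are from my actual setup; the remote name, mount point, and the max-age value are placeholders.

```shell
# Hypothetical example: "gdrive:" is the Google Drive remote.
# Failed writes are retried from the cache dir, but once the cached
# copy expires the pending upload is gone, as described above.
rclone mount gdrive: /mnt/gdrive \
  --allow-other \
  --vfs-cache-mode writes \
  --cache-dir /uploads \
  --vfs-cache-max-age 24h
```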
The question, then, is: what is the best way to handle both uploads and streaming from the same rclone mount? Is there a setting I’m missing, or another combination of configurations I should test?