VFS or Cache for Streaming AND Uploading?

I’m looking for some advice here. I have a fairly common setup with Plex, Radarr, and Sonarr all using the same Google Drive mount on the same VPS. I’ve tested and tuned an rclone cache mount and (separately) an rclone VFS mount. Both mount to the same location, and any relevant caches were cleared between tests.

With Rclone Cache: I’ve noticed slower start times for Plex streaming. I have uploads set to queue into a user-defined directory, “uploads.” Radarr/Sonarr (hereafter, *arr) experience significant lag when multiple files are being moved at the same time from the local drive into the Google mount (via the upload cache). I also occasionally hit the Google 750GB 24-hour upload limit and, when this occurs, the pending uploads remain queued until the upload succeeds. No data is lost, though files may not appear on Google in the expected timeframe since Google temporarily suspends my ability to upload.
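For context, the cache side of the test was along these lines (the remote name, mount point, and wait time here are placeholders, not my exact values):

rclone mount gcache: /GD \
   --allow-other \
   --cache-tmp-upload-path /uploads \
   --cache-tmp-wait-time 15m

With --cache-tmp-upload-path set, writes land in the local “uploads” directory first and rclone moves them to the remote in the background, which is why nothing is lost when the 750GB limit kicks in.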

With Rclone and VFS: Noticeably faster start times, resume times, and better playback in Plex. I have vfs-cache-mode=writes and cache-dir set to /uploads (the uploads directory is completely removed and cleared when switching between cache and VFS tests). The biggest issue with this configuration is that when I hit the Google 750GB 24-hour limit, I lose data after max-retries is hit. That is, *arr may see that a file is being upgraded from 720p to 1080p and that the download completed. It deletes the old file and attempts to copy the new file, which fails. After it retries the maximum number of times and the VFS cache expires, the pending upload is removed (unlike the behavior with rclone cache), and the result is that both the old and new files are missing from Google.
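The VFS test was roughly this (again, the remote name and max-age are placeholders; the flags that matter are --vfs-cache-mode and --cache-dir):

rclone mount remote: /GD \
   --allow-other \
   --vfs-cache-mode writes \
   --cache-dir /uploads \
   --vfs-cache-max-age 24h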

The question, then, is what is the best way to handle both uploads and streaming from the same rclone mount? Is there a setting I’m missing or another combination of configurations I should test?

Man, if you are hitting the 750GB daily limit with just Sonarr/Radarr, that’s impressive.

I moved away from cache and just use --vfs-read-chunk-size with no cache mode. I write directly to the mount and let it upload. I haven’t had any issues with uploading so far, but I am running gigabit Fios and seem to have a very stable link.
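Something along these lines, if it helps (remote name and chunk sizes are placeholders; the point is that --vfs-cache-mode is omitted entirely, so it defaults to off):

rclone mount remote: /GD \
   --allow-other \
   --vfs-read-chunk-size 32M \
   --vfs-read-chunk-size-limit 2G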

Unfortunately, it’s not as impressive as you may think… I’m using it for some other things as well. The primary bandwidth-eater right now, though, is that I am systematically going through my series and movies and finding many at “below optimal” quality (i.e., SD or 720p instead of 1080p+, or stereo instead of 5.1+). There will be a time… hopefully soon… when this settles down.

With no cache, what happens if the upload fails x times (where x = the maximum number of retries)?

It would still be local on my box and I could just recopy it if it failed.

Over a few weeks now, I’ve seen 0 failures and only 3 retries.
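Worst case, a manual recopy is just something like this (paths and remote name are placeholders; the retry flags shown are rclone’s defaults):

rclone copy /local/media remote:media \
   --retries 3 \
   --low-level-retries 10 \
   --log-level INFO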

Hey there, can you point me to your settings? I have a stable mount on Windows with cache, but folders hang so much in Explorer that I’m really interested in changing my mount method.

My settings have changed a little, as I couldn’t get cache-tmp-upload or VFS cache writes to do what I wanted properly, so I went back to a mergerfs combo.
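If it helps, the mergerfs side is roughly this (paths are placeholders, and the option set is the common pattern for this kind of setup rather than a drop-in config):

mergerfs /local/media:/GD /gmedia -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff

With category.create=ff, new files land on the first branch (/local/media) and get uploaded to the remote separately, so a failed upload never deletes anything locally.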

I stuck with vfs reads as that’s quicker for starting everything:

ExecStart=/usr/bin/rclone mount gcrypt: /GD \
   --allow-other \
   --dir-cache-time 96h \
   --vfs-cache-max-age 48h \
   --vfs-read-chunk-size 20M \
   --vfs-read-chunk-size-limit 1G \
   --buffer-size 10M \
   --syslog \
   --umask 002 \
   --rc \
   --log-level INFO

My goal was a small chunk size and buffer, as that gives it time to ramp up if needed.
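For anyone copying this: the ExecStart above sits in a systemd unit roughly like the following (Description, ExecStop, and the restart policy are generic boilerplate, and I’ve collapsed the flags onto one line here):

[Unit]
Description=rclone gcrypt mount
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount gcrypt: /GD --allow-other --dir-cache-time 96h --vfs-read-chunk-size 20M --vfs-read-chunk-size-limit 1G --buffer-size 10M --umask 002 --log-level INFO
ExecStop=/bin/fusermount -uz /GD
Restart=on-failure

[Install]
WantedBy=multi-user.target

Type=notify means systemd waits until rclone reports the mount is ready before starting anything that depends on it.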
