Have you tested it?
I have tested this extensively before posting. Rclone will flush RAM at the 2-minute mark without fail, so there should be a way to adjust that 2 minutes.
I'm sure there is some Go configuration, but there's really no reason to set a 1GB buffer. If you are using cache mode full, a large buffer really adds no value.
Cache mode off is the mode that's relevant to this post. Sorry, I didn't make that clear in the first place.
Yes, there's a reason for it. It's extremely beneficial to load 1GB into RAM rather than onto disk, both for speed and for SSD wear, and it's also simpler when I think about it.
Go configuration
If this is really the issue, how do I build it so I can clear it, like having --vfs-cache-poll-interval 1s?
It's only used with cache modes, so that's why I was asking.
If you aren't using a cache mode, it's not writing anything to disk regardless of the buffer size.
For cloud remotes, the slowest point is generally the fetch of the data so memory/SSD/spinning disk are generally fine for cache.
I've personally used the same SSD for a few years now. If I only get 5-6 years of life instead of 10, odds are it'll be replaced well before then anyway, so I don't mind the wear. That's my stance, but that's just me.
I see. I've tested most configurations, both SSDs and a ramdisk. The 1s cache poll is mostly for the ramdisk.
Although after a while I found it isn't necessary, as cache mode off with a 1GB buffer-size just works for my use case. It's actually the optimal setting for me, as opposed to yours.
It's working great right now; it's just that there's no option to decrease that 2-minute RAM flush. This might be problematic, as multiple videos opening back to back within that 2-minute window can crash the system.
Sounds like you are pushing your system too hard with a 1GB buffer.
With this setting rclone might use 1GB per open file plus 1GB per file being downloaded or uploaded (up to --transfers); that is, you should have something like 5-10GB of free physical RAM when the mount is idle.
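The rule of thumb above can be turned into a quick back-of-the-envelope estimate. The file and transfer counts below are hypothetical examples, not rclone defaults:

```shell
# Worst-case RAM estimate under the rule above:
# one --buffer-size allocation per open file, plus one per active transfer.
BUFFER_MB=1024      # --buffer-size 1G
OPEN_FILES=4        # example: files open for playback at once
TRANSFERS=4         # example: --transfers 4
WORST_CASE_MB=$(( BUFFER_MB * (OPEN_FILES + TRANSFERS) ))
echo "worst case: ${WORST_CASE_MB} MB"
```

With these example numbers, a single 1GB buffer setting can balloon to roughly 8GB of RAM under load, which matches the 5-10GB headroom suggested above.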
I'm testing this config on a 500MB video file: --buffer-size 1G --vfs-read-chunk-size 1G
Quick question,
When playback is paused, I monitor network and RAM until the network goes idle and the RAM stays in use. So I assume the whole 500MB video file is in RAM; since I see network activity while it fills, it's not a "dummy" cache.
However, when I scroll/scrub that 500MB video forward a few seconds or minutes, it restarts the buffer: I see the RAM get released and network usage start again.
I'm not sure, but I think the buffering will stop shortly (approx. 1 second) after you stop reading from the file, so rclone may not fill the entire buffer (to save RAM, network, and download quota).
rclone may therefore need to read from your remote if you scroll/scrub beyond the content of the buffer; it probably drops the current buffer memory and then builds a new buffer starting at the new position. This makes good sense in most scenarios.
I'm trying to avoid using any disk space, given its drawbacks. Also, downloading to RAM feels very fast on a gigabit pipe.
When using --vfs-cache-mode=full and a ramdisk, I can scrub through whatever is loaded in RAM indefinitely; the same 500MB stays loaded in RAM until I exit the client or stop playback.
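For anyone wanting to try that combination, here is a minimal sketch on Linux. The ramdisk path, remote name, mount point, and sizes are all placeholder examples, not values from this thread:

```shell
# Create a tmpfs ramdisk to hold the VFS cache (size is an example).
sudo mkdir -p /mnt/rclone-ram
sudo mount -t tmpfs -o size=2G tmpfs /mnt/rclone-ram

# Point rclone's cache directory at the ramdisk with full cache mode.
rclone mount remote:media /mnt/media \
  --vfs-cache-mode full \
  --cache-dir /mnt/rclone-ram \
  --vfs-cache-max-size 1.5G \
  --vfs-cache-poll-interval 1s
```

Keeping --vfs-cache-max-size below the tmpfs size matters here, since an overfull ramdisk cache would consume RAM just like a large buffer would.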
It can't keep up because the buffer generally gets dropped when the file is closed. If you are direct playing and you pause, the player closes the file, rclone drops the buffer, and the experience isn't great.
What you actually want, if you have those types of issues, is cache mode full and a large read-ahead buffer.
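A sketch of that suggested setup, assuming an SSD cache directory at a placeholder path and a placeholder remote name (--vfs-read-ahead only takes effect with --vfs-cache-mode full):

```shell
# Full cache mode on an SSD plus a large read-ahead; sizes are examples.
rclone mount remote:media /mnt/media \
  --vfs-cache-mode full \
  --cache-dir /ssd/rclone-cache \
  --vfs-read-ahead 1G \
  --buffer-size 32M
```

With this shape, pausing or seeking hits the on-disk cache instead of restarting a download, so a large in-memory buffer becomes unnecessary.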
I've done oodles of testing with Plex: playback, transcoding, direct playing, every setting, tweak, and adjustment. I landed on cache mode full with an SSD and haven't looked back. You can snag a small 1TB SSD for 40 or 50 bucks and it'll last for years of everyday use as a cache disk.
To @Ole's point, several months ago I added --buffer-size 0 to my mount command, and I have yet to see a single issue with it. I've watched the highest-bitrate files with zero buffering, even after pausing. All direct play, of course. Ping to my Plex server is in the low 70s. Just goes to show that there are many variables when it comes to streaming media.