I've been dealing with this for at least three months now. So far, I've only blamed my ISP's terrible peering to my remote server, but lately I'm not so sure anymore. I'm assuming most of you guys host Plex at home, so for you it could be either bad peering to Google's servers, or it could be something with Plex itself. I can't rule out the latter after reading more and more reports of Plex users having issues with buffering. And these are guys with local media/servers.
I have yet to set up a local Plex server to see if I get better results. Up until a year ago, I watched everything through MPC-HC on a local HTPC (with an Rclone mount), and I never had a single issue. When I switched to a SHIELD and Plex (remote), I also had zero buffering with even the largest files. It wasn't until about three months ago that my troubles started, and since then I haven't been able to watch any UHD REMUX without constant buffering.
Rclone will tend to re-use HTTPS connections if they are in the HTTP connection cache. This makes things much quicker, as it can take a long time to establish an HTTP/HTTPS connection.
It is possible to clear the HTTP connection cache.
Rclone never does this; maybe it could do it on an rc call. Note that it only closes idle connections, so you'd have to somehow stop all existing transfers. If you were running a mount, that is plausible, though.
OK, I am not sure if I phrased my question correctly. What I mean is that I have seen the speed going down while accessing (streaming) the same video file. This is why I was asking whether rclone makes a new request for every chunk download. Your answer would suggest it doesn't, but, on the other hand, I'm even more baffled about what might be happening behind the scenes with Google...
What I don't understand (and I'm no network engineer so that's not surprising) is if it's an endpoint problem, why are we all reporting that we get slowed to roughly the same speed of ~300k/s. Surely there'd be more variance from people in different parts of the world? Again I don't know how these things work, but these results suggest to me that something is being capped at that speed rather than the endpoint just being slow.
Rclone will divide the stream into chunks as specified by these parameters:
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
So each chunk will require a new request.
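As a rough sketch of what those two flags describe (this is my own Python model of the doubling behaviour documented above, not rclone's actual Go code), each ranged request grows until the limit is hit:

```python
def chunk_sizes(file_size, initial=128 * 1024**2, limit=None):
    """Yield the byte size of each ranged request for a file read
    front to back.

    initial mimics --vfs-read-chunk-size (default 128Mi);
    limit mimics --vfs-read-chunk-size-limit (None == 'off', i.e.
    the chunk size keeps doubling without bound).
    """
    size, remaining = initial, file_size
    while remaining > 0:
        step = min(size, remaining)
        yield step
        remaining -= step
        # double the chunk size after each read, capped at the limit
        size *= 2
        if limit is not None:
            size = min(size, limit)

# With the defaults, a 1 GiB file takes only four ranged requests:
# 128Mi + 256Mi + 512Mi + a final 128Mi.
sizes = list(chunk_sizes(1024**3))
```

The ramp-up is why the first seconds of playback are the request-heavy part; once the chunks are large, new requests (and hence chances to land on a different endpoint) become rare.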
At every request there is an opportunity to hit a new server, either because:
1. rclone picks a different endpoint IP to connect to, or
2. the Google load balancer chooses a different server to take the request.
We have a small amount of control over 1. and no control at all over 2.
Great info on the behind the scenes. Thanks heaps.
It tracks with my experience: I suspected it would sometimes speed up after a while (with no proof). Or sometimes it's seemingly fast in the beginning, but if I scrub to another point in the video file it suddenly slows down; that would be a new chunk, so presumably a new endpoint.
I've updated my config to this and will see how I go. Hopefully this isn't too small to cause problems of its own.
Yeah, that's not a problem, as it ramps up real quick. I only went with something so low due to my Plex server analyzing (non-extensive) thousands of files during maintenance every night. So, having a tiny initial chunk size comes in handy when only reading a small part of a file. For a normal use case, the default should be fine.
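For reference, a mount with a tiny initial chunk like the one described above might look something like this (the remote name, mount point, and 512M limit are placeholders of my choosing, not anyone's actual config):

```shell
# hypothetical example: 1M starting chunk, doubling up to 512M per request
rclone mount gdrive: /mnt/gdrive \
  --vfs-read-chunk-size 1M \
  --vfs-read-chunk-size-limit 512M \
  --daemon
```

The small initial chunk keeps metadata scans cheap, while the limit bounds how long you're stuck on any one endpoint.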
I've been keeping a log of the IPs that seem to be consistently running slow for me; so far I've found 3 out of the 10 or so that I've connected to. Is there a way to set rclone to ignore these IPs?
Alternatively, can rclone mount be made to use multiple threads or something to increase your chances of finding at least one fast endpoint?
I also tried the --vfs-read-chunk-size options, but it doesn't seem to change endpoints once it connects to one.
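As far as I know rclone has no built-in flag to blacklist endpoint IPs, but one workaround people use outside rclone is pinning the API hostname to an IP you've measured as fast, at the OS level. A sketch (the IP below is a documentation-range placeholder, not a real recommendation):

```shell
# list the endpoint IPs DNS is currently offering (requires dnsutils)
dig +short www.googleapis.com

# pin the hostname to one known-good IP in /etc/hosts
# (203.0.113.7 is a placeholder; substitute an IP from your own log)
echo "203.0.113.7 www.googleapis.com" | sudo tee -a /etc/hosts
```

The trade-off is that you lose the load balancer's failover: if that one IP goes bad, everything does, so remember to remove the entry when testing is done.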
Just to get a better idea, could everyone here perhaps state where they're connecting from and which region they picked when setting up their Workspace account? If I remember correctly, there was an option initially during account creation, and also an option to change it later on, if necessary. I could be wrong, though.
For me, it's California and US (I guess), and I do not believe I've experienced crazy slowdowns like some of you mentioned.
I watched a file's download progress: it was fast for the first 16MB, then slow for the next 32MB, then fast again for the rest of the time I was watching.
Something I noticed is that there is a noticeable pause of a second or two between chunks. So it'll be a matter of balancing a larger max chunk, for better performance over large parts of the file, against smaller chunks, so that if you hit a bad endpoint the bad performance won't last as long.
I might try VBB's 1M starting chunk with a larger max chunk.
On that note: VBB, I don't recall the region I used, but I would have chosen Australia, as that's where I and my server live (in a datacentre in Sydney somewhere).