Rclone mount random slow speeds

I've been dealing with this for at least three months now. So far, I've only blamed my ISP's terrible peering to my remote server, but lately I'm not so sure anymore. I'm assuming most of you guys host Plex at home, so for you it could be either bad peering to Google's servers, or it could be something with Plex itself. I can't rule out the latter after reading more and more reports of Plex users having issues with buffering. And these are guys with local media/servers.

I have yet to set up a local Plex server to see if I get better results. Up until a year ago, I watched everything through MPC-HC on a local HTPC (with an Rclone mount), and I never had a single issue. When I switched to a SHIELD and Plex (remote), I also had zero buffering with even the largest files. It wasn't until about three months ago that my troubles started, and since then I haven't been able to watch any UHD REMUX without constant buffering.

That's my case. Rclone mount on HTPC, watching through Kodi. Plex does not enter the picture at all, for me.

You're ashlar42 on doom9, I take it? :slight_smile:

I used to be very active in the madVR community until about a year ago when, like I said, I switched from PC to SHIELD.

Yes, that's me. I am going to check what DNS I have in use on the HTPC and see whether I can at least minimize the random instances.

Yesterday two 4K HDR episodes played back flawlessly... The seemingly total randomness of this makes it a nightmare to debug.

Rclone will tend to re-use HTTPS connections if they are in the http connection cache. This makes things much quicker as it can take a long time to establish an HTTP/HTTPS connection.

It is possible to clear the http connection cache

Rclone never does this; maybe it could do it on an rc call. Note that it only closes idle connections, so you'd have to somehow stop all existing transfers first. If you were running a mount, that is plausible though.

Ok, I am not sure if I phrased my question correctly. What I mean is that I have seen the speed going down while accessing (streaming) the same video file. This is why I was asking whether rclone made a new request for every chunk download. Your answer would suggest it doesn't but, on the other hand, I'm even more baffled about what might be happening behind the scenes with Google... :frowning:

What I don't understand (and I'm no network engineer, so that's not surprising) is, if it's an endpoint problem, why are we all reporting that we get slowed to roughly the same speed of ~300k/s? Surely there'd be more variance from people in different parts of the world? Again, I don't know how these things work, but these results suggest to me that something is being capped at that speed rather than the endpoint just being slow.

Please explain to me how I am wrong :wink:

Rclone will divide the stream into chunks as specified by these parameters

  --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
  --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)

So each chunk will require a new request.
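
To make that concrete, here is a rough sketch of what a single chunked read could look like on the wire for a Google Drive remote with the default 128Mi chunk size. The URL shape, token and file ID are illustrative placeholders, not something taken from rclone itself:

    # Hypothetical illustration only - requesting the first 128Mi of a file
    curl -s -o /dev/null \
      -H "Authorization: Bearer <access-token>" \
      -H "Range: bytes=0-134217727" \
      "https://www.googleapis.com/drive/v3/files/<fileId>?alt=media"

When that chunk is consumed, the next read issues a similar request for the following byte range (doubled in size if --vfs-read-chunk-size-limit allows it).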

At every request there is an opportunity to hit a new server, either because

  1. rclone picks a different endpoint IP to connect to, or
  2. the Google load balancer chooses a different server to take the request

We have a small amount of control over 1. and no control at all over 2.
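
If you want to see which endpoint IPs a mount is actually talking to, a couple of generic Linux commands are enough. Nothing here is rclone-specific, and www.googleapis.com is assumed as the Drive API host:

    # Candidate endpoint IPs your resolver is currently handing out
    dig +short www.googleapis.com

    # Established connections owned by the rclone process (needs suitable permissions)
    ss -tnp | grep rclone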

Great info on the behind the scenes. Thanks heaps.
It tracks with my experience, where I suspected it would sometimes speed up after a while (with no proof). Or sometimes it's seemingly fast in the beginning, but if I scrub to another point in the video file it suddenly slows down; that would be a new chunk, so presumably a new endpoint.

I've updated my config to this and will see how I go. Hopefully this isn't so small that it causes problems of its own.

    --vfs-read-chunk-size 16Mi \
    --vfs-read-chunk-size-limit 128Mi \
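
For anyone who wants to see those flags in context, a minimal mount invocation might look something like the sketch below. The remote name, mount point and log path are placeholders, and the extra flags are just common companions rather than anything required:

    # Sketch only - remote name, mount point and log path are hypothetical
    rclone mount gdrive: /mnt/gdrive \
      --read-only \
      --vfs-read-chunk-size 16Mi \
      --vfs-read-chunk-size-limit 128Mi \
      --buffer-size 32Mi \
      --log-level INFO --log-file /var/log/rclone-mount.log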

Ok, yeah, being familiar with the chunk concept, that's what I suspected could have been happening. Thanks for confirming.

That is a good experiment to try. No idea if it will help, but if you hit a slow server then it will only be for 128MB of data.

I've been using --vfs-read-chunk-size 1M for about a year now with no issues to report.

With 4K video material???

Yeah, that's not a problem, as it ramps up real quick. I only went with something so low due to my Plex server analyzing (non-extensive) thousands of files during maintenance every night. So, having a tiny initial chunk size comes in handy when only reading a small part of a file. For a normal use case, the default should be fine.
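
To illustrate the ramp-up: with a 1M starting size and, purely as an example, a 128M ceiling (the actual default limit is 'off', i.e. unlimited doubling), the consecutive reads on one file handle work out roughly like this:

    # --vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 128M
    # chunk sizes requested: 1M, 2M, 4M, 8M, 16M, 32M, 64M, 128M, 128M, 128M, ...
    # with the limit left at 'off', the doubling simply keeps going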

Sorry, I had missed this. For me it's getting slowed down to about 20Mbps, which is mainly fine for HD material (especially TV shows) but clearly not cutting it for 4K.

I've been keeping a log of the IPs that seem to be consistently running slow for me; so far I've found 3 of the 10 or so that I've connected to. Is there a way to set rclone to ignore these IPs?
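
I'm guessing rclone itself has no option to blacklist individual endpoint IPs, so any workaround would probably have to happen at the OS level. A rough sketch of what I had in mind (the address below is purely an example; substitute one from your own log):

    # Example only - reject outbound HTTPS to one (hypothetical) slow endpoint IP
    sudo iptables -A OUTPUT -d 172.217.167.74 -p tcp --dport 443 -j REJECT

The idea is that a rejected connection fails fast and the next attempt may land on a different address, but nothing guarantees the retry picks a better one, and you'd want to remove the rule once that endpoint recovers.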

Alternatively, can rclone mount be made to use multiple threads or something to increase your chances of finding at least one fast endpoint?

I also tried the --vfs-read-chunk-size options, and it hasn't seemed to change endpoints once it connects to one.

Just to get a better idea, could everyone here perhaps state where they're connecting from and which region they picked when they set up their Workspace account? If I remember correctly, there was an option during account creation, and also a way to change it later on, if necessary. I could be wrong, though.

For me, it's California and US (I guess), and I do not believe I've experienced crazy slowdowns like some of you mentioned.

Was indeed a good test.

I watched a file's download progress: it was fast for the first 16MB, then slow for the next 32MB, then fast again for the rest of the time I was watching.

Something I noticed is there's a noticeable pause of a second or two between chunks. So it'll be a matter of balancing a larger max chunk (better performance over larger parts of the file) against smaller chunks (in case you hit a bad endpoint, the bad performance won't last as long).

I might try VBB's 1M starting chunk and having a larger max chunk.
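
For reference, that combination is just the same two flags with different values; the 512M ceiling here is only an example:

    --vfs-read-chunk-size 1M \
    --vfs-read-chunk-size-limit 512M \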

On that note, VBB, I don't recall the region I used, but I would have chosen Australia, as that's where I and my server live (in a datacentre in Sydney somewhere).

I'm in New Zealand - we don't have our own datacenters here (yet). Sydney endpoints are the ones I see most often, though occasionally hkg (Hong Kong, I assume).

I'm in Australia as well. I have decreased the chunk size and will see if that has any effect.
