Rclone mount random slow speeds

I've been keeping a log of the IPs that seem to be consistently running slow for me - of which I've currently found 3 of the 10 or so that I've connected to. Is there a way to set rclone to ignore these IPs?
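As far as I know, rclone has no flag to blacklist specific endpoint IPs. One OS-level workaround (a sketch, assuming Linux and that you've already identified the slow endpoints; the addresses below are placeholders from the TEST-NET range) is to reject outbound connections to them so rclone fails fast and retries elsewhere:

```shell
# Placeholder IPs - substitute the slow endpoints from your own log.
# REJECT (rather than DROP) makes the connection fail immediately, so
# rclone can fall back to a different endpoint instead of timing out.
for ip in 203.0.113.10 203.0.113.11 203.0.113.12; do
    sudo iptables -A OUTPUT -d "$ip" -j REJECT
done
```

Whether Google's client then re-resolves to a different IP isn't guaranteed, so treat this as an experiment rather than a fix.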

Alternatively, can rclone mount be made to use multiple threads or something to increase your chances of finding at least one fast endpoint?

I also tried the --vfs-read-chunk-size options, but rclone doesn't seem to want to change endpoints once it connects to one.

Just to get a better idea, could everyone here perhaps state where they're connecting from and which region they picked when they set up their Workspace account? If I remember correctly, there was an option during account creation, and also one to change it later on, if necessary. I could be wrong, though.

For me, it's California and US (I guess), and I do not believe I've experienced crazy slowdowns like some of you mentioned.

Was indeed a good test.

Watched a file's download progress, it was fast for the first 16MB, then slow for the next 32MB, then fast again for the rest of the time I was watching.

Something I noticed is that there is a noticeable pause of a second or two between chunks. So it'll be a matter of balancing a larger max chunk, for greater performance over larger parts of the file, against smaller chunks in case you hit a bad endpoint, so the bad performance won't last as long.

I might try VBB's 1M starting chunk with a larger max chunk.
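For reference, that tuning maps onto two flags: --vfs-read-chunk-size sets the size of the first ranged request, and the chunk size doubles on each subsequent request up to --vfs-read-chunk-size-limit. A sketch, with hypothetical remote and mountpoint names:

```shell
# Start small so a slow endpoint only costs you a 1M request, but let
# chunks grow large for sustained sequential reads deeper into the file.
rclone mount gdrive: /mnt/gdrive \
  --vfs-read-chunk-size 1M \
  --vfs-read-chunk-size-limit 512M
```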

On that note: VBB, I don't recall the region I used, but I would have chosen Australia, as that's where I and my server live (in a datacentre in Sydney somewhere).


I'm in New Zealand - we don't have our own datacenters here (yet). Sydney endpoints are the ones I see most often, though occasionally hkg (Hong Kong, I assume)?


I'm in Australia as well. I have decreased the chunk size and will see if that has any effect.


Came to the forums to say I'm having exactly the same issues. Australia. Will decrease the chunk size and report back as well.

Looking at my API console, I can see there has been a not-insignificant increase in latency as well. Not sure if this is related.

That's four users in the Australia region so far...

I haven't actually gotten around to changing my mount options yet; I'll do it tomorrow. Still experiencing random slow speeds.

Here’s my mount script from Guide: How To Use Rclone To Mount Cloud Drives And Play Files - Plugins and Apps - Unraid

Would you recommend adding --buffer-size 32M or 128M, given that I'm using full cache mode with a 1TB NVMe drive with 500GB allocated to the VFS cache?

# create rclone mount

rclone mount \
$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
--allow-other \
--dir-cache-time $RcloneMountDirCacheTime \
--attr-timeout $RcloneMountDirCacheTime \
--log-level INFO \
--poll-interval 10s \
--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
--drive-pacer-min-sleep 10ms \
--drive-pacer-burst 1000 \
--vfs-cache-mode full \
--vfs-cache-max-size $RcloneCacheMaxSize \
--vfs-cache-max-age $RcloneCacheMaxAge \
--vfs-read-ahead 1G \
--bind=$RCloneMountIP

Rclone users could test using a VPN.

This is also happening to me. I'm mounting the gdrive on a Hetzner server (Google and Hetzner have over 400 Gbit/s of direct/private peering). As the OP says, it seems random. Sometimes I get 5 Gbps download speed, sometimes it slows down to 40-50 Mbps. I've been using this setup for about two years with no issues, but for the last couple of weeks it's been getting annoying. I blamed everything else before Google, but it seems Google is the culprit.

I think they're introducing new limits, but this time as download throttling. Uploads are not affected. Btw, the throttling is not per IP, it's per account. I've tested all possible combinations: once throttled, you're throttled everywhere and for every user. Every user in the drive gets throttled, to ~50 Mbps in my case. I hope they disclose how the throttle/limit works so we can find a workaround. At least they don't throttle it to 50 Mbps globally :)

Edit: It's the mount that gets throttled to 50 Mbps; when I do a direct copy with rclone copy, I'm throttled to ~160-200 Mbps (it normally gets as high as 5-8 Gbps). I think that's because the copy command implements multi-part/threaded downloading.
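For anyone wanting to reproduce that comparison, the multi-threaded behaviour of rclone copy can be tuned; a sketch with placeholder remote paths (--multi-thread-streams sets the number of parallel streams, and only files larger than --multi-thread-cutoff are downloaded in parts):

```shell
# Placeholder remote path and destination; -P shows live throughput.
rclone copy gdrive:media/big.mkv /tmp/ \
  --multi-thread-streams 8 \
  --multi-thread-cutoff 250M \
  -P
```

Note that multi-threaded download only applies when the destination is local disk, which is why the mount doesn't benefit from it.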

I'm experiencing the same issue in South America using two different (edu) accounts, so I don't think this is region-specific.

I also noticed those changes in both my accounts.

I've tried this and other suggested mount configurations without much success.


I thought the issue might be related to the upcoming changes in the storage policy for edu accounts (a few schools managed to postpone the deadline to July 2023, but unlimited storage should indeed be ending today), but I'm not sure if the OP or the other users who reported the issue are also using edu accounts. In any case, attempting the download a second or third time usually fixes it.

No, I'm using a paid business account (Workspace Enterprise).

I am using Workspace Enterprise.

So I have an inelegant workaround that seems to be functional so far, but it's far from ideal.

Back in the dark, dark days before rclone 1.53, multithreaded downloads worked with cache-mode writes and above. Unfortunately, this means the whole file downloads locally before you start playing it, which is why it was scrapped, according to this post: "rclone mount" does not work with multi thread downloading using "--vfs-cache-mode full" with version v1.55.1 and v1.54.1 - #2 by ncw.

Being multithreaded drastically reduces your chances of getting stuck on one of the slow IPs at Google's end (at least for Australian users), but the trade-off is that you have to use rclone 1.52.3 and wait for the file to fully download before it plays.

ncw has plans to bring this functionality back in a future version of rclone - Support multi-threaded downloads when downloading a file to the cache · Issue #4760 · rclone/rclone · GitHub

I guess we'll either have to hope Google fixes their shit, or we beg/implore/pay/threaten ncw to prioritise re-adding this feature as quickly as possible :slight_smile:


Adding here -> I have similar issues in South Africa (although there were also rumours of datacenter limitations here earlier this year, resulting in YouTube buffering, which had me believe it was related to the slowing down of drive/rclone).

Will be following to see if anyone finds a workaround.
*Just checking -> what should a single-threaded download speed be directly via HTTPS from Google Drive in a browser?
I am getting ±100 Mbps (line max is 500 Mbps).
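One way to measure a single-stream download outside the browser (a sketch, assuming pv is installed and that the placeholder path points at a large file on your remote):

```shell
# rclone cat streams the file over a single connection; pv reports
# the throughput of that one stream as it passes the data to /dev/null.
rclone cat gdrive:media/testfile.mkv | pv > /dev/null
```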

Subscribed to that issue and gave it a thumbs up.
I'm still not convinced Google is doing this on purpose. At times, a simple skip ahead invokes a new request (same IP, same client-id, same client-secret) and then it goes full speed for hours.

If Google wanted to throttle (which seriously would downgrade the value of their offer for businesses), I am sure they could devise something far more efficient/foolproof.

A question… if we use --vfs-cache-mode off, do we go back to multithreaded downloads?

For pure video streaming purposes it can still work, it did for a long time, after all.

That would apply to a non-mount copy, but not to a mount, as the mount only serves what's being asked for, whereas a copy/sync from a remote would be a multi-threaded download. So cache off just means it reads what's being asked for.

This thread feels bad to read, and it's a shame Google Support won't give you any details / help either :frowning:

When I read things like this, I'm happy I flipped over to Dropbox :slight_smile:

Oh, ok... re-reading docs I guess it only works when saving stuff on a hard drive (sparse files, etc.). I thought it could handle that in RAM, but I was most likely wrong.

I was testing a bit... the first couple of downloads ran at full speed. The third download was capped, and multiple downloads after that were capped too (always to 20 Mbps in my case; Italy, on a gigabit connection that normally gets close to saturation).
I restarted the rclone service: still capped. I put cache-mode off in the configuration and restarted the service: top speed on multiple files every time. I deluded myself into thinking I'd found a temporary workaround...

Oh, and regarding Google... I checked all possible console information on Google Developers website. There are no errors, no quotas, nothing out of the ordinary. I'm actually far, far below any quotas for which I have visibility.

Is it possible for a mount to read ahead in a file for streaming (i.e. 3 additional concurrent threads writing to RAM)? This would make high-bitrate 4K Blu-rays watchable: with one thread throttled to 50 Mbps, 4 concurrent threads could make it 200 Mbps.