This is also happening to me. I'm mounting the gdrive on a Hetzner server (Google and Hetzner have over 400 Gbit/s of direct/private peering). As the OP says, it seems random: sometimes I get 5 Gbps download speed, sometimes it slows down to 40-50 Mbps. I've been using this setup for about two years with no issues, but for the last couple of weeks it's been getting annoying. I blamed everything else before Google, but it seems Google is the culprit.
I think they're introducing new limits, but this time as download throttling. Uploads are not affected. Btw, the throttling is not per IP, it's per account. I've tested all possible combinations: once an account is throttled, it's throttled everywhere and for every user (to ~50 Mbps in my case). I hope they disclose how the throttle/limit works so we can find a workaround. At least they don't throttle it to 50 Mbps globally :)
Edit: It's the mount that gets throttled to 50 Mbps. When I do a direct copy with rclone copy I'm throttled to ~160-200 Mbps (it normally gets as high as 5-8 Gbps). I think that's because the copy command implements multipart/threaded downloading.
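For anyone who wants to force multi-threaded downloads explicitly, something along these lines should do it (flag values are just examples to tune; the remote name and paths are placeholders):

# Sketch: download with up to 8 parallel streams per file.
# --multi-thread-cutoff sets the minimum file size before
# rclone splits the download into multiple streams.
rclone copy gdrive:media/movie.mkv /mnt/local/ \
  --multi-thread-streams 8 \
  --multi-thread-cutoff 64M \
  --progress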
I'm experiencing the same issue in South America using two different (edu) accounts, so I don't think this is region-specific.
I also noticed those changes in both my accounts.
I've tried this and other suggested mount configurations without much success.
I thought the issue might be related to the upcoming changes in the storage policy for edu accounts (a few schools managed to postpone the deadline to July 2023, but unlimited storage should indeed be ending today), but I'm not sure if the OP or the other users who reported the issue are also using edu accounts. In any case, attempting the download a second or third time usually fixes it.
Being multithreaded drastically reduces your chances of getting connected to one of the slow IPs at Google's end (at least for Australian users), but the trade-off is that you have to use rclone 1.52.3 and wait for the file to fully download before it plays.
I guess we'll either have to hope Google fixes their shit, or we beg/implore/pay/threaten ncw to prioritize re-adding this feature as quickly as possible.
Adding here -> I'm having similar issues in South Africa (although there were rumours of data center limitations earlier this year, resulting in YouTube buffering, which had me believing it was related to the slowing down of drive/rclone).
Will be following to see if anyone finds a workaround.
*Just checking -> what should single-threaded download speed be directly via HTTPS from Google Drive in a browser?
I am getting ~100 Mbps (line max is 500 Mbps).
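If anyone else wants to compare single-connection numbers from the command line, a rough sketch (FILE_ID below is a placeholder for a direct-download link of your own):

# Rough single-connection speed test; data is discarded via -o /dev/null
# and -w prints the average download speed in bytes/sec. -L follows
# Google's redirects.
curl -L -o /dev/null \
  -w 'avg speed: %{speed_download} bytes/sec\n' \
  'https://drive.google.com/uc?export=download&id=FILE_ID'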
Subscribed to that and gave a thumbs up.
I'm still not convinced Google is doing this on purpose. At times a simple skip ahead invokes a new request (same IP, same client-id, same client-secret) and then it goes full speed for hours.
If Google wanted to throttle (which would seriously downgrade the value of their offer for businesses), I'm sure they could devise something far more efficient/foolproof.
That would work for a non-mount transfer but not on a mount, as the mount only serves what's being asked for, whereas a copy/sync to a remote is a multi-threaded download. So cache off just means it reads what's being asked for.
This thread feels bad to read, as it's a shame Google Support won't give you any details or help either.
When I read things like this, I'm happy I flipped over to Dropbox
Oh, ok... re-reading docs I guess it only works when saving stuff on a hard drive (sparse files, etc.). I thought it could handle that in RAM, but I was most likely wrong.
I was testing a bit... the first couple of downloads ran at full speed. The third download was capped, and multiple downloads after that were capped too (always to 20 Mbps in my case; Italy, on a gigabit connection that normally gets close to saturation).
I restarted the rclone service, still capped. I put cache-mode off in the configuration and restarted the service: always top speed on multiple files. I deluded myself into thinking I had found a temporary workaround...
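For reference, the mount I was testing with looks roughly like this (remote name and mount point are examples):

# Example mount with the VFS cache disabled ("off" is also rclone's default).
rclone mount gdrive: /mnt/gdrive --vfs-cache-mode off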
Oh, and regarding Google... I checked all the available console information on the Google Developers website. There are no errors, no quota issues, nothing out of the ordinary. I'm actually far, far below any quotas for which I have visibility.
Is it possible for a mount to read ahead on a file for streaming (i.e. 3 additional concurrent threads writing to RAM)? This would make high-bitrate 4K Blu-rays watchable: with one thread throttled to 50 Mbps, 4 concurrent threads could make it 200 Mbps.
I can also confirm seeing the 300-355 KB/s issue here when connecting to GDrive Sydney servers. It started a few weeks back. If it helps, it is not isolated to one ISP; I'm experiencing it on multiple connections, including a server located in an Equinix DC with direct peering to Google.
I've identified 4 IPs that I connect to in the Australia region that are seemingly capped at around 350 KB/s. Connect to any of them and gdrive is slow. Connect to any of the other 6 that I've identified and everything performs just fine.
Is there any way to make rclone rc reset the connection if it detects it connects to a certain IP or fails to attain a certain throughput?
In the meantime, the least frustrating method I've come up with is to use a large --vfs-read-ahead setting so that it downloads the whole file as quickly as possible, though it does require stopping and restarting the video if you connect to a slow IP.
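Something along these lines, assuming a recent rclone where --vfs-read-ahead requires --vfs-cache-mode full (remote, mount point, and sizes are examples):

# Sketch: read ahead aggressively so the file lands in the local
# cache quickly even when playback is paused.
rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode full \
  --vfs-read-ahead 2G \
  --vfs-read-chunk-size 64M \
  --vfs-read-chunk-size-limit 2G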
Since there are multiple IPs returned, your server will be doing DNS round robin based on the DNS TTL, effectively choosing a random IP on each lookup. Some of those IPs work fine, others do not. I ended up testing all of them and isolated the problem to the IP address 172.217.24.42, which had a max download of about 25 Mbps.
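If you want to repeat the test yourself, a rough sketch using curl's --resolve to force each candidate IP in turn. This assumes www.googleapis.com is the endpoint your client hits for downloads; FILE_ID and ACCESS_TOKEN are placeholders for your own file and OAuth token:

# For each IP the hostname resolves to (your dig output may vary),
# pin curl to it and report the average download speed.
for ip in $(dig +short www.googleapis.com); do
  curl -s -o /dev/null \
    --resolve www.googleapis.com:443:$ip \
    -w "$ip: %{speed_download} bytes/sec\n" \
    -H "Authorization: Bearer ACCESS_TOKEN" \
    "https://www.googleapis.com/drive/v3/files/FILE_ID?alt=media"
done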
As a short-term fix, I have statically assigned this DNS name in my hosts file (/etc/hosts or C:\Windows\System32\drivers\etc\hosts), which seems to have resolved the issue.
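The entry itself is a single line; the IP below is just a placeholder, substitute one that tested fast for you:

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
# Placeholder IP: replace with a known-good Google IP from your own tests.
203.0.113.10  www.googleapis.com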
A couple of things to keep in mind: Google runs a CDN, so it will move your data to the point closest to where the server is physically located to reduce congestion on their WAN links. So when you upload to Google Drive, make sure it is done in the same country the server is located in. Don't, for example, use a VPN to the USA and upload to your Google Drive when your server is in Sydney. That would put your data in the USA and your server in Australia, causing additional latency and, more importantly, decreasing the available bandwidth. Google will after a while realise the demand for the traffic is in Australia and move your data to Australia to optimise their WANs.
Another thing is, I changed my user agent to spoof Chrome,
Mozilla/5.0 (X11; CrOS x86_64 8172.45.0)
This will make it look like your traffic is coming from a Chrome browser and not rclone, which Google may be traffic shaping.
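In rclone terms that's just the --user-agent flag (or user_agent in the config); a sketch using the string above, with the remote and mount point as placeholders:

# Spoof a Chrome user agent on the mount; string as posted above.
rclone mount gdrive: /mnt/gdrive \
  --user-agent "Mozilla/5.0 (X11; CrOS x86_64 8172.45.0)"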
I was thinking along this line as well. I've never assigned DNS names in my hosts file in UNRAID. Can you post an example of your hosts file, as I want to edit mine?