Hey all,
Recently ran into this issue when running sync between 2 of my completely unrelated Gdrives:
2019-08-02 19:22:51 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded)
Command:
rclone sync TD2: TD3: -P --fast-list --track-renames -v
(this is with server_side_across_configs = true enabled, this had been working perfectly up until this point)
I'm certain I didn't hit the 750GB/day limit either, as I was aware of it. Besides, that would produce a different error, right? It also hit 2 different drives at about the same time, even though I had barely used one of them for days.
Also, I am certain this is not API rate limiting. After extensive testing, I start getting rate limited either immediately or almost immediately, even with extremely low --transfers and --checkers, --tpslimit 1, and even forcing the drive pacer lower (not sure if this is technically the same as tpslimit). Google's API metrics also indicate that I am nowhere close to maxing out: I barely spike to 4 requests a second at worst when not using an extreme tpslimit, and even less when I do. These drives now seem almost completely locked down. I can still get listings from them, and even transfer a file or two on occasion if I leave it running, but it's operating at something like 1 request per 10 seconds... unusable.
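For reference, the most throttled variant I tried looked roughly like this (a sketch from memory, not my exact command; the pacer flag value is an example):

```shell
# Heavily throttled sync attempt -- still rate limited almost immediately.
# --tpslimit caps HTTP transactions per second globally;
# --drive-pacer-min-sleep additionally slows the Drive backend's own pacer.
rclone sync TD2: TD3: -P -v \
  --transfers 1 \
  --checkers 1 \
  --tpslimit 1 \
  --drive-pacer-min-sleep 1s
```

Even at this rate the 403s kept coming, which is what convinced me the quota being hit is not a requests-per-second one.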
And yes, I am using my own Oauth, a separate one for each of the drives, so there is no shared quota. Nor have the drives in question been used by anything other than the sync operation.
So after much more research I came across this, which I now strongly suspect is the cause of the problem. Supposedly it is a quote given by a Google rep at some point (taken from an old and very long related thread):
Also, in June 2017 a quota for creating blobs of 100Gb/day was established. It’s possible to create files of bigger size, but after this quota is exceeded all subsequent blob creation operations will fail.
After reading this I of course checked to see if I had any files above 100GB, and indeed I did on the affected drives. 2 in fact (but they almost certainly were transferring at the same time, which probably explains it). It also makes sense to me that the problem started approximately at or shortly after these got transferred. So if this hypothesis is correct then >100G files are basically poison to Gdrives that will render them useless for the next 24h(?). This seems like it should be avoided at all costs for obvious reasons. A very curious limit given the upload max is 750G - but who am I to question the infinite wisdom of google...
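For anyone wanting to check their own drives for offending files, something like this should do it (assuming the >100G theory holds; `--min-size` is the standard rclone size filter):

```shell
# List all files of 100 GiB or larger on each remote.
# Output is one line per file: size in bytes, mod time, path.
rclone lsl TD2: --min-size 100G
rclone lsl TD3: --min-size 100G
```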
This leaves me with a few questions:

- Can anyone else please confirm whether this is an actual limitation?

- Assuming that it is, shouldn't this be mentioned somewhere in the Drive backend documentation? It seems to me it would make sense to also mention the 750G/day upload limit (I think I have even seen that officially stated as a limit at this point, besides it being experimentally verified by several users). I understand the hesitation to put info in the docs that isn't hard-verified, let alone a limit in rclone itself, but it seems like very useful information for users to have, because this is one of those things you will otherwise need to do a lot of searching to find out about. I don't expect the average user would know how to debug this problem themselves.

- Shouldn't we have some mechanism to either block or deal with >100G files, to prevent this from happening and causing grief to users who don't understand why things suddenly stop working? The ideal solution would obviously be to split the files somehow, but as a stopgap measure, even throwing an error seems appropriate. Even if you are aware of this limitation, it is easy to trip by mistake when handling large folder structures.
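Until something like that exists, a user-side stopgap could be to simply exclude such files from the sync with the existing `--max-size` filter, e.g.:

```shell
# Skip files larger than 100 GiB so they can't trip the suspected
# blob-creation quota; the excluded files then need to be handled
# separately (split up, or copied on their own day).
rclone sync TD2: TD3: -P -v --fast-list --track-renames \
  --max-size 100G
```

This obviously doesn't sync the big files, but at least the rest of the job keeps working.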
@ncw Would appreciate your thoughts on this, and tell me if you want an issue for it.
Config details below for completeness, even though I think it's not relevant:
[TD2]
type = drive
client_id = [REDACTED]
client_secret = [REDACTED]
scope = drive
token = [REDACTED]
team_drive = [REDACTED]
upload_cutoff = 256M
chunk_size = 256M
server_side_across_configs = true

[TD3]
type = drive
client_id = [REDACTED]
client_secret = [REDACTED]
scope = drive
token = [REDACTED]
team_drive = [REDACTED]
upload_cutoff = 256M
chunk_size = 256M
server_side_across_configs = true