What is the Google Drive Rate Limit applied to?

What is the problem you are having with rclone?

Not really a problem, just a question so I can better understand how to adapt to it. I'm copying a bunch of data over to GDrive, and I'm getting my photos synced now, so lots of relatively small files. This eventually caused me to hit a 403 userRateLimit, but when you get a 403 it's not clear to me whether it's an API-call rate limit, a number-of-files rate limit (I've read in several places that this can happen with this kind of operation), or the per-day quota. Or could it be any of those, with no way to distinguish which?

My initial copy line for the larger files was rate-limited to 8670 Kbit/s, and judging from the VM's metrics there were several hours in the last day where I wasn't maintaining anywhere near that, so I don't think I hit the data cap. For the smaller-files line, I upped the simultaneous transfers to 2 and tweaked the transfer rate to 10 Mbit, figuring I wouldn't hit that anyway with lots of small files, but obviously something still got hit.

So I'm trying to understand what I should be tweaking to stay under the rate limits while getting the best performance out of this transfer (lots of small files). The transfer rate seems straightforward, as do the simultaneous transfers, mostly, but I'm not clear where API calls come into play. Are transactions in the tps limit equivalent to API calls? I saw another thread suggesting checkers be set very high, like 40, to speed up a situation like this, but I'm not clear what impact a checker has, or what it is even checking. Is having a lot of them going to dramatically increase the API calls? Are they limited by tpslimit in that case? I'm happy to read more about this if there's a good post somewhere explaining it; the docs currently seem a little vague on these particulars, on what to consider adjusting, and on what impact some of the less obvious options have on which problems. For example, bwlimit is obvious, but tpslimit less so, since it's not clear what constitutes a transaction.
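For concreteness, here's roughly what I'm considering for the next small-file run; the checker and tps numbers are just guesses pulled from that other thread, not values I've validated:

rclone copy od:photos/ sec:/ --transfers 2 --checkers 40 --tpslimit 10 --tpslimit-burst 20 --bwlimit 8670 --include "*.jpg" --ignore-case --ignore-existing --log-file copy_log.txt -P -v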

Thanks!

What is your rclone version (output from rclone version)

rclone v1.52.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

  • os/arch: linux/amd64
  • go version: go1.14.7

Which cloud storage system are you using? (eg Google Drive)

Google Drive target, OpenDrive source

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy od:photos/ sec:/ --multi-thread-streams 1 -P --transfers 2 --bwlimit 10240 --checkers=1 --tpslimit 10 --tpslimit-burst 20 --cutoff-mode=soft  --drive-stop-on-upload-limit --retries 1 --low-level-retries 1 --timeout 20s --ignore-existing --log-file copy_log.txt --ignore-case --include *.jpg -P -v

I'm currently re-running with transfers limited to 1, bandwidth limited to 8670, and --fast-list enabled; otherwise the same command (no problems so far).
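Reconstructed from memory (the exact flags may differ slightly), the re-run line is roughly:

rclone copy od:photos/ sec:/ --multi-thread-streams 1 --transfers 1 --bwlimit 8670 --fast-list --checkers=1 --tpslimit 10 --tpslimit-burst 20 --cutoff-mode=soft --drive-stop-on-upload-limit --retries 1 --low-level-retries 1 --timeout 20s --ignore-existing --ignore-case --include "*.jpg" --log-file copy_log.txt -P -v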

The rclone config contents with secrets removed.

[od]
type = opendrive
username = [redacted]
password = [redacted]

[gd]
type = drive
client_id = [redacted].apps.googleusercontent.com
client_secret = [redacted]
scope = drive
token = {"access_token":"[redacted]","token_type":"Bearer","refresh_token":"[redacted]","expiry":"2020-09-17T14:47:24.789114638Z"}

[sec]
type = crypt
remote = gd:/sec
filename_encryption = standard
directory_name_encryption = true
password = [redacted]

A log from the command with the -vv flag

Sorry, I messed up and started a new command before renaming the log like I usually do, but I captured the end of the previous log from my terminal. There weren't any errors until it quit (presumably because I set --drive-stop-on-upload-limit). As far as I can tell I haven't been temp-banned (I've been able to run commands since, and it's only been a few hours).

2020/09/17 07:07:49 INFO  : !CloudMove/!Archives/Mz1/WB1/sh02.jpg: Copied (new)
2020/09/17 07:07:49 INFO  : !CloudMove/!Archives/Mz1/WB1/sh03.jpg: Copied (new)
2020/09/17 07:07:50 INFO  : !CloudMove/!Archives/Mz1/WB1/sh04.jpg: Copied (new)
2020/09/17 07:07:51 INFO  : !CloudMove/!Archives/Mz1/WB1/sh05.jpg: Copied (new)
2020/09/17 07:07:51 INFO  : !CloudMove/!Archives/Mz1/WB1/sh07.jpg: Copied (new)
2020/09/17 07:07:52 INFO  : !CloudMove/!Archives/Mz1/WB1/sh10.jpg: Copied (new)
2020/09/17 07:07:53 INFO  : !CloudMove/!Archives/Mz1/WB1/sh11.jpg: Copied (new)
2020/09/17 07:07:53 INFO  : !CloudMove/!Archives/Mz1/WB1/sh12.jpg: Copied (new)
2020/09/17 07:07:53 INFO  : !CloudMove/!Archives/Mz1/WB1/sh15.jpg: Copied (new)
2020/09/17 07:07:54 INFO  : !CloudMove/!Archives/Mz1/WB1/sh17.jpg: Copied (new)
2020/09/17 07:07:55 INFO  : !CloudMove/!Archives/Mz1/WB1/sh18.jpg: Copied (new)
2020/09/17 07:07:55 INFO  : !CloudMove/!Archives/Mz1/WB1/sh19.jpg: Copied (new)
2020/09/17 07:07:55 INFO  : !CloudMove/!Archives/Mz1/WB1/sh26.jpg: Copied (new)
2020/09/17 07:07:56 INFO  : !CloudMove/!Archives/Mz1/WB1/sh27.jpg: Copied (new)
Errors:                68 (fatal error encountered)
Checks:             11652 / 11652, 100%
Transferred:         6012 / 16013, 38%
Elapsed time:   1h45m41.7s

2020/09/17 08:24:40 Failed to copy with 68 errors: last error was: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded

You can only upload 750 GB per day, so you'll hit 403 errors if you try to copy more than that.

I don't think it's bumping up against that, though; at least it seems unlikely. The original large transfer has been running for a week at a rate of 8670 Kb/s, which I had calculated would not push over the 750 GB/day limit. I checked the data transfer rate of the VM I'm running on, and there were large swathes of time in the last 24 hours where I wasn't even maintaining that, so I'm pretty sure I'm not pushing up against the daily transfer limit. Additionally, I've been able to immediately restart the transfers, so unless I got lucky and hit the daily reset in the last few hours, I don't think it's the transfer limit.
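For reference, my back-of-the-envelope math was roughly the following (assuming the 8670 is in --bwlimit's default KiB/s, and treating the cap as 750 GiB/day, which may not be exactly how Google counts it):

# highest sustained rate that still fits in 750 GiB/day
echo $(( 750 * 1024 * 1024 / 86400 ))    # ~9102 KiB/s
# what 8670 KiB/s adds up to over a full day
echo $(( 8670 * 86400 / 1024 / 1024 ))   # ~714 GiB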

Largely, too, my question is more about what some of the switches actually pertain to with regard to the remotes. In particular, is a 403 rate limit always the daily transfer quota? It doesn't seem like it right now, but in that case I'm not sure what limit I did hit. I realize my post was kind of long to read through.

Thank you!

Google uses 403s for a few different things:

  • when you hit the API too hard, which rclone automatically paces for you
  • when you download too much and run out of download quota
  • when you upload too much and run out of upload quota

The small snippet of log you posted looks like the upload quota, but as the template says, a debug log would answer that question for sure, which is usually why we request it: it eliminates the guessing.

Rclone has pretty good general defaults for most use cases, so if you aren't sure about a flag, remove it and use the default; you are mixing a few flags that work against each other.

If you want to post a debug log, the messages in there would shed light on the issue.
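For example, something like this (your remotes, stripped back to mostly defaults; adjust as needed) would keep the default pacing and checker settings and still produce a debug log:

rclone copy od:photos/ sec:/ --include "*.jpg" --ignore-case --ignore-existing -P -vv --log-file rclone-debug.log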

Great, thank you for the info on the 403; that helps a lot.
I've modified my switches based on what I read in some of the other threads, and it seems to be running OK. I can't make the problem happen at will so far (and I'm not trying to trigger it either). I forgot to mention that the run the log snippet came from had been going for quite a long time, though at a pretty low average transfer rate since the files are small.

Regardless, I'll continue tweaking, and I'll capture a debug log next time if I hit it again.
Thanks!

One last tidbit: some 403s are fine, as they just mean you are pushing up against the API quotas you have. There's no harm, no foul in doing that; rclone just throttles you back a little.
