Google Drive - Rate Limit Exceeded with Own API Key

What is the problem you are having with rclone?

I'm using 'rclone sync' to synchronize files from a folder that's been shared with me to another folder that I own. I followed the documentation on how to create my own OAuth key, as well as reviewed several forum posts. I'm using G Suite for Business as well as Google Cloud Platform. I created a new project in Google Cloud and generated a new OAuth Client ID. I also created an API Key. When I ran 'rclone config', I was prompted to enter the OAuth Client ID and Secret, but was never asked to provide the API Key. I even made sure I went through the Advanced settings wizard.

I checked the Quotas page and see I'm allotted 1,000,000,000 Queries Per Day, 1,000 Queries per 100 seconds per user, and 10,000 Queries per second. According to the query logs, I have yet to exceed 14,258 Queries Per Day and 40.31 Queries Per 100 Seconds.

Is it possible the rate-limiting error is coming from the Google Drive account for the owner of the source folder? Is there any way for me to tell if that's the case?

What is your rclone version (output from rclone version)

Version 1.51

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu Linux 18.04.3 - 64-bit

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone -vv sync My-GoogleDrive:/Folder1 W4JEW-GoogleDrive:/Folder2 --progress

Folder1 is a folder of files that were shared with me, where I selected 'Add to My Drive'.

Folder2 is a new, empty folder that I created to hold the synced files.

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

2019-11-16 23:40:49 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=38495540619, userRateLimitExceeded)

Just one other point to add to my original post: I checked the total size of the source folder. There are 2,011 objects and the total size of the folder is 1.255 terabytes.

I heard there's a limit of 750 GB of data transferred per day. I just checked the size of the destination folder at the point where the 403 rate-limit exceeded errors started, and I see it's Total objects: 1434 Total size: 749.901 GBytes (805200568939 Bytes). That seems to confirm what I was told.

Those are normal errors if you are pushing the API hard, and nothing to worry about.

You can use --fast-list as well, as that should speed things up.
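For reference, adding it to the command from your original post would look something like this:

rclone -vv sync My-GoogleDrive:/Folder1 W4JEW-GoogleDrive:/Folder2 --fast-list --progress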

Occasional 403s are the API complaining about being hit a little too fast with too many requests. Rclone's drive pacer will pick up on that and throttle down slightly to keep under the limit. You will see some of these during normal operation, and it is not a problem, as rclone will just retry any refused requests.

A 403 can also mean you hit your data quota though. That's 750 GB/day up, or 10 TB/day down.
When this happens you can't do much aside from waiting overnight for it to reset. Hitting the upload cap doesn't affect your download quota, though. It seems like this is the limit you hit here.

Yep - that's exactly what I think happened here. The fact that the destination folder was right at ~749 GB leads me to believe that the 750 GB limit is the culprit.

The telemetry on the Google Cloud API side shows everything else is well within the limits.

Given that this is the largest folder I need to clone, I'm going to batch the copies and do half of the folders one day, then the other half the next and call it quits.

I'm so glad I found rclone and was able to cancel my subscription to MultCloud!

Thanks for getting back to me!

403s are because you hit the limits: maybe you transferred more than 100 GB cloud to cloud, or uploaded more than 750 GB.

You may also find that the --max-transfer flag is useful to limit the uploaded amount in a given operation.

For example, --max-transfer 500G would terminate at 500 GB and leave you 250 GB for any other daily needs. It can be a nice way to schedule a large amount of data to be moved once per day without interfering with your normal usage or needing manual attention (you can just schedule it as a once-a-day task in cron or Windows Task Scheduler).
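As a rough sketch, a daily cron entry along these lines would re-run the sync each night and stop once the cap is reached (the 500G cap, the 03:00 start time and the reuse of the remote names from the original post are just examples, not a recommendation):

0 3 * * * rclone sync My-GoogleDrive:/Folder1 W4JEW-GoogleDrive:/Folder2 --max-transfer 500G --fast-list

Each run re-checks what is already at the destination and continues with whatever is left, so after a couple of nights the whole folder ends up synced.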

I believe the cloud-to-cloud limit (on Google anyway) was based on a misunderstanding of a bug in rclone. Since that got fixed/improved a few versions back, I have not had any issues transferring the whole 750 GB/day server-side. It does share the same quota as regular uploads though.


Or because you didn't use --fast-list and requested too many API hits per period. Since the OP posted a single log line, it could be that as well. Would need to see more lines to validate.

Yes. Today I copied 430 GB cloud to cloud on Google Drive with zero errors. I'm using rclone 1.50.

Yep - as long as I'm not trying to copy more than 750 GB within the course of one day, I don't see any errors.

As mentioned earlier, it looks like the reason I'm running into this issue is that the folder I'm trying to run 'rclone sync' on is 1.2 TB.

To work around this, I ran 'rclone lsd My-GDrive:/Folder1' to get a top level list of subdirectories, then I'm going to create a couple of jobs that sync the content over the course of 2 days. This is the only folder that I need to sync that's in excess of 750 GB, so I think this should work nicely.
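Something along these lines, with made-up subfolder names purely for illustration:

Day 1: rclone sync My-GoogleDrive:/Folder1/SubfolderA W4JEW-GoogleDrive:/Folder2/SubfolderA --progress

Day 2: rclone sync My-GoogleDrive:/Folder1/SubfolderB W4JEW-GoogleDrive:/Folder2/SubfolderB --progress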

I like the fact that there's a --max-transfer option, but the content within the folder doesn't seem to get synced in any particular order, so there's not really a good way for me to know what was and wasn't synced, or where to pick up on the second day.

For this reason, I'm just going to create the two jobs so I know where I left off between day 1 and day 2.

Thanks for the advice everyone! Very supportive group! I love it!

You can do it that way for sure, but why worry about divvying up the data into logical units under 750GB when you can just use --max-transfer set to some reasonable daily limit like I suggested? Then it is pretty trivial to automate the whole thing with a daily script.

And yea, the community is awesome. NCW is also one of the nicest and most helpful devs I've had the pleasure to interact with.

Well, being new to rclone, the one question I would have there is: does rclone keep track of what's already been copied vs. what remains?

It doesn't look like rclone processes the sub-directories and files in any particular order, so my concern is that it wouldn't pick up where it left off.

If rclone does keep track of that, then absolutely - that's a great way to go!

Yes and no.
Rclone does not keep track of what has been uploaded in the sense of saving that information somewhere. However, whenever you copy or sync, rclone will first list the files (and get their names, locations, sizes and modtimes). It then uses this information to quickly decide which files need to be transferred or updated, which ones only need a new timestamp, and which ones can be skipped entirely because they are already identical.

If you abort a transfer, this check needs to happen again, but it typically takes a trivial amount of time compared to the actual transfers, so in most cases you only really lose a few seconds of efficiency. (--fast-list will also make checks on very large collections of folders much quicker.) Any half-finished files will also need to be redone from the beginning, but unless you upload gargantuan files this is rarely much of an issue to think about. If you use the --progress flag to see the status of the ongoing transfer (which is generally a good idea), you can see these already-uploaded files being counted as "checks" rather than transfers. This just means rclone did not need to re-upload them in order to make them identical to the files you tried to copy.
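If you want to preview that decision-making without transferring anything, a dry run will log what rclone would copy versus skip; for example, something like:

rclone sync My-GoogleDrive:/Folder1 W4JEW-GoogleDrive:/Folder2 --dry-run -v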

So in short, rclone will handle this for you, and probably better than you could organize it manually TBH :slight_smile: Feel completely free to just start transferring the entire collection, abort it at any time, and simply resume later. There is little to no penalty for doing this. Or, as I said, let rclone auto-abort at, for example, 600G and run that operation once a day until you are done.

Do note that --max-transfer only applies the limit per operation though, so if you stop and restart, you will end up transferring more than the limit you set. If needed, you can also set a --bwlimit to cap the speed of the transfer so it doesn't interfere with your other internet use during the day.
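Putting those two together, a capped and bandwidth-limited daily run might look roughly like this (the 600G and 8M values are just placeholder numbers to adjust to your own connection and quota):

rclone sync My-GoogleDrive:/Folder1 W4JEW-GoogleDrive:/Folder2 --max-transfer 600G --bwlimit 8M --fast-list --progress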
