Google Drive API still banned after 48 hours

My daily upload is <80 GB/day (which is as fast as my connection allows), and my API requests have been close to zero for the last couple of days.

Yet I'm still getting 403: User Rate Limit Exceeded with a private client ID (I assume crypt remotes use the client ID of the underlying remote).

If both your requests and errors are zero, are you sure it's actually using your client ID? I doubt it. You need to reauthorize after adding a client ID, so try reauthorizing and test again.

They aren't zero, they are close to zero.

What changed in your traffic from Monday/Tuesday to Wednesday/Thursday? It's almost nil, so it doesn't seem like the client ID is really being used.

What does your command look like? I see spikes of 6 requests/s, which is pretty high. That will hit your quota.

I stopped the script that checks the size every 15 minutes.

There's still a single copy task to a crypt remote running at about 10 Mbps.

rclone -vv size remote:/

I have refreshed the client ID and still get the same issue.

Do you have a debug log?

That's an expensive operation to run every 15 minutes.

I think you are also using the word "banned" incorrectly, which creates confusion.

403s are rate limits that happen when you do too much of something.

Quotas are hit when you upload too much in a 24-hour period.

Neither of those 'ban' you.

Why are you running size every 15 minutes? size is a super expensive API call as it walks your entire GD.

You can use rclone about to get the size instead:

[felix@gemini ~]$ rclone about GD:
Used:    63.566T
Trashed: 976.726M
Other:   129.377M

If the goal is to run size, you'd be much better off using the cache backend for this or just checking on a mount.
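
As a rough sketch of what the cache backend approach could look like (assuming the underlying drive remote is called GD: as in the example above; the gcache name and the 24h info age are just placeholder choices), you would add something like this to rclone.conf:

[gcache]
type = cache
remote = GD:
info_age = 24h

Then run size (or a mount) against gcache:, and repeated listings within the info age window are served from the local cache rather than hitting the Drive API every time.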

I can reproduce rate limits with the size command the same way you are using it, but I'm not 'banned'.

2019/05/23 09:42:47 DEBUG : rclone: Version "v1.47.0" starting with parameters ["rclone" "size" "-vv" "GD:"]
2019/05/23 09:42:47 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/05/23 09:42:48 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2019/05/23 09:42:48 DEBUG : pacer: Rate limited, increasing sleep to 1.772446532s
2019/05/23 09:42:48 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2019/05/23 09:42:48 DEBUG : pacer: Rate limited, increasing sleep to 2.510052506s
2019/05/23 09:42:48 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2019/05/23 09:42:48 DEBUG : pacer: Rate limited, increasing sleep to 4.319138048s

Debug log (it stalled here for a long time):

[turkeyphant@fileserver ~]$ rclone -vv size remote:/ --fast-list
2019/05/23 14:47:20 DEBUG : rclone: Version "v1.47.0" starting with parameters ["rclone" "-vv" "size" "remote:/" "--fast-list"]
2019/05/23 14:47:20 DEBUG : Using config file from "/scripts/rclone/rclone.conf"
2019/05/23 14:47:22 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=XXXXXXXXXXXXXXX, userRateLimitExceeded)
2019/05/23 14:47:22 DEBUG : pacer: Rate limited, increasing sleep to 1.576605145s
2019/05/23 14:47:22 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=XXXXXXXXXXXXXXX, userRateLimitExceeded)
2019/05/23 14:47:22 DEBUG : pacer: Rate limited, increasing sleep to 2.195010837s
2019/05/23 14:47:22 DEBUG : pacer: Reducing sleep to 0s

That's why I've stopped it completely.

I may run it once every couple of hours eventually, but it won't run at all for the moment:

[screenshot of the API Console metrics graph]

This is showing spikes with drive.files.list even though it's only giving me 403s.

If your goal is to run size, you should use --fast-list, as that helps reduce the API hits:

[felix@gemini ~]$ rclone size -vv gcrypt: --fast-list
2019/05/23 09:45:34 DEBUG : rclone: Version "v1.47.0" starting with parameters ["rclone" "size" "-vv" "gcrypt:" "--fast-list"]
2019/05/23 09:45:34 DEBUG : Using config file from "/opt/rclone/rclone.conf"
Total objects: 26226
Total size: 62.936 TBytes (69198839790147 Bytes)
2019/05/23 09:45:47 DEBUG : 20 go routines active
2019/05/23 09:45:47 DEBUG : rclone: Version "v1.47.0" finishing with parameters ["rclone" "size" "-vv" "gcrypt:" "--fast-list"]
[felix@gemini ~]$

That's the terminology I've seen used: "ban" for both the 750 GB/24-hour upload limit and the 403 API request limit.

I'm not seeing any 429.

I will change to --fast-list but I can't even get that to run either.

403s are rate limits and not bans.

As I said, I'm just repeating the terminology I've seen elsewhere.

How can I be being rate limited when I'm making almost no requests?

I'm sure it finishes; it just depends on how many directories/files you have in your GD. My crypt has 26k objects and 927 directories, so listing that out is a lot of API hits.

[felix@gemini GD]$ find . -type d | wc -l
927
[felix@gemini GD]$ find . -type f | wc -l
26226

If I try to list out my entire GD, that is quite a different animal, as there are far more directories and files; I'm at 10 minutes so far and it's still going.

You are making too many requests per second, which is causing the rate limits. You can only make 10 per second at most per user, and size is a super heavy API command to run.

How do you avoid that? Don't run it. Why are you running a size command every 15 minutes?

As I have now said three times, I'm not.

I do want to be able to run it from time to time though.

My question is how I can run this command now. At the moment I can't run it successfully even once every two days.

You said you turned it off. I asked why you were running it to begin with, which you have not answered.

Obviously because I wanted to check the size (and didn't realise it was so expensive).

I'm not going to run that script again. I've said it's off. It's been off for two days. It's going to remain off.

It's utterly irrelevant to my question so I'd appreciate it if you could stick to the relevant data.

To remind you:

Why am I still getting 403: User Rate Limit Exceeded with a private client ID (I assume crypt remotes use the client ID of the underlying remote), and how can I avoid this when running size commands?

You have a command that you were running which generates 403 rate limits, you asked why you were getting rate limits, and the command causing those rate limits is irrelevant? Ok.

I asked because I wanted to see if there was an alternate solution.

  • You can run the size commands and let the 403s retry, as they are really benign errors.
  • You can ask Google to increase your per-user quota.
  • You can limit your transactions per second to somewhere under 10. You'd need to calculate your overall usage per second and build in some buffer to eliminate the errors completely (see the example command after this list).
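
As an example of the last option (just a sketch: --fast-list and --tpslimit are the flags already discussed in this thread, and 5 is an arbitrary value chosen to leave headroom under the 10 requests/second per-user limit):

rclone size remote:/ --fast-list --tpslimit 5 -vv

With --tpslimit rclone throttles its own requests, so the pacer should hit far fewer 403s, at the cost of the command taking longer overall.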

I'm not sure how I can be more clear.

The fact that I was running this command frequently in the past has no bearing whatsoever on my question, which is about running it once now.

It does not matter why I was running it in the past (although I can't think of any other reason to run size other than to get the size).

I fully understand a single run can cause rate limit issues (even though it did not previously).

I have not run the command for two days now, so the fact that I used to run it frequently has no bearing.

What I want to know is how to run it in general. I tried to run it today and got 403 rate limits. They do not resolve even after >30 minutes.

You have said there is no "24-hour ban" for exceeding rate limits in a given period.

  • You have said it will retry, but that's not working.
  • Google will not increase my quota.
  • I'm now trying rclone size remote:/ --fast-list --tpslimit 0.5 with no success so far.

The question is simply how to use size when running the command doesn't seem to work due to 403s.