Google Drive API still banned after 48 hours

Sure it works.

[felix@gemini ~]$ rclone size -vv GD: --fast-list
2019/05/23 09:46:28 DEBUG : rclone: Version "v1.47.0" starting with parameters ["rclone" "size" "-vv" "GD:" "--fast-list"]
2019/05/23 09:46:28 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/05/23 09:46:29 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2019/05/23 09:46:29 DEBUG : pacer: Rate limited, increasing sleep to 1.222269795s
2019/05/23 09:46:29 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2019/05/23 09:46:29 DEBUG : pacer: Rate limited, increasing sleep to 2.14151141s
2019/05/23 09:46:29 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2019/05/23 09:46:29 DEBUG : pacer: Rate limited, increasing sleep to 4.440615196s
2019/05/23 09:46:29 DEBUG : pacer: Reducing sleep to 0s
Total objects: 266487
Total size: 63.565 TBytes (69890811208237 Bytes)
2019/05/23 09:58:08 DEBUG : 4 go routines active
2019/05/23 09:58:08 DEBUG : rclone: Version "v1.47.0" finishing with parameters ["rclone" "size" "-vv" "GD:" "--fast-list"]

403 rate-limit errors happen when you make too many requests per second.
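
You can see rclone's pacer absorbing those 403s in your log above: each retry roughly doubles the sleep, and the first successful call drops it back down.

1.222s -> 2.142s -> 4.441s   (each 403 roughly doubles the sleep)
-> 0s                        (a successful call resets it)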

Run your command with -vv and share the log.

The issue can be caused by your account accumulating more objects over time; if you had fewer files, you wouldn't hit rate limiting.

It works for you.

It doesn't work for me.

I've already shared the log.

It hasn't produced any output after several minutes.

If I'm requesting too much per second, I'm unsure how to reduce that other than with --tpslimit, which doesn't produce any output either. What do I need to do?

Unfortunately there's no log of the full command running, as you only shared a bit of it:

[screenshot: partial log excerpt]

If you do the math, you can make 10 requests per second, as that's the only documented limit from Google.
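
For reference, that 10/s comes from the default per-user quota in the API Console (linked in your log), which, if I remember the console default right, was 1,000 queries per 100 seconds per user:

1,000 queries / 100 seconds = 10 queries per second (per user)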

The size command on my GD takes 13 minutes for me. If you limit the tps down, I'd guess it would take 20-40 minutes.


It may take a while. It's an expensive command.

Thank you, that's useful information.

I'm aware the limits are 10/s. I've made 8947 total (2.49/s average) over the last hour. Over the last 45 minutes I've maxed out at 3.78/s.
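
For the record, that average is:

8,947 requests / 3,600 seconds ≈ 2.49 requests/second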

This is much less than 10/s so why am I still being limited?

On average. :slight_smile:

According to Google's data, I've maxed out at 3.78/s.

The other graph, which averages over rolling 1-minute periods, shows a bit more, but the sleep is about 16 seconds on the command that's still running.

On average. It is a sample.
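
To illustrate with made-up numbers (not your actual traffic), an average hides bursts:

60 requests in 5 seconds       = 12/s burst -> 403s
same 60 over a 1-minute window =  1/s average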

So if I use --tpslimit 5 I should be fine?

Why do you care about the 403s? They are benign. I'd leave it at the default. But sure you can decrease the tps and it will just take longer.
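
If you do want to cap it anyway, something like this should work (GD: as in your log; 5 is just an arbitrary value under the quota):

rclone size GD: --fast-list --tpslimit 5 -vv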

Btw, mine took almost 20 minutes to come back, and I have higher per-second limits than most people (I have more than double the number of objects as @Animosity022). So if you have a LOT of little files, expect to wait longer.

rclone size xxx: --fast-list -vv
2019/05/23 10:14:20 DEBUG : rclone: Version "v1.47.0-019-g3d475dc0-beta" starting with parameters ["rclone" "size" "xxx:" "--fast-list" "-vv"]
2019/05/23 10:14:20 DEBUG : Using config file from "/home/xxxx/.rclone.conf"
2019/05/23 10:14:20 DEBUG : xxx: Loaded invalid token from config file - ignoring
2019/05/23 10:14:21 DEBUG : xxx: Saved new token in config file
Total objects: 682719
Total size: 6.442 TBytes (7082535027913 Bytes)
2019/05/23 10:33:59 DEBUG : 19 go routines active
2019/05/23 10:33:59 DEBUG : rclone: Version "v1.47.0-019-g3d475dc0-beta" finishing with parameters ["rclone" "size" "xxx:" "--fast-list" "-vv"]
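
The timestamps in that log bear out the "almost 20 minutes":

10:14:20 -> 10:33:59 = 19 minutes 39 seconds for 682,719 objects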


Okay, it took 40 minutes, but it finished. The debug log is 2,466 lines long.

This is with only 1,242,149 objects.

That is about double my objects and double the time, so that sounds right.
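
Roughly, from the numbers in this thread:

682,719 objects   ≈ 20 minutes
1,242,149 objects ≈ 40 minutes  (about double the objects, about double the time)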

So to summarize for everyone:

  • You aren't and weren't banned for any of this
  • You hit rate limiting due to the number of objects and the size command
  • Those rate-limit errors are benign and retried automatically
  • ~250k objects take about 10 minutes
  • 1.2m objects take about 40 minutes

If you want to keep using the size command, the cache backend would be a great fit for this: the first run still takes a long time, but after that it keeps a local database of your listing (refreshed depending on your object change rate), so you don't make all those API calls every time. The caveat is that you can only use one cache backend per process. A sketch of what that could look like is below.
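
A minimal sketch of that in rclone.conf; your existing remote is GD: as in the log, while the GDcache name and the tuning values are just placeholders to adjust:

[GDcache]
type = cache
remote = GD:
info_age = 1d
chunk_size = 10M
chunk_total_size = 10G

Then point the command at the wrapper:

rclone size GDcache: -vv

After the first full listing, repeat runs read from the local db until info_age expires.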

I've said multiple times I understand it's not a "ban".

But you can see why I used that word. You yourself have said:

You probably locked your account out, which is the 403 errors.

which sounds quite a lot like a ban.

And @ncw says:

There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache

During testing, I experienced a lot of bans with the remotes in this order

I'm not sure what he means by "bans" here.

Thanks for the tip about caching. I'll look into it.

That's a post from 2017 when I used the wrong term.

That's from @remus.bunduc, not @ncw, so that's also incorrect wording, and I can submit a pull request to fix it. He actually means he got a "download quota exceeded" error, which is a different error and also not a ban.

403s come in a few ways:

  • Rate limits, which we covered.
  • Quota items:
      • The daily upload limit for regular uploads and server-side copies
      • The daily download limit for files

Banned means you lose access to your entire account, e.g. for hosting pirated content.
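
From experience, a rough mapping (the exact error strings vary, so treat this as a guide rather than Google's official taxonomy):

403 userRateLimitExceeded   -> per-second rate limit; the pacer retries it, benign
403 downloadQuotaExceeded   -> the daily download quota for a file; clears on its own
403 on uploads/copies       -> the daily upload quota has been hit; resets daily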


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.