If both your requests and errors are zero, are you sure it's actually using your client ID? I doubt it. You need to reauthorize when you add a new client ID, so reauthorize and try again.
If the goal is just to get the size, you'd be much better off using the cache backend for this or simply checking on a mount.
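One way to use the cache backend for this is to wrap the existing Drive remote so that listings are served from the local cache instead of hitting the API on every run. A minimal rclone.conf sketch; the remote names `gcache` and `GD:` are examples, so substitute your own, and `info_age` is just one reasonable setting:

```ini
[gcache]
type = cache
remote = GD:
# How long cached directory listings stay valid before rclone
# re-queries the Drive API.
info_age = 2d
```

Running `rclone size gcache:` would then only hit the API when the cached listings have expired.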
I can reproduce rate limits with the size command the same way you are using it, but I'm not 'banned'.
2019/05/23 09:42:47 DEBUG : rclone: Version "v1.47.0" starting with parameters ["rclone" "size" "-vv" "GD:"]
2019/05/23 09:42:47 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/05/23 09:42:48 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2019/05/23 09:42:48 DEBUG : pacer: Rate limited, increasing sleep to 1.772446532s
2019/05/23 09:42:48 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2019/05/23 09:42:48 DEBUG : pacer: Rate limited, increasing sleep to 2.510052506s
2019/05/23 09:42:48 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2019/05/23 09:42:48 DEBUG : pacer: Rate limited, increasing sleep to 4.319138048s
[turkeyphant@fileserver ~]$ rclone -vv size remote:/ --fast-list
2019/05/23 14:47:20 DEBUG : rclone: Version "v1.47.0" starting with parameters ["rclone" "-vv" "size" "remote:/" "--fast-list"]
2019/05/23 14:47:20 DEBUG : Using config file from "/scripts/rclone/rclone.conf"
2019/05/23 14:47:22 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=XXXXXXXXXXXXXXX, userRateLimitExceeded)
2019/05/23 14:47:22 DEBUG : pacer: Rate limited, increasing sleep to 1.576605145s
2019/05/23 14:47:22 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=XXXXXXXXXXXXXXX, userRateLimitExceeded)
2019/05/23 14:47:22 DEBUG : pacer: Rate limited, increasing sleep to 2.195010837s
2019/05/23 14:47:22 DEBUG : pacer: Reducing sleep to 0s
That's why I've stopped it completely.
I will maybe run it once every couple of hours, but it won't run at all at the moment:
The API console is showing spikes in drive.files.list even though it's only returning 403s.
I'm sure it finishes, it just depends on how many directories/files you have in your GD. My crypt has 26k objects and 927 directories so listing that out is a lot of API hits.
[felix@gemini GD]$ find . -type d | wc -l
927
[felix@gemini GD]$ find . -type f | wc -l
26226
If I try to list out my entire GD, that's quite a different animal, as there are a large number of directories and files; I'm at 10 minutes so far and still going.
You are making too many requests per second, which is causing the rate limits. You can make at most 10 requests per second per user, and size is a very API-heavy command to run.
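To put a rough number on how heavy that is: size has to list every directory, and each listing call counts against the per-user quota. A back-of-the-envelope sketch, assuming roughly one drive.files.list call per directory (the real count varies with page size and nesting):

```python
# Lower bound on wall-clock time for a full "rclone size" listing
# when throttled to the per-user request quota.
# Assumption: roughly one drive.files.list call per directory.

def min_runtime_seconds(directories: int, requests_per_second: float) -> float:
    """Minimum time to issue one list call per directory at the given rate."""
    return directories / requests_per_second

# Example: the 927-directory crypt quoted above, at the 10 req/s cap.
print(min_runtime_seconds(927, 10))  # over 90 seconds even with zero retries
```

Any 403 retries and pacer backoff only add to that floor, which is why a single run can still take minutes.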
How do you avoid that? Don't run it. Why are you running a size command every 15 minutes?
Obviously because I wanted to check the size (and didn't realise it was so expensive).
I'm not going to run that script again. I've said it's off. It's been off for two days. It's going to remain off.
It's utterly irrelevant to my question so I'd appreciate it if you could stick to the relevant data.
To remind you:
Why am I still getting 403: User Rate Limit Exceeded with a private client ID (I assume crypts use the client ID of the underlying encrypted remote), and how can I avoid this when running size commands?
You had a command running that was generating 403 rate limits, you asked why you were getting rate limits, and the command causing them is irrelevant? OK.
I asked because I wanted to see if there was an alternate solution.
You can run the size commands and let the 403s retry as they are really benign errors.
You can ask Google to increase your per user quota.
You can limit your transactions per second to under 10. You'd need to calculate your overall usage per second and build in some buffer to eliminate the 403s completely.
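For reference, the pacer behaviour visible in the logs above (the sleep grows on each 403 and drops back once requests succeed) is plain exponential backoff. A toy sketch of the same idea in Python, not rclone's actual code, with made-up default delays:

```python
import random

class Pacer:
    """Toy exponential-backoff pacer, loosely modelled on the log output above."""

    def __init__(self, min_sleep: float = 0.01, max_sleep: float = 16.0):
        self.min_sleep = min_sleep
        self.max_sleep = max_sleep
        self.sleep = min_sleep

    def on_rate_limit(self) -> float:
        # Double the delay (with a little jitter) after each 403, up to a cap.
        self.sleep = min(self.sleep * 2 * (1 + random.random() * 0.1),
                         self.max_sleep)
        return self.sleep

    def on_success(self) -> None:
        # The "Reducing sleep to 0s" lines: reset once a request succeeds.
        self.sleep = self.min_sleep

p = Pacer()
for _ in range(3):
    p.on_rate_limit()  # delay roughly doubles each time
p.on_success()         # back to the minimum after a successful request
```

This is why the 403s are benign on their own: each retry just waits a bit longer, and the delay resets as soon as requests start succeeding again.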
The fact I was running this command frequently in the past has no bearing whatsoever on my question which is about running it once now.
It does not matter why I was running it in the past (although I can't think of any other reason to run size other than to get the size).
I fully understand a single run can cause rate limit issues (even though it did not previously).
I have not run the command for two days now so the fact I used to run it frequently has no bearing.
What I want to know is how to run it at all. I tried to run it today and got 403 rate limits, and they do not resolve even after more than 30 minutes.
You have said there is no "24-hour ban" for exceeding rate limits in a given period.
You have said it will retry but that's not working.
Google will not increase my quota.
I'm now trying rclone size remote:/ --fast-list --tpslimit 0.5 with no success so far.
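At 0.5 transactions per second, "no success so far" may just be the expected runtime: even a remote with only ~927 directories (the example quoted earlier in the thread) needs at least that many list calls, which at that rate takes over half an hour before size can print anything. A quick sanity check:

```python
# Minimum wall-clock time for a full listing at a given --tpslimit,
# assuming roughly one drive.files.list call per directory.
directories = 927  # example figure from earlier in the thread
tpslimit = 0.5
minutes = directories / tpslimit / 60
print(round(minutes, 1))  # about 31 minutes minimum, with zero retries
```

A larger remote, or any retries on top, pushes that well past an hour, so the command may simply still be working rather than stuck.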
The question is simply how to use size when running the command doesn't seem to work due to 403s.