When I enable extra verbose (-vv) I see
“low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)”
Happens constantly. Is there a setting to help with that? I’m using the latest beta. --tpslimit=1 doesn’t seem to have any impact. This happens across a number of completely separate accounts.
You should get your own API key.
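For reference, a drive remote using your own client ID looks something like this in rclone.conf (the remote name and all values here are placeholders, not taken from the thread):

```ini
[gdrive]
type = drive
client_id = 1234567890-example.apps.googleusercontent.com
client_secret = EXAMPLE_SECRET
scope = drive
# token is filled in by rclone during the OAuth flow
token = {"access_token":"..."}
```

You can also pass the key per invocation with the --drive-client-id and --drive-client-secret flags instead of storing it in the config file.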
It happens with or without my own key.
When you added your own key, did you reauthorize? What does your command look like? How many checkers/transfers?
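If you didn't reauthorize, rclone would still be using the token issued against the old key. Something like this forces a fresh OAuth flow (the remote name "gdrive:" is a placeholder for your own remote):

```shell
rclone config reconnect gdrive:
```

After that, new requests should show up under your own project in the Google API dashboard, which is a quick way to confirm the key is actually being used.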
rclone -vv --progress --transfers=4 --fast-list --checkers=32 sync
That’s a pretty high number of checkers for Google Drive. The default is 8, and Drive only allows about 3 TPS anyway. When you look in your Google API dashboard, do you see the API traffic hits?
Try the default checkers.
Also, have you uploaded more than 750 GB in the past 24 hours? If so, you could be upload-banned until tomorrow.
It’s not the upload quota. I had barely uploaded to these accounts in the week before I noticed this happening, and it happens on an account I almost never use.
It happens with the default checkers as well, and also with --tpslimit=1, --checkers=1, and --transfers=1.
Yes, the dashboard shows ~10 2xx/s and 0 - 2 4xx/s in my test a few minutes ago.
edit: On the account with a separate client ID it happens less often, but still on occasion. On another account without a client ID, with everything set to 1, it still happens.
Hmm, that’s odd, especially if you’re only seeing a few hits on the API dashboard during your test.
Sounds like a problem with your key. I wonder if it’s worth recreating the key under another account. With the new beta you shouldn’t really even need --tpslimit unless you’re running multiple commands at once.
This all seemed to start maybe two weeks ago: increased error rates across 5 different, unrelated accounts. I tried changing my IP address, since that seems to be the only thing they have in common, but it had no impact.
Dropping everything to 1 seems to have stabilized the account with the client ID; the others show the same rate of issues. If I use all defaults, the client-ID account exhibits the issue again.
My general script for nightly upload looks like:
/usr/bin/rclone move /data/local/ gcrypt: -P --checkers 3 --log-file /home/felix/logs/upload.log -v --tpslimit 3 --transfers 3 --drive-chunk-size 32M --exclude-from /home/felix/scripts/excludes --delete-empty-src-dirs
With the standard Google quotas, I try to limit myself to no more than 10 transactions per second, so for me this is a nice balance of speed and upload, as I’m usually uploading media files. The defaults for rclone are a bit high for Google Drive.
Are you doing a lot of small files as well, or what’s the mix of files you’re sending?