[User Rate Limit Exceeded] Is this error normal? Would it have any effect?

What's your quota page look like? Is that the only thing using the API?

Do you have 20,000 per 100 seconds on the quota page?

[screenshot: Google API quota page]


Yeah, rclone is the only thing using it.

And your rclone.conf is missing, but I'd imagine you have the client ID / secret in there and you connected with that, or do you have something else set up?

That's why we ask for the rclone.conf so we have all the info up front, but you deleted that from the template for some reason.

rclone.conf

[gdrive]
type = drive
client_id = xxxxxxxxxxxxxxxxxx
client_secret = xxxxxxxxxxxxxxxx
scope = drive
token = xxxxxxxxxxxxxxxxxx
root_folder_id = xxxxxxxxxx

[gcrypt]
type = crypt
remote = gdrive:/GDrive/crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxx

Yeah, I'm using my own client ID / secret

And you added that in when you configured it initially and not after?

Yes. When I first set it up. Before I had

--tpslimit 5 \
--tpslimit-burst 1 \

but I removed that today, after looking at your GitHub and updating to

--drive-pacer-min-sleep 5ms \
--drive-pacer-burst 2000 \

and that's when I noticed the error. I also restarted rclone via systemctl.

Those are just saying you are hitting the API hard and to 'slow down'. Rclone handles that just fine, but depending on your setup, you may need to tweak them a bit.

My thought process is that I want to hit the API as hard as I can without triggering any slowdowns, since more slowdowns cause delays, so finding the sweet spot is key.

A burst of 1 would be awful, as that gimps the API severely; the default is 100, based on the old transactions-per-second value.

Try doubling and use 10ms / 1000 and see how that plays.
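Rough back-of-the-envelope math on what those flags mean (my sketch, not anything rclone prints; the gcrypt: remote and /mnt/gcrypt mount point are just placeholders based on the config above):

# --drive-pacer-min-sleep 10ms  -> at most one request per 10ms, ~100 req/s sustained
# --drive-pacer-burst 1000      -> up to 1000 calls can go out without sleeping first
# versus --tpslimit 5 --tpslimit-burst 1, which caps everything at ~5 req/s, no bursting
rclone mount gcrypt: /mnt/gcrypt \
  --drive-pacer-min-sleep 10ms \
  --drive-pacer-burst 1000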

If the slowdowns are very few and far between, I wouldn't care. Yours seem high though (imo) and would cause some slowdowns:

That's waiting 2 seconds to try again, which is a bit of time.

So, using tpslimit and tpslimit-burst is probably not a good idea in my case, since it will severely limit the API, is that correct?

EDIT:
I have updated the

--drive-pacer-min-sleep 10ms \
--drive-pacer-burst 1000 \

and will slowly increase them if I don't see any errors. Is that correct thinking?

Edit 2:
I am still getting the same errors:

2021/08/12 16:41:13 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console:, userRateLimitExceeded)
2021/08/12 16:41:13 DEBUG : pacer: Rate limited, increasing sleep to 1.935994549s
2021/08/12 16:41:13 DEBUG : pacer: Reducing sleep to 0s

I played around with --drive-pacer-min-sleep and --drive-pacer-burst and I still get the same errors. I re-added
--tpslimit 5 and the errors went away. So I'm planning on playing with the values a little to see what happens. Do you know the default values?

It really does not feel like the client ID/secret are matching up.
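One quick sanity check (just a sketch, assuming the remote is named gdrive as in the config above): print the stored settings and confirm the client_id matches the OAuth client shown in your Google Cloud console. If the ID was added after the token was created, the two can be out of sync, and re-authorising lines them back up.

# Show the remote's stored settings, including client_id
rclone config show gdrive
# Re-authorise so the token matches the configured client
rclone config reconnect gdrive: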

The defaults are here:

https://rclone.org/drive/#drive-pacer-min-sleep
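You can also read the defaults straight out of your local build (output layout varies a bit between versions):

rclone help flags | grep drive-pacer
# At the time of writing the docs list a 100ms min sleep and a burst of 100.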

24 hours on a chart gets me:

So I get some errors but I do tweak it pretty high so I'm not too worried.

Here is mine:


Note, I've been playing with it for like half a day, so the numbers are a little higher than usual.

Here is the past hour with the same flags as on your GitHub, with the addition of

--tpslimit 8

I think there is a 10 transactions per second rate limit, as I start getting errors when I set tpslimit to 10. Are queries and transactions per second different? Because in theory you would have 200 queries per second, which means 200 transactions per second?

How do I test/check this?

A query, an API hit, and a transaction are really all the same thing.

Google doesn't document the per-second limit, only the per-100-seconds one.

Setting the tps limit really makes the other parameters useless, since it shuts down all the ability to burst and whatnot.
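If you want to probe where the short-window limit kicks in, a sweep like this is one way (a sketch; gdrive: and the recursive listing are assumptions, and the counts are only rough since debug output varies):

# 20,000 queries / 100s from the quota page averages to 200 QPS, but Google
# also enforces a shorter, undocumented per-second window. Count the 403s
# at increasing --tpslimit values to see roughly where they start.
for tps in 5 8 10 15 20; do
  echo "--tpslimit $tps:"
  rclone lsf -R gdrive: --tpslimit "$tps" -vv 2>&1 | grep -c userRateLimitExceeded
done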

Your numbers are really, really high for that period of time.

@VBB - I wonder what your numbers look like? I am really not a big user so it's hard for me to push numbers that high.

The other thing that I think might be causing you an issue is the small chunk size. I did play around with that for scans and it seemed helpful, but if you remove it, I think the API hits will drop as well. The downside is a bit more wasted bandwidth, but for me, I really don't care. (It's not a huge amount of waste, but some.)
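To put rough numbers on that (my illustration, not measured; chunks double per request by default unless --vfs-read-chunk-size-limit caps them, and the remote/mount point are placeholders):

# Each ranged read starts at --vfs-read-chunk-size and doubles per request,
# so for one ~1GB sequential read:
#   1M start:  1+2+4+...+512M  -> ~10 API hits
#   32M start: 32+64+...+512M  -> ~5 API hits
# Bigger starting chunks mean fewer hits, but more wasted bytes when a
# scanner only reads the head of a file.
rclone mount gcrypt: /mnt/gcrypt --vfs-read-chunk-size 32M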


I've never looked at these before, but here's my last 24 hours. This is without running any scans, as I haven't been able to run Plex in three days :sob: I have no idea why that error rate is so high, but I haven't noticed anything unusual:

Here is the last 30 days, with a Plex scan once a day:

For reference, the media being scanned is roughly 800TB.

EDIT: Now that I think about it, these errors are mostly drive.files.create, which makes sense, since my normal scan mount is read-only. My mount to make changes is read/write, but without any sort of caching, so I get rclone errors when creating nfo files, for example. Those are warnings only, though, and the files are created just fine.


For completeness:

Most of the month I used --drive-pacer-min-sleep 10ms --drive-pacer-burst 1000, and a few days ago I switched to --drive-pacer-min-sleep 5ms --drive-pacer-burst 2000.

Also, since I did a lot of scanning in new libraries lately, I used --vfs-read-chunk-size 1M for the entire time.

My mount command:

rclone mount --attr-timeout 5000h --dir-cache-time 5000h --drive-pacer-burst 2000 --drive-pacer-min-sleep 5ms --poll-interval 0 --rc --read-only --user-agent ******* --vfs-read-chunk-size 1M -v


Yep, you are pushing quite a lot more traffic than I am. I'd say the limits are working fairly well in your setup. I'm still puzzled by the OP, as we can see the graphs, so that makes me feel the client/secret are there; the numbers being rate limited are just so low.

Oh, for sure. Looking at the metrics, I don't see anything alarming, though. Seems to work well within G's API limits :slight_smile:

So, if I'm understanding correctly, using @Animosity022's config from GitHub it's normal to have some errors, especially when starting rclone, as "drive.files.list" is used a lot.

Having errors in "drive.files.list" won't skip listing/showing the files in the mount, right? So Plex/Jellyfin will still pick up the files even though an error is shown?

Also, I'm way under 800TB so I don't think it's a file limit.

No, the pacer errors just retry, so no real harm, but it makes things a little slower.


Thank you @Animosity022 @VBB. I will see how things pan out over the next few days as I set up Jellyfin again.
