User Rate Limit Exceeded

I’m getting these errors and I’m not sure whether they are the normal userRateLimitExceeded messages or something else. I know that 403s are normal, but I haven’t seen the error message explanation before.

pacer: Rate limited, sleeping for 1.550379682s (1 consecutive low level retries)
2018/07/23 10:46:25 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console:

This is on a server with a VFS chunked mount. With --tpslimit 5 and --tpslimit-burst 5 I should be well within the per-user and global API quota limits (10 requests per second per user and 100 per second globally).
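For reference, the mount is started along these lines; a sketch, where the remote name, mount point, and chunk values are placeholders rather than my exact command:

  rclone mount gdrive: /mnt/gdrive \
    --vfs-read-chunk-size 64M \
    --vfs-read-chunk-size-limit 1G \
    --tpslimit 5 --tpslimit-burst 5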

Can I just ignore these errors or is there something wrong?

It generally means it was requesting too fast. A few 403s really isn’t much to worry about, as rclone will retry them later.
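The “low level retry 1/10” in your log is that mechanism at work; the count is controlled by a standard rclone flag if you ever want to change it (10 is the default; the remote and mount point here are placeholders):

  rclone mount gdrive: /mnt/gdrive --low-level-retries 10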

What’s your mount, out of curiosity, and are you able to tell how many streams you had going when that happened?

Also, what do your API hits look like when that happens?

Yes, I would think it’s rclone’s exponential backoff; I’m just not sure because of the extra text. I remember that before it only showed the first part, without the explanation.

It was during a scan of new media (plex autoscan); every now and then it would give this error. Errors show in the API overview at around 2% per VPS (I have 5).

Using plex autoscan, I’d guess your API usage would be pretty light.

If you have 6 servers though, how many streams are you serving? I would think that’s more the culprit than the scan; the scans are pretty cheap API-wise.

I’m getting a lot of 403’s today. Or I should say: I used Google Drive a lot less today than I have in the past, and I still seem to have gotten a lot of 403’s. It started back up again after an hour or two of being dead, so I guess I wasn’t banned or anything; maybe they were just busy an hour or two ago, or whenever it was exactly. I was semi-AFK and didn’t notice.

One thing you could always try is bumping up the chunk size. I did some tests with a 256M chunk size, and for playback you are ‘mostly’ going to read files sequentially anyway.

That may reduce the API hits, as you are grabbing the files in big chunks and just reading.
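As a sketch (remote name and mount point are placeholders; both flags are standard rclone VFS options):

  rclone mount gdrive: /mnt/gdrive \
    --vfs-read-chunk-size 256M \
    --vfs-read-chunk-size-limit off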

It’s not the streams; it’s really something to do with the scans. I tested it over the last 2 days, and yesterday I had a ban again. All test servers were in debug and were accessing all files (reading chunks and repeating 5 times per file); mind that those files didn’t change and weren’t new.

I’ll settle on the cache backend for now, even though it’s not as stable as VFS.
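For reference, a minimal cache remote sketch in rclone.conf; the names and values here are placeholders, and the cache backend simply wraps the existing Drive remote:

  [gcache]
  type = cache
  remote = gdrive:
  chunk_size = 10M
  info_age = 1d
  chunk_total_size = 10G

You then mount gcache: instead of gdrive:.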

So that’s strange.

What does your library size / analyze output show if you run this on the systems that are scanning?
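The stats script is essentially a handful of counts against the Plex library database; roughly queries like these, assuming the usual Linux database path:

  DB="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
  sqlite3 "$DB" "SELECT COUNT(*) FROM media_parts;"                                  # files in library
  sqlite3 "$DB" "SELECT COUNT(*) FROM media_parts WHERE deleted_at IS NOT NULL;"     # media_parts marked as deleted
  sqlite3 "$DB" "SELECT COUNT(*) FROM metadata_items WHERE deleted_at IS NOT NULL;"  # metadata_items marked as deleted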

21510 files in library
0 files missing analyzation info

Here’s a clip of mine; the stats output is at the end.

29.07.2018 12:55:29 PLEX LIBRARY STATS
Media items in Libraries
Library = Movies
Items = 1876

Library = TV Shows
Items = 9730

Time to watch
Library = Movies
Minutes = 211200
Hours = 3520
Days = 146

Library = TV Shows
Minutes = 390324
Hours = 6505
Days = 271

32561 files in library
165 files missing analyzation info
0 media_parts marked as deleted
0 metadata_items marked as deleted
0 directories marked as deleted
11552 files missing deep analyzation info.

Nothing special compared to others with bigger libraries.

Is there part of the library that is cut off? You’ve got 1876 movies + 9730 TV episodes = 11,606 items, but 32,561 files in the library, so roughly 21,000 files are unaccounted for. My numbers all add up for the most part.

Do you have music or something else going on? 165 files missing analysis info isn’t bad, but out of the 32k total, 11.5k are missing deep analysis info. Do you have the deep analysis task on?

No deep analysis active. I think the extra files are all the .srt subtitles that reside next to the media files.
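That’s easy enough to sanity-check from the mount; something like this, with the mount point as a placeholder:

  find /mnt/gdrive -name '*.srt' | wc -l

If the count is in the ~20k range, that would account for the gap.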