Questions & Improvements for PMS Automation

Hey Chaps!

Been 7 months since I was last here. That's 7 months of everything running like clockwork. I recently noticed that my video collection wasn't being updated and deleted as it should be. I did a manual command run, and within 5 minutes I got "Error 403: User Rate Limit Exceeded". Cool, but which limit did I hit? There are 3 in total:

Quota Name                          Limit
Queries per day                     1,000,000,000
Queries per 100 seconds per user    1,000
Queries per 100 seconds             10,000

Radarr/Sonarr download metadata; however, I'm fairly certain Plex handles that on its own, so I've got to ask: should I keep the metadata or get rid of it? If I wanted to exclude multiple extensions, such as .nfo/.jpg/.png, how would I structure the arguments?

My next question is: has anyone set up a script that uses multiple workers to upload with?

Here's the command I'm using;
rclone/rclone copy --transfers=4 --checkers=20 -v /media/sdz1/dreadstarx/datafiles/radarr GSuite:/Movies

The API quotas you listed are not related to the limits you are seeing.

You can only upload 750GB per day, which is why you see that 403 error. Google Drive also limits you to 2-3 file creations per second, so having a lot of transfers usually slows things down. I just leave my checkers and transfers at the defaults.
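If the 2-3 file-creations-per-second limit is the bottleneck, one option (a sketch, not necessarily the poster's setup; the path and remote name are taken from the command earlier in the thread, and the numbers are assumptions) is rclone's --tpslimit flag, which caps API transactions per second:

```shell
# Sketch: cap rclone's API calls at ~3 per second so it stays under
# Google Drive's per-second file-creation limit instead of hammering it.
rclone copy -v --tpslimit 3 \
  /media/sdz1/dreadstarx/datafiles/radarr GSuite:/Movies
```

This slows small-file uploads down but avoids bursts of the intermittent 403.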

Not sure what you mean by multiple workers or what problem that solves or what you are trying to do. If you can expand on that, I can probably answer it.

If you look at PGBlitz, they've got a nifty setup where you can have "workers" that effectively upload 750GB each, up to a max of 10 workers I think. These aren't actual accounts, which is what allows you to upload more. I haven't been able to figure out how they did it.

As for creating more than 3 files per second: I'll just exclude all jpg/png/nfo files and have Sonarr/Radarr not download them. That's the easiest solution, I believe. Sorry for the late response; I've been working like a madman.
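To answer the earlier question about structuring the arguments for multiple extensions: rclone's filter syntax accepts either repeated --exclude flags or brace alternation in a single pattern. A sketch, reusing the path and remote from the command above:

```shell
# Option 1: one --exclude flag per extension
rclone copy -v --exclude "*.nfo" --exclude "*.jpg" --exclude "*.png" \
  /media/sdz1/dreadstarx/datafiles/radarr GSuite:/Movies

# Option 2: brace alternation in a single filter pattern
rclone copy -v --exclude "*.{nfo,jpg,png}" \
  /media/sdz1/dreadstarx/datafiles/radarr GSuite:/Movies
```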

We can't really give help and advice on how to purposefully break Google's user limits. Not only would that bring negative attention to the rclone project, but you may be putting your account at risk as well. I haven't read the ToS in detail, but I would be very surprised if there weren't a clause about such intentional misuse.

If Google wanted to allow you a 10x higher upload quota, they would just set the limit higher. This isn't some technical limitation to overcome, but a policy/ToS limitation. Out of respect for the rclone author and the community here, I can't in good conscience discuss it, nor recommend that you attempt it, for your own safety.

Some additional notes:

  • Occasional 403 errors can also occur when hitting the temporary API limits - but rclone deals with these gracefully and they won't cause stalling. Unfortunately, the two limits share an error code, so you have to determine which one you hit from the behaviour you see.
  • Non-upload operations should keep working even if you hit the daily upload quota.
  • The upload quota should reset every 24hrs, although it appears that the exact time of day this happens can vary somewhat depending on which server you are talking to (it may be midnight, but midnight in different timezones - it's not entirely clear).
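Since the two 403s share an error code, one way to make the daily quota unambiguous (assuming a reasonably recent rclone version; this is a drive-backend flag, so worth checking it exists in yours) is to tell rclone to stop on the daily-quota 403 instead of retrying it:

```shell
# Sketch: treat the daily 750GB upload-quota 403 as fatal and exit,
# rather than retrying forever; the intermittent per-second 403s are
# still retried as normal. Path/remote reuse the thread's example.
rclone copy -v --drive-stop-on-upload-limit \
  /media/sdz1/dreadstarx/datafiles/radarr GSuite:/Movies
```

If the run aborts with this flag set, you know it was the daily quota and not the per-second limit.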

  • --bwlimit (to limit the upload speed)
  • --max-transfer (to cap the maximum data transferred by a single rclone instance)

These may be useful tools to keep your quota under control and avoid having it maxed out by bulk tasks, which would block your less intensive regular day-to-day use.
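A sketch of the two flags combined (the numbers are illustrative assumptions, not recommendations): 8 MiB/s sustained works out to 8 x 86400 s = 691,200 MiB ≈ 675 GiB per day, comfortably under the 750GB quota, and --max-transfer hard-caps what a single run can move.

```shell
# Limit sustained bandwidth to ~8 MiB/s (~675 GiB/day, under the quota)
# and abort this particular run once 700GB has been transferred.
rclone copy -v --bwlimit 8M --max-transfer 700G \
  /media/sdz1/dreadstarx/datafiles/radarr GSuite:/Movies
```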


Honestly, I don't know whether it breaks Google's ToS or not. The only place I've seen this done is with PGBlitz. I'd rather not break the ToS. I was planning on adding another account or two to my G Suite to get more upload capacity.

As for that limit, I did a bit more testing. I tested the archive script I run, which uses a single transfer (used for anything bigger than 20GB chunks) to my G Suite, and had no problems. I used the exact same script but increased the number of files to 2, and still no problems. The second I started with smaller files, anything less than 5GB, I'd get the 403 error. It looks like Animosity's original comment about file creation is the problem I'm having. My TV shows don't have all this extra metadata (backgrounds, snapshots, etc.), while movies do.

I'll have to figure out more about the sub-accounts. I wouldn't think it would be possible without Google allowing it. I'll dig into it more, but won't post about it here. I guess I'll be reconfiguring my rclone to work with Team Drives. Otherwise the files will be separate, correct?

It doesn't make sense that you would get problems on files under 5GB. You might hit an intermittent 403 (which rclone adjusts to) if the files are very small and/or you use many transfers, but the files would need to be a lot smaller than that for this to be an issue at all. As said, Gdrive can only handle 2-3 file creations per second, so it only really becomes a limitation when your files are small enough that you complete more transfers than that per second. That obviously isn't the case for files several GB large, or most likely even just 100MB, even on a very fast connection. And in any case, rclone should deal with it gracefully and not spam the API for more than it is willing to give.

If you are saying that files under 5GB completely stall (not just an intermittent 403 in the log) while larger files work, then this has nothing to do with that limitation. We would need to see a -vv log to figure out what is happening. If you hit the daily upload quota, then nothing will upload, no matter the size.
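To capture the -vv log being asked for, something along these lines works (path and remote assumed from earlier in the thread; the log-file location is an arbitrary placeholder):

```shell
# -vv gives debug-level detail; --log-file writes it somewhere shareable
# instead of scrolling past in the terminal.
rclone copy -vv --log-file=/tmp/rclone-debug.log \
  /media/sdz1/dreadstarx/datafiles/radarr GSuite:/Movies
```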

Yes, Team Drives and Gdrives use slightly different configurations (different options but the same backend). Each drive needs its own remote. It's not currently possible to access them all via the same remote.
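As a sketch of the separate-remote setup: the remote name and Team Drive ID below are hypothetical placeholders (the real ID is normally picked from a list during interactive `rclone config`), but a second remote can also be created non-interactively:

```shell
# Create a second drive-backend remote pointed at a Team Drive.
# "GSuiteTeam" and the team_drive ID are placeholders, not real values.
rclone config create GSuiteTeam drive team_drive 0AExampleTeamDriveID
```

Your existing `GSuite:` remote keeps working as-is; the new remote is simply addressed separately, e.g. `GSuiteTeam:/Movies`.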

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.