Hitting Google Workspace download limit daily trying to scan in Plex

What is the problem you are having with rclone?

For the last two days, while trying to scan my Google Workspace-based media library into my Plex server, I've hit some hidden API quota: all of my files get locked with "Download quota exceeded" across all my team drives, and I'm trying to figure out why this is happening.

According to the quotas page for my client project, I am coming nowhere near the max queries per 100 seconds and am using 0% of my total daily queries, so that doesn't seem like the issue.

Once the ban expires, the files open and play just fine, up until about ~20 minutes into scanning, when everything just shits out again. The same thing happened yesterday: some files did scan in, I could play those scanned files perfectly fine, and then when I started the scan back up to finish it, the issue reappeared for the second day in a row.

I've taken all the rclone+Plex precautions like disabling analysis in Sonarr and Radarr and disabling things like video preview thumbnail generation and intro detection on a per-library basis.

I have "Scan my library" unchecked, "Run a partial scan when changes are detected" checked, "Scan my library periodically" set to daily, "Empty trash automatically" unchecked, and "Generate video preview thumbnails", "Generate intro video markers", "Generate chapter thumbnails", "Analyze audio tracks for loudness", and "Analyze audio tracks for sonic features" all set to never.

I also have "Update all libraries during maintenance", "Upgrade media analysis during maintenance", and "Perform extensive media analysis during maintenance" all unchecked.

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.11.0-44-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

No command in particular; I was just trying to scan in files from Plex via my rclone mount.

The rclone config contents with secrets removed.

My rclone config:

Custom client ID and secret pair, one team drive per character of the alphabet, merged together via a union remote at my mountpoint.
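A minimal sketch of what that shape of config looks like in rclone.conf, with hypothetical remote names (tdrive-a, tdrive-b, media-union) and placeholder credentials standing in for the real ones:

[tdrive-a]
type = drive
client_id = XXXXXXXX.apps.googleusercontent.com
client_secret = XXXXXXXX
scope = drive
team_drive = <team drive A ID>
token = <redacted>

[tdrive-b]
type = drive
client_id = XXXXXXXX.apps.googleusercontent.com
client_secret = XXXXXXXX
scope = drive
team_drive = <team drive B ID>
token = <redacted>

[media-union]
type = union
upstreams = tdrive-a: tdrive-b: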

My systemd mount script is pretty much a copy and paste of the popular one from here, with some flags tweaked to suit my drive storage for the cache (on a side note, is a cache necessary if you're just streaming via Plex?):
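The widely shared unit is roughly this shape - a sketch only, assuming the hypothetical media-union remote and a /mnt/media mountpoint from above rather than the actual script:

[Unit]
Description=rclone mount of media-union
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
# Common streaming setup; tune the VFS cache flags to whatever local disk is available
ExecStart=/usr/bin/rclone mount media-union: /mnt/media \
  --allow-other \
  --dir-cache-time 1000h \
  --poll-interval 15s \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 336h \
  --log-level INFO \
  --log-file /var/log/rclone-media.log
ExecStop=/bin/fusermount -uz /mnt/media
Restart=on-failure

[Install]
WantedBy=multi-user.target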

A log from the command with the -vv flag

No command. I do have a debug log, but it's 14GB, and I have very bad peering with my server provider, so it'll take a bit to chop it down to a consumable size and pull the logs. It should have something pertinent to this, though.

The API quotas aren't useful, as you are not hitting any API limits.

Google has daily download quotas that aren't published, so it's a bit of guesswork.

How much are you downloading? Edu account or shared with anyone else?

How big is the library?

That doesn't seem too bad, but there's not much to do other than wait for a reset, unfortunately.

Not sure a log would add much either. As long as you are sure of the version, that all looks good.

Are you using the new Plex agents for the libraries as well?

I moved away from Google to avoid the undocumented limits and no help from their support.

You could try throttling yourself a bit while you scan and see if that helps.

Perhaps try limiting the per-file bandwidth:

--bwlimit-file 5M

Or something along those lines.
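On a systemd mount like the one sketched above, that just means adding the flag to the ExecStart line (then systemctl daemon-reload and restart the unit):

# Caps each open file at ~5 MiB/s; total bandwidth across files can still be higher
ExecStart=/usr/bin/rclone mount media-union: /mnt/media \
  --bwlimit-file 5M \
  ...existing flags...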

I'm not aware of any way to throttle Plex as it just kind of does its thing.

Do you have "Perform extensive media analysis during maintenance" checked (Settings -> Scheduled Tasks)? If so, uncheck it.

Same goes for "Generate intro video markers" (library tasks).

I think you replied to me when you wanted the OP.

In the first post, the OP confirmed those are unchecked.

My apologies, can't believe I missed that. :frowning:

@Snaacky
Do you by any chance have some indexing running on your system (by the OS)? And have you moved the Plex cache to a local disk / verified that your union mount is correctly using a local disk for writes?

And do you use the same Workspace account for uploading? I would also just use one team drive until a second is needed; you can keep the same structure with union. Minimize your API calls as much as you can. I'm running my setup with pretty much the same settings as @Animosity022 without hitting any limits on a 700 TB library (Jellyfin, mergerfs, Google Workspace) - somehow.
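A sketch of what that could look like on the union side, assuming a hypothetical local NVMe path plus a single team drive remote, with new files landing on the local branch first:

[media-union]
type = union
# local disk listed first; create_policy = ff writes new files to the first usable branch
upstreams = /mnt/nvme tdrive-a:
create_policy = ff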

edit: Do you run Plex in Docker/Podman by any chance? If so, try to mount the volume as read-only.
edit2: Sonarr and Radarr are set up to store their metadata on local disk, right? (The limits are based on an interval, so it might be that something else has used up most of your quota and your scan is using up what's left.)
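If it is in Docker, the read-only test is just the :ro suffix on the media volume - a sketch with hypothetical paths and the official Plex image:

docker run -d --name plex \
  -v /opt/plex/config:/config \
  -v /mnt/media:/media:ro \
  plexinc/pms-docker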

"Run a partial scan when changes are detected" checked

I would disable this, as it doesn't do what you want it to. If anything, this will trigger a full scan when changes are detected (if they are detected at all).

This is OK to leave checked, but since you're scanning everything in on a brand new server, it won't be particularly helpful.

one team drive per character of the alphabet

Why??? :wink: My guess is you're planning ahead in case you ever hit the file limit. Why use team drives at all, though?

is a cache necessary if you're just streaming via Plex?

It's not necessary, but it might help, if you have the space for it (I don't on my remote server).

I have very bad peering with my server provider

This makes me wonder why you picked that provider to begin with. Bad peering results in higher latency, which is the number one issue for users of remote hosts (in my opinion). Have you pinged your server to see how high it is on average? My latency these days is mostly in the mid 70s, which is not bad considering the distance, but once it goes above 100, streaming is usually in trouble.

Same here, but the difference is that OP is setting up a brand new Plex server and scanning in media for the first time. The last time I did this was in mid-2017, when there was no upload/download quota (the good old days of GSuite). Having said that, I don't see why you would run into these kinds of issues now, because what does a fresh media scan-in do any differently than, say, a metadata refresh combined with "Upgrade media analysis during maintenance" (something I do nightly)? Especially with the new agent/scanner this shouldn't be a problem. Google works in mysterious ways, though. I wonder if there would be a difference if OP did this on a regular drive instead of team drives...

To the best of my knowledge, the quotas are per user, so more or fewer team drives shouldn't matter.

I never used shared/team drives myself though.

I spent a few days on a support ticket with Google and they basically refuse to tell you what you tripped, only giving canned responses.

I thought so, too. Never used anything other than a regular drive either. Thanks to you and others here, I never had to contact the big G :stuck_out_tongue:

I only contacted them once I was migrating, as I was trying to figure out what limit I had tripped, and they won't tell you a thing other than to wait 24 hours and try again, which is baffling.

Same here, but the difference is that OP is setting up a brand new Plex server and scanning in media for the first time.

That does include the initial scan. I just fired up a new Podman instance to test if it can complete without hitting any limit. So far so good. However, I'm using Jellyfin and not Plex.

@Animosity022

To the best of my knowledge, the quotas are per user, so more or fewer team drives shouldn't matter.

26+ remotes being called at once by the same user (depending on the union settings / fs calls)?

@Snaacky
You could always just import one drive at a time as well.

It's not about the API, though, as more remotes just make more API calls, and that is not the issue.

It’s about the downloads.

It's not about the API, though, as more remotes just make more API calls, and that is not the issue.
It’s about the downloads.

You're most likely correct. However, Google has a tendency to be overly creative with their quotas. For the sake of troubleshooting, I would eliminate as much as I could to hunt down the cause - but there are probably other steps I would check first.

@Snaacky

I used to, until one day I came back and my data folder had wiped itself. Now I keep Plex and rclone on the host OS and Dockerize everything else, like the *arr services, which should be storing data in their respective container data folders.

Sounds like some misconfiguration with your volume mappings. For the sake of troubleshooting, could you paste your previous Plex config (Docker)? One benefit of Dockerizing it is the easier path mapping. I would test firing up a scan with read-only permissions to see if you trigger the same limit.

Could you post your mergerfs config also?

I don't really understand the use of mergerfs in this setup. Perhaps you could explain how it fits into this?

It's the same as what you do with the union remote in rclone (which is inspired by mergerfs).

With mergerfs you can take:

  • remote A
  • remote B
  • ...
  • remote Z
  • nvme disk

And present it all at a mountpoint such as, for example, /srv/media (which you then mount in Docker/Plex). The important part here is that you configure it so that your NVMe disk is first for writes and such (prioritize from fastest to slowest, not the other way around). - But I forgot you used rclone union, so don't bother with this.
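For anyone who does go the mergerfs route, an fstab-style sketch with hypothetical paths, the NVMe branch listed first and new files created on the first usable branch:

# /etc/fstab
/mnt/nvme:/mnt/remote-a:/mnt/remote-b  /srv/media  fuse.mergerfs  allow_other,use_ino,cache.files=partial,category.create=ff  0 0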

Anywho, you could also just add --read-only to your rclone mount and fire up a scan to see if you get the same problem - or test with Docker.

Gotta quote myself here, as I remembered wrong. I did create several new libraries not too long ago, and they scanned in without a hitch. Took a while, but didn't hit any limits. About 40,000 media files, if I had to guess. I realize this isn't helping the OP, but I thought I'd let you know.

One last thing for you to try is this flag:

--vfs-read-chunk-size 1M

I've been using this ever since I scanned in my new libraries last year, mainly because I also let Plex analyze my media overnight as a scheduled task ("Upgrade media analysis during maintenance").

So what that does is limit the range request, so it wastes a bit less per file. With no definitive rules from Google, it's a bit of a guessing game.

Basically, if you read a few bytes of a file with a large range request, it downloads a bit of extra data each time. Multiply that out across many files and it'll add up.

In Plex, when you analyze a file, it opens and closes the file about 3 times and reads the media info or ffprobe details on the file.

Google seems to get picky if you open the same file a bunch of times along with lots of reads. That's just my hunch from what I've seen, but I've had a hard time proving it through testing.
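As a rough, worst-case illustration of that hunch (and only if the quota counted the full requested range, which Google doesn't confirm): at the default 128M chunk size, 3 opens per file could mean ranges of up to ~384 MiB counted per file, on the order of 15 TB across a 40,000-file scan like the one mentioned above; with 1M chunks the same 3 opens request only ~3 MiB per file, roughly 120 GB total.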

Edu accounts / team drives seem to act differently from a regular GSuite personal drive.

Glad it seems to be working for you :slight_smile:

I've been using this flag for my streaming mount for about six months now without any issues. To add to what @Animosity022 just said, here's the documentation: rclone mount

"This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests."

So, going down to 1MB from the default of 128MB seems to make a difference. Once your initial scan is done, you can remove the flag to go back to default.

Hope it keeps working for you.