Hitting Google Workspace download limit daily trying to scan in Plex

What is the problem you are having with rclone?

For the last two days, while trying to scan my Google Workspace based media library into my Plex server, I've hit some hidden API quota that locks all of my files as "Download quota exceeded" across all my team drives, and I'm trying to figure out why this is happening.

According to the quotas page for my client project, I am coming nowhere near the max queries per 100 seconds and am using 0% of my total daily queries, so that doesn't seem to be the issue.

The files open and play just fine after the API ban expires, up until about ~20 minutes into scanning, when everything just shits out again. The same thing happened yesterday: some files did scan in, and I was able to play those files perfectly fine before starting the scan back up to finish it, at which point the issue reappeared for the second day in a row.

I've taken all the rclone+Plex precautions like disabling analysis in Sonarr and Radarr and disabling things like video preview thumbnail generation and intro detection on a per-library basis.

I have "Scan my library" unchecked, "Run a partial scan when changes are detected" checked, "Scan my library periodically" set to daily, "Empty trash automatically" unchecked, and "Generate video preview thumbnails", "Generate intro video markers", "Generate chapter thumbnails", "Analyze audio tracks for loudness", and "Analyze audio tracks for sonic features" all set to never.

I also have "Update all libraries during maintenance", "Upgrade media analysis during maintenance", and "Perform extensive media analysis during maintenance" all unchecked.

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.11.0-44-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

No specific command; I was just trying to scan files into Plex via my rclone mount.

The rclone config contents with secrets removed.

My rclone config:

Custom client ID and password pair, one team drive per character of the alphabet, merged together via union at my mountpoint.
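
For reference, the layout looks roughly like this (a trimmed sketch with placeholder remote names, IDs, and secrets, not my real config):

```
# sketch only -- placeholder names/IDs, not the real config
[td_a]
type = drive
client_id = <custom client id>
client_secret = <custom client secret>
scope = drive
team_drive = <team drive A id>

# ...one [td_x] remote per team drive, a through z...

[media]
type = union
upstreams = td_a: td_b: td_c: td_d:
```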

My systemd mount script is pretty much a copy and paste of the popular one from here, with some options tweaked to suit my drive storage for the cache (on a side note, is a cache necessary if you're just streaming via Plex?):
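
(The unit below is a trimmed sketch rather than the exact file; the remote name, mountpoint, cache path, and flag values are placeholders.)

```
[Unit]
Description=rclone mount
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount media: /mnt/rclone/media \
  --allow-other \
  --dir-cache-time 1000h \
  --poll-interval 15s \
  --cache-dir /mnt/nvme1/cache \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --log-level INFO
ExecStop=/bin/fusermount -uz /mnt/rclone/media
Restart=on-failure

[Install]
WantedBy=multi-user.target
```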

A log from the command with the -vv flag

No command. I do have a debug log, but it's 14GB, and I have very bad peering with my server provider, so it'll take a bit to chop it down to a consumable size and pull the logs, but it should have something pertinent to this.
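
If it helps in the meantime, I figured I could pull just the error lines out of it with something like this before transferring (file names are placeholders):

```
# keep only the error/403/quota lines from the huge debug log
grep -iE "error|403|quota" rclone-debug.log | tail -n 200 > rclone-errors.log
```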

The API quotas aren't useful, as you are not hitting any API limits.

Google has daily download quotas that aren't published, so it's a bit of guesswork.

How much are you downloading? Edu account or shared with anyone else?

How big is the library?

How much are you downloading?

I'm only "downloading" the extent at which to load the seasons and movies into my Plex for the first time. I just moved my Plex server from my local in-house server to a remote server this week and I've done this exact scan in the past with no issues.

Edu account or shared with anyone else?

Google Workspace. No one else accesses the drive. The Plex server is the only thing interacting with these files in any capacity.

How big is the library?

~9k movies, ~2k TV series.

That seems not too bad but not much to do other than wait for a reset unfortunately.

Not sure a log would add much either. As long as you are sure of the version, that all looks good.

Using the new Plex agents for the libraries as well?

I moved away from Google to avoid the undocumented limits and no help from their support.

You could try throttling yourself a bit while you scan and see if that helps.

That seems not too bad but not much to do other than wait for a reset unfortunately.

I don't mind having to wait for the reset. I'm just not trying to deal with this for the third day in a row tomorrow.

Using the new Plex agents for the libraries as well?

Latest version of Plex Media Server, using "Plex Movie" and "Plex TV Series" for the scanner and agents.

You could try throttling yourself a bit while you scan and see if that helps.

Any recommendations for what to change on my mount for that?

Perhaps try to limit the per-file bandwidth:

--bwlimit-file 5M

Or something along those lines.
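
In the mount it would just sit alongside the other flags, roughly like this (remote and mountpoint are placeholders, not your actual setup):

```
# cap the per-file download speed while the scan runs
rclone mount media: /mnt/rclone/media \
  --bwlimit-file 5M \
  --vfs-cache-mode full \
  --allow-other
```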

I'm not aware of any way to throttle Plex as it just kind of does its thing.

Do you have the "Perform extensive media analysis during maintenance” checked? (settings-> scheduled tasks). If so uncheck it.

Same goes for "Generate intro video markers" (Library tasks).

I think you replied to me when you wanted the OP.

In the first post, the OP confirmed those are unchecked.

My apologies, can't believe I missed that. :frowning:

@Snaacky
Do you by any chance have some indexing running on your system (by the OS)? And have you moved the Plex cache to a local disk / verified that your union mount is correctly using a local disk for writes?

And do you use the same Workspace account for uploading? I would also just use one team drive until a second is needed. You can keep the same structure with union. Minimize your API calls as much as you can. I'm running my setup with pretty much the same settings as @Animosity022 without hitting any limits on a 700 TB lib (Jellyfin, mergerfs, Google Workspace), somehow.

edit: Do you run Plex in Docker / Podman by any chance? If so, try to mount the volume as read-only.
edit2: Sonarr and Radarr are set up to store the metadata on a local disk, right? (Limits are based on an interval, so it might be that something else has used up most of your quota and your scan is using up what's left.)

I would disable "Run a partial scan when changes are detected", as it doesn't do what you want it to. If anything, this will trigger a full scan when changes are detected (if they are detected at all).

"Scan my library periodically" is OK to leave checked, but since you're scanning everything in on a brand new server, it won't be particularly helpful.

Why one team drive per letter of the alphabet??? :wink: My guess is you're planning ahead in case you ever hit the file limit. Why use team drives at all, though?

A cache isn't necessary, but it might help, if you have the space for it (I don't on my remote server).

The bad peering makes me wonder why you picked that provider to begin with. Bad peering results in higher latency, which is the number one issue for users of remote hosts (in my opinion). Have you pinged your server to see how high it is on average? My latency these days is mostly in the mid 70s, which is not bad, considering the distance, but once it goes above 100, streaming is usually in trouble.

Same here, but the difference is that OP is setting up a brand new Plex server and scanning in media for the first time. The last time I did this was in mid-2017, when there was no upload/download quota (the good old days of GSuite). Having said that, I don't see why you would run into these kinds of issues now, because what does a fresh media scan-in do any differently than, say, a metadata refresh combined with "Upgrade media analysis during maintenance" (something I do nightly)? Especially with the new agent/scanner this shouldn't be a problem. Google works in mysterious ways, though. I wonder if there would be a difference if OP did this on a regular drive instead of team drives...

To the best of my knowledge, the quotas are per user, so more or fewer team drives shouldn't matter.

I never used shared/team drives myself though.

I spent a few days on a support ticket with Google, and they basically refuse to tell you what you tripped and only give canned responses.

I thought so, too. Never used anything other than a regular drive either. Thanks to you and others here, I never had to contact the big G :stuck_out_tongue:

I only contacted them once I was migrating, as I was trying to figure out what limit I tripped, and they won't tell you a thing other than to wait 24 hours and try again, which is baffling.

Same here, but the difference is that OP is setting up a brand new Plex server and scanning in media for the first time.

That does include the initial scan. I just fired up a new podman instance to test if it can complete without hitting any limit. So far so good. However, I'm using jellyfin and not Plex.

@Animosity022

To the best of my knowledge, the quotas are per user, so more or fewer team drives shouldn't matter.

26+ remotes being called at once by the same user (depending on the union settings / fs calls)?

@Snaacky
You could always just import one drive at a time, too.

It's not about the API, though, as more remotes just make more API calls, and that is not the issue.

It’s about the downloads.

@WawP6

Do you by any chance have some indexing running on your system (by the OS)?

Not that I'm aware of. It's a relatively fresh Ubuntu install.

And have you moved the Plex cache to a local disk / verified that your union mount is correctly using a local disk for writes?

The cache is on my NVMe. Seeing as it's currently 149G, I'm inclined to believe it's working.

edit: Do you run Plex in Docker / Podman by any chance?

I used to until one day I came back and my data folder had wiped itself. Now I keep Plex and rclone on the host OS and Dockerize everything else like *arr services which should be storing data in their respective container data folders.

@VBB

I would disable "Run a partial scan when changes are detected", as it doesn't do what you want it to. If anything, this will trigger a full scan when changes are detected (if they are detected at all).

Thanks, disabled.

"Scan my library periodically" is OK to leave checked, but since you're scanning everything in on a brand new server, it won't be particularly helpful.

Leaving it disabled for the time being while trying to scan in.

Why one team drive per letter of the alphabet??? :wink: My guess is you're planning ahead in case you ever hit the file limit. Why use team drives at all, though?

Mostly just because other DDL drives I've seen have followed a similar setup. I figured it would also look a bit better if I didn't have nearly 500TB in one TD. I used to have it in one TD but I'm not entirely sure that would make a difference here.

The bad peering makes me wonder why you picked that provider to begin with. Bad peering results in higher latency, which is the number one issue for users of remote hosts (in my opinion). Have you pinged your server to see how high it is on average? My latency these days is mostly in the mid 70s, which is not bad, considering the distance, but once it goes above 100, streaming is usually in trouble.

It's not that the ping is too bad; it's about ~100ms. The throughput on my Hetzner server is just not that great because of NA peering, so I threw an OVH reverse proxy in front of it, which peers directly with Hetzner, so my Hetzner->OVH link is nearly 1:1 and then OVH->me offers higher throughput.

As for why I picked it, it's used for other stuff but has a ton of leftover resources, so I decided to throw a Plex server on it. Besides, it's not like you can find an i7-7700k (with iGPU for transcoding), 32GB of RAM, and a 1 Gbps port for $35 anywhere else.

@Animosity022

I think throttling the bandwidth per file may have helped slightly. It scanned for about 5 hours (so about 300 TV series because the Plex TV scanner is abysmally slow compared to the movie scanner for whatever reason) before throwing the 403 download exceeded again. So seems like it may have improved it but certainly didn't fix it and I am once again throttled. :smiley:


It's not about the API, though, as more remotes just make more API calls, and that is not the issue.
It’s about the downloads.

You're most likely correct. However, Google has a tendency to be overly ingenious with their quotas. I would eliminate as much as I could to hunt down the bug, for the sake of troubleshooting. But there are probably other steps I would check first.

@Snaacky

I used to until one day I came back and my data folder had wiped itself. Now I keep Plex and rclone on the host OS and Dockerize everything else like *arr services which should be storing data in their respective container data folders.

Sounds like some misconfiguration with your volume mappings. For the sake of troubleshooting, could you paste your previous Plex config (Docker)? One benefit of dockerizing it is the easier path mapping. I would test firing up a scan with read-only permission to see if you trigger the same limit.

Could you post your mergerfs config also?

Sounds like some misconfiguration with your volume mappings. For the sake of troubleshooting, could you paste your previous Plex config (Docker)? One benefit of dockerizing it is the easier path mapping. I would test firing up a scan with read-only permission to see if you trigger the same limit.

Not sure what happened. It worked for months on my local setup. When I moved it over to my remote setup, it worked for a few days and then I woke up one day and my /config folder was 4KB and all my metadata was gone.

I need to remake my docker-compose.yml for Plex but it would probably be something like this:
https://pastebin.com/raw/fD7Kd7QE
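
Roughly along these lines, with the media mount read-only as suggested (the image, IDs, and paths here are placeholders, not my actual file):

```
version: "3"
services:
  plex:
    image: linuxserver/plex          # assumed image, not necessarily the one I used
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
    volumes:
      - /srv/containers/plex/config:/config
      - /mnt/rclone:/media:ro        # read-only media mount for the scan test
```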

Could you post your mergerfs config also?

My understanding of mergerfs is pretty basic; I've only ever really used it to merge multiple drives into one mount point so my files would get split evenly-ish across a few drives. I don't really understand the use of mergerfs in this setup. Perhaps you could explain how it fits into this? If it helps any:

  • My Docker containers are stored at /srv/containers/container_name
  • My Docker containers data is stored at /srv/containers/container_name/config
  • My rclone mount is stored at /mnt/rclone/remote_name
  • My pre-upload files are stored at /srv/downloads/nzbs and /srv/downloads/torrents
  • My cache is stored at /mnt/nvme1/cache

On another note, I tried merging everything into one TD and tried minimizing the folder count as much as possible, but it didn't help, and I'm throttled again until tomorrow. I guess the next step is to re-dockerize my Plex server and see if read-only fixes it.

I don't really understand the use of mergerfs in this setup. Perhaps you could explain how it fits into this?

It's the same as what you do with the rclone union (which is inspired by mergerfs).

With mergerfs you can take:

  • remote A
  • remote B
  • ...
  • remote Z
  • nvme disk

And present it all at a mountpoint such as, for example, /srv/media (which you then mount in Docker / Plex). The important part here is that you configure it so that your NVMe disk comes first for writes and such (prioritize from fastest to slowest, not the other way around). But I forgot you use the rclone union, so don't bother with this.
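
For what it's worth, a typical call looks something like this (branch paths are placeholders; the "ff" create policy means the first branch, the local disk, catches the writes):

```
# illustrative sketch -- local NVMe branch listed first so it takes the writes
mergerfs -o cache.files=partial,dropcacheonclose=true,category.create=ff \
  /mnt/nvme1/local:/mnt/rclone/media /srv/media
```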

Anywho, you could also just add --read-only to your rclone mount and fire up a scan to see if you get the same problem, or test with Docker.
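
Something like this, with the remote and mountpoint as placeholders:

```
# hypothetical: remount read-only just for the scan test
rclone mount media: /mnt/rclone/media --read-only --allow-other
```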