For the last two days, while trying to scan my Google Workspace based media library into my Plex server, I've been hitting some hidden API quota: all of my files get locked with "Download quota exceeded" across all my team drives, and I'm trying to figure out why.
Once the API ban expires, the files open and play just fine until roughly 20 minutes into scanning, when everything shits out again. The same thing happened yesterday: some files did scan in, and I was able to play those files perfectly fine before resuming the scan to finish it, at which point the issue reappeared for the second day in a row.
I've taken all the usual rclone+Plex precautions, like disabling analysis in Sonarr and Radarr and disabling video preview thumbnail generation and intro detection on a per-library basis.
My Plex library settings:
- "Scan my library": unchecked
- "Run a partial scan when changes are detected": checked
- "Scan my library periodically": daily
- "Empty trash automatically": unchecked
- "Generate video preview thumbnails": never
- "Generate intro video markers": never
- "Generate chapter thumbnails": never
- "Analyze audio tracks for loudness": never
- "Analyze audio tracks for sonic features": never
- "Update all libraries during maintenance": unchecked
- "Upgrade media analysis during maintenance": unchecked
- "Perform extensive media analysis during maintenance": unchecked
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.11.0-44-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.2
- go/linking: static
- go/tags: none
No specific command; I was just trying to scan in files from Plex via my rclone mount.
My rclone config:
A custom client ID and secret pair, with one team drive per letter of the alphabet, merged together via a union remote at my mountpoint.
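For reference, the general shape is something like this (remote names, IDs, and the team drive ID are placeholders, not my real values):

```ini
[tdrive-a]
type = drive
scope = drive
client_id = xxxxxxxx.apps.googleusercontent.com
client_secret = xxxxxxxx
team_drive = 0AAxxxxxxxxxxxxxxx
# ...one remote like this per letter, tdrive-b through tdrive-z...

[media]
type = union
upstreams = tdrive-a: tdrive-b: tdrive-z:
```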
My systemd mount script is pretty much a copy and paste of the popular one from here, with some flags tweaked to suit my drive's storage for the cache (on a side note, is a cache necessary if you're just streaming via Plex?):
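In broad strokes it looks like the below; the paths, remote name, and cache sizes are illustrative, not my exact unit:

```ini
[Unit]
Description=rclone mount of the union remote for Plex
After=network-online.target

[Service]
Type=notify
# "media:" and the paths here are placeholders for my actual remote/mountpoint
ExecStart=/usr/bin/rclone mount media: /mnt/media \
  --config /home/user/.config/rclone/rclone.conf \
  --allow-other \
  --dir-cache-time 1000h \
  --poll-interval 15s \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --umask 002
ExecStop=/bin/fusermount -uz /mnt/media
Restart=on-failure

[Install]
WantedBy=default.target
```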
No command. I do have a debug log, but it's 14GB and I have very bad peering with my server provider, so it'll take a while to chop it down to a consumable size and pull it; it should contain something pertinent to this.
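Once I get to it, I'm planning to trim the log server-side with something along these lines before transferring it (filenames are placeholders, and the error strings are the quota-related ones I expect to find):

```shell
# Keep only quota/rate-limit related lines and errors from the huge debug log,
# then take the most recent 5000 matches so the file is small enough to transfer.
grep -E 'downloadQuotaExceeded|rateLimitExceeded|403|ERROR' rclone-debug.log \
  | tail -n 5000 > rclone-debug-trimmed.log
```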