Issue: full rescan after renaming movie/TV show folders - downloadQuotaExceeded

Hi,

I've been using rclone for more than a year now, with different settings and setups; I'm using Cloudbox feederbox/mediabox on 2 different servers.

I got a mergerFS mount on mediabox (where Plex is) that merges local storage, feederbox (the remote's local files over rclone FTP), and the gdrive rclone remote in /mnt/unionfs/, so files not yet uploaded to gdrive are still instantly accessible in Plex via the remote.

On feederbox there is a mergerFS mount combining local storage and the gdrive rclone remote in /mnt/unionfs too.

I just renamed a bunch of my TV/movie folders via Sonarr/Radarr to include the TMDB and TVDB IDs in the folder names, then restarted a full scan. It's been 4 days now (260TB of data), and the following message pops up after some time each day, slowing the process down. Bazarr is rescanning too.
So both of them are doing a lot of mediainfo (ffprobe) requests, which doesn't generate much traffic, but I still get these messages:

"vfs cache: failed to download: vfs reader: failed to write to cache file: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded"

When the reset happens it's fine for about 10 hours, then I hit some kind of limit. Everything looks fine in the API calls on the Google dashboard. The servers do about 1.5TB of combined download before I get this message (the limit is 10TB of download per day, as I've read everywhere), while my feederbox uploads about 750GB every day at the moment.

I even have a VFS priming service that runs every 167h.
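(For reference, the prime itself is just the standard rc call against the mount's --rc-addr; a sketch of such a oneshot unit, with the paths and the timeout purely illustrative:)

```ini
[Unit]
Description=Prime the rclone VFS directory cache
Requires=rclone.service
After=rclone.service

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone rc vfs/refresh recursive=true --url http://localhost:5572/ --timeout 30m
```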

I've read multiple times here that nobody experiences issues when doing a full scan, so I'm a bit confused about what is happening here.

I do have all the not-recommended settings ticked off in all of the software, but I'm still hitting something.

I want to add Emby too in the next few weeks and scan with it, but now I'm afraid it's also going to blow everything up for a week... :confused: just like Plex is doing right now.

What is your rclone version (output from rclone version)

rclone v1.56.2

  • os/version: ubuntu 18.04 (64 bit)
  • os/kernel: 4.15.0-123-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.16.8
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google drive

The rclone config contents with secrets removed.

[gd]
type = drive
client_id = xxxxxxxxxxxxxxxxxxxxx
client_secret = xxxxxxxxxxxxxxxxxxxxxxxxxxx
token = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
root_folder_id = xxxxxxxxxxxxxxxxxxxxxxxxx

[google]
type = crypt
remote = gd:
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxxxxx

rclone service ExecStart (the same on both servers, though feederbox is limited to --tpslimit 4 so the sum of the 2 servers can't go over 10):

ExecStart=/usr/bin/rclone mount \
  --user-agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36' \
  --config=/home/seed/.config/rclone/rclone.conf \
  --allow-other \
  --rc \
  --rc-addr=localhost:5572 \
  --umask 002 \
  --dir-cache-time 5000h \
  --attr-timeout=5000h \
  --poll-interval 10s \
  --cache-dir=/cache \
  --vfs-cache-mode full \
  --vfs-read-ahead 2G \
  --vfs-cache-max-size 150G \
  --vfs-cache-poll-interval 5m \
  --vfs-cache-max-age 5000h \
  --log-level=DEBUG \
  --stats=1m \
  --stats-log-level=NOTICE \
  --syslog \
  --tpslimit=6 \
  google: /mnt/remote

For some reason I also noticed that the /cache folder shrank back to a few MB a few moments ago, not sure why, so I just added: --vfs-cache-max-age 5000h
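(A quick way to keep an eye on whether the cache is actually holding data, just a sketch; the default path matches the --cache-dir from the mount above:)

```shell
# Report how much the rclone VFS cache holds and how many files it
# contains, to spot it emptying unexpectedly.  Pass your --cache-dir
# as the first argument (defaults to /cache, as in the mount above).
cache_report() {
  dir=${1:-/cache}
  [ -d "$dir" ] || { echo "no cache dir at $dir"; return 0; }
  du -sh "$dir"
  find "$dir" -type f | wc -l
}

cache_report /cache
```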

A log from the command with the -vv flag

Oct  3 14:10:49 s165879 rclone[1502747]: Media/TV/South Park (1997) [tvdb-75897]/Season 08/: ReadDirAll:
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Life Is a Long Quiet River (1988) {tmdb-4271}/Life.Is.a.Long.Quiet.River.1988.HDTV-1080p.x264.DTS-HD.MA[FR].[FR]-RADARR {imdb-tt0096386}.mkv: vfs cache: downloader: error count now 4: vfs reader: failed to write to cache file: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Life Is a Long Quiet River (1988) {tmdb-4271}/Life.Is.a.Long.Quiet.River.1988.HDTV-1080p.x264.DTS-HD.MA[FR].[FR]-RADARR {imdb-tt0096386}.mkv: vfs cache: failed to download: vfs reader: failed to write to cache file: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Life Is a Long Quiet River (1988) {tmdb-4271}/Life.Is.a.Long.Quiet.River.1988.HDTV-1080p.x264.DTS-HD.MA[FR].[FR]-RADARR {imdb-tt0096386}.mkv: ChunkedReader.RangeSeek from -1 to 0 length -1
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Life Is a Long Quiet River (1988) {tmdb-4271}/Life.Is.a.Long.Quiet.River.1988.HDTV-1080p.x264.DTS-HD.MA[FR].[FR]-RADARR {imdb-tt0096386}.mkv: ChunkedReader.Read at -1 length 4096 chunkOffset 0 chunkSize 134217728
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Life Is a Long Quiet River (1988) {tmdb-4271}/Life.Is.a.Long.Quiet.River.1988.HDTV-1080p.x264.DTS-HD.MA[FR].[FR]-RADARR {imdb-tt0096386}.mkv: ChunkedReader.openRange at 0 length 134217728
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Wonders of the Sea 3D (2017) {tmdb-476070}/Wonders.of.the.Sea.2017.Bluray-1080p.x264.AC3[FR+EN].[FR+EN]-THREESOME {imdb-tt5495792}.mkv: vfs cache: downloader: error count now 4: vfs reader: failed to write to cache file: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Wonders of the Sea 3D (2017) {tmdb-476070}/Wonders.of.the.Sea.2017.Bluray-1080p.x264.AC3[FR+EN].[FR+EN]-THREESOME {imdb-tt5495792}.mkv: vfs cache: failed to download: vfs reader: failed to write to cache file: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Wonders of the Sea 3D (2017) {tmdb-476070}/Wonders.of.the.Sea.2017.Bluray-1080p.x264.AC3[FR+EN].[FR+EN]-THREESOME {imdb-tt5495792}.mkv: ChunkedReader.RangeSeek from -1 to 0 length -1
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Wonders of the Sea 3D (2017) {tmdb-476070}/Wonders.of.the.Sea.2017.Bluray-1080p.x264.AC3[FR+EN].[FR+EN]-THREESOME {imdb-tt5495792}.mkv: ChunkedReader.Read at -1 length 4096 chunkOffset 0 chunkSize 134217728
Oct  3 14:10:49 s165879 rclone[1502747]: Media/Movies/Wonders of the Sea 3D (2017) {tmdb-476070}/Wonders.of.the.Sea.2017.Bluray-1080p.x264.AC3[FR+EN].[FR+EN]-THREESOME {imdb-tt5495792}.mkv: ChunkedReader.openRange at 0 length 134217728
Oct  3 14:10:49 s165879 rclone[273422]: Media/TV/A Haunting (2005) [tvdb-79535]/: >Lookup: node=Media/TV/A Haunting (2005) [tvdb-79535]/Season 10/, err=<nil>
Oct  3 14:10:49 s165879 rclone[273422]: Media/TV/A Haunting (2005) [tvdb-79535]/Season 10/: Attr:
Oct  3 14:10:49 s165879 rclone[273422]: Media/TV/A Haunting (2005) [tvdb-79535]/Season 10/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxrwxr-x, err=<nil> 

Thanks if you can help!

I'm sure @Animosity022 and @asdffdsa will chime in here as well, but out of curiosity, are you using the new Plex scanner/agent or the legacy ones?

I recently switched to the new one and triggered a full scan and metadata refresh on all my libraries (~830 TB) without running into any issues. One huge advantage of the new combo is that it scans so much faster. My movie library with a little over 15k items used to take almost three hours. Now it's down to a couple of minutes.

I even have Plex run "Upgrade media analysis during maintenance" every night, which re-analyzes about 4,200 items in seven hours.

Anyway, none of this might be relevant to you, but I thought I'd share my experience. Here is my mount command, which is always followed by a prime:

rclone mount --attr-timeout 5000h --dir-cache-time 5000h --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --poll-interval 0 --rc --read-only --user-agent ******* --vfs-read-chunk-size 1M -v

Perhaps the --vfs-read-chunk-size 1M would help in your case. Note that I do not use a cache.

hi,

gdrive and plex scanning in the forum, that is all on you...

--vfs-read-chunk-size 1M
did you recently add that to your command?

are you sure the combo --vfs-read-chunk-size and --vfs-cache-mode=off does anything?
if my internal memory is not corrupt, based on testing, that combo did nothing.

the documentation seems to imply that combo should do something, as --vfs-read-chunk-size is documented above and separate from --vfs-cache-mode

I'm pretty sure --vfs-read-chunk-size works independently from --vfs-cache-mode. I added that flag a while ago when I first started messing with re-scans, even before I switched to the new scanner/agent combo. Was going to remove it once I was done, but I decided to keep it for as long as I'm running the overnight analysis. @Animosity022 tested this in the past, and he mentioned that a smaller chunk size simply makes things start up a little more slowly. I haven't noticed any negative impact.

Hi,

thanks for the replies, I'll try --vfs-read-chunk-size 1M and see what happens

I've been using the new scanners since the beta :slight_smile:
For me too it only took a few minutes, but I renamed all the series folders and all the movie folders, so it has to do a full analysis again :confused:

We'll see how it goes once the ban lifts.

Is it an edu account or a GSuite account?

There's not much to do with the limits as something seems to be triggering it.

A smaller chunk shouldn't matter too much; 1M just means more API traffic, as it has to ramp up when it streams.
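For what it's worth, the ramp-up is documented behaviour: the chunked reader starts at --vfs-read-chunk-size and doubles the chunk on each subsequent request (until --vfs-read-chunk-size-limit, which defaults to off). A rough sketch of what that means for request counts, sizes in MiB:

```shell
# Count the HTTP range requests needed to stream a whole file when the
# first chunk is --vfs-read-chunk-size and every following chunk doubles
# (assuming no --vfs-read-chunk-size-limit cap).  All sizes in MiB.
range_requests() {
  file_size=$1; chunk=$2; offset=0; requests=0
  while [ "$offset" -lt "$file_size" ]; do
    requests=$((requests + 1))
    offset=$((offset + chunk))
    chunk=$((chunk * 2))
  done
  echo "$requests"
}

range_requests 4096 1     # 4 GiB file, 1M starting chunk:  13 requests
range_requests 4096 128   # 4 GiB file, 128M default chunk:  6 requests
```

So the smaller starting chunk costs a handful of extra requests per file, which is API traffic rather than download volume.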

I don't believe that impacts your quota in any way, as I've tested opening/closing a file a few thousand times before and never hit any quotas, so in my opinion it has to be something else. The hard part is that none of this is actually documented, so it's a lot of guessing.

Hey, it's a full GSuite account I own.

I only have Radarr/2x Sonarr doing scans, no analysis, no manual refreshes, etc.
Bazarr has to scan once for ffprobe,
and Plex too, then via autoscan.

Thanks

No odd settings on in Plex? Turned on intro analysis or something recently? Changed agents in a library? Changed the time zone on a server? Something goofy like that usually triggers downloads.

The challenge is that there's zero way to check download quota, as it seems to be per-file / per-user / other odd limits. The API graphs don't really give you any data, as you are not hitting an API issue.

Unfortunately nothing has changed (I had intro analysis on before; I turned it off for the rescan), only the full rename of all my TV show and movie folders to include the TVDB and TMDB IDs in the names for the new scanner's quick matching.
I had no issues whatsoever before that; it's only the full rescan that is causing my troubles. Once something has been scanned, there is no more issue.
I tried to add Emby last year and ran into the same issues, same with Bazarr at first while it was scanning the files.

I'm just wondering how I can reach any kind of limit in about 10 hours when some people can do a full initial scan over several days with no issues :confused:

A full rename would be a very big change, as Plex (I believe) detects that as a brand-new file. You can validate in the Plex logs once you get reset, as it says something like 'renaming' or something along those lines (it'll be obvious it's changing the path).

Not sure what other folks mean, but a rescan to me would be deleting a library and adding it back fresh.

Changing agents or refreshing metadata on a file happens super fast with the new agents as it doesn't grab nearly as much anymore.

Yes indeed, what I did is like starting fresh, not refreshing metadata.
But I have read multiple times on this forum and Reddit that lots of folks don't have any issue doing this.
I believe you said so too in some posts.

I personally have rescanned my entire stuff in a day or two with never any issues. It’s tough as I know every setting and detail for my setup and I can watch bandwidth.

I’m the only person in my setup as well so I am certain what happens. I’ve never hit a download quota.

Well, it's about the same for me right now, because the quota basically resets at night and blocks again before any of my friends or I can play anything.
I do know every setting is OK, but I'll try to disable Bazarr too for now, as it may not be helping while the initial scan finishes.

Other than that, do you see anything in the mount settings that could be improved :confused: ?

Thanks

It isn't going to be a mount setting that fixes it as we'd want to figure out what application is causing the downloads.

Do you see a lot of bandwidth going on? That might tip you off if something is causing a lot of downloads.

The GSuite Audit log will show what was downloaded, but it's a bit tricky as you will see the crypt name and each request is a 'download'. If you see a huge number of items though, that might also tip you off that something is downloading a lot.
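(If you do dig through the audit log, rclone can map the crypt names back for you; a sketch, using the crypt remote name from the config above, with a placeholder for the encrypted name:)

```shell
# Translate an encrypted name from the GSuite audit log back to the
# real path.  "google:" is the crypt remote from rclone.conf; replace
# the placeholder with the name shown in the audit entry.
rclone cryptdecode google: ENCRYPTED_NAME_FROM_AUDIT_LOG
```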

Thanks, I'll have a look at that. For now I've stopped Bazarr, so really only Plex is doing scans.

Hmm, it's been more than 24h without any quota exceeded error since I shut down Bazarr. I don't know why it makes the quota exceed; it's only doing an ffprobe on all files, and I don't exceed any known limit, so it's weird. Also, Plex and Bazarr are on 2 different servers.
Well, I'll reactivate things slowly after Plex has completed the full rescan (which it's still not close to completing). I'm at almost a full week for the rescan and it's not done.

What is the size of your library roughly? How many Movies/Episodes? Are you on a slow Internet connection? Just curious why it would take that long. The new agents seemed super fast.

Hello,

The library size is at 232TB.

About 120,000 TV show files + 15,000 movies.
The server has a 1Gbit/s link.

Yeah, I have no clue either, but I woke up this morning and it's almost finished; it took about 6 to 7 days to rescan it all :confused:

Also, I see that my server has downloaded 1.6TB from Google Drive today without any ban, so I can't explain why having 2 servers (1 with Plex, the other with Bazarr) asking for ffprobes on files, while respecting the 10 queries/s limit, led to a ban. But since I shut down Bazarr, no issues.

It's not a ban, as you are hitting a download quota limit. A ban would mean your account is gone.

You are not hitting an API limit either, so the API hits are not really the issue; that's all visible in the Google Admin console, and if you pound the API you just get rate limited, so no harm, no foul really.

Are you using the new Plex Movie/TV agents? For that size, 6-7 days seems too long. With the new agents, I can rescan my stuff in less than a few hours at 170TB / 5K movies / 51K TV episodes.

I did a refresh of all metadata a few times and I'm still shocked at how fast the new agents are, as it only takes a few minutes on the movies and maybe a few more on the TV shows.

I wonder what would trigger it in Bazarr though, as that seems strange. I can't think of a setting offhand that would do that, but I really don't use Bazarr much, other than here and there for forced subs.

If it stays consistent though, you might have your smoking gun at least.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.