Download Quota Exceeded & VFS

I've been lurking here for a while and I see the same repeated misconceptions, so I want to give some clarification and correction on downloadQuotaExceeded.

VFS-Read-Chunk causes rclone to download the file in pieces, which means several drive.files.get calls, which in turn means each chunk is counted as a separate download of the file. If you look at admin.google.com and go to Reports > Drive, you'll see "User downloaded XXXXXX.yyy" several times in a row. This counts against the UNPUBLISHED file download quota! You have 1 billion API calls a day, but that doesn't mean you can download a file 1 billion times a day. API quota != file download quota.
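
If you'd rather pull those events than click through the admin console, the Admin SDK Reports API exposes the same Drive audit data; a rough sketch (assumes an access token with the reports scope sitting in $TOKEN):

curl -s \
  -H "Authorization: Bearer $TOKEN" \
  "https://admin.googleapis.com/admin/reports/v1/activity/users/all/applications/drive?eventName=download&maxResults=10"
# every chunked read from rclone shows up here as its own 'download' event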

I've been hitting file download quota exceeded a lot lately, almost daily for the past 2 weeks.
I believe Google lowered the download quotas, in part because of abuse, and unfortunately rclone is part of the problem: it "downloads" the file several times during playback. That would also explain why Google rate-limited rclone a few months ago.

The VFS system needs a way to download the file only once, so it's only 1 download counted against the quota, not the 10-50 times I see reported in the audit reports. vfs-cache-mode=full downloads the file at least twice and has problems playing nicely with Plex. I use mergerfs; maybe I need to add my /cache/vfs/drivename to the pool?

You may not have run into this issue yet; I certainly didn't for almost a year. Now my library is over 180TB, and the 12-hour scans of Sonarr and Radarr cause files to be downloaded multiple times as well (I have analyze videos on and never had this issue until this month).

To stop unnecessary scanning and mediainfo downloads:

  1. Turn off Plex automatic and daily scans, keep partial scan on, and use the Sonarr/Radarr connect feature to notify Plex (I have had this set up for a long time now)

What I did last week:

  1. Upgrade to Sonarr V3 and Radarr V2 (the new UIs). They include a new setting under Media Management (advanced settings) to disable disk scans when refreshing a series. I set it to "only on manual refresh", so now when Sonarr/Radarr refreshes a series, it won't rescan all the files.
    As long as Plex and Sonarr/Radarr are up and set up appropriately, nothing should get out of sync to the point where you need to manually scan.

This has reduced the overall files downloaded every day, but has not resolved the file download quota bans.

After initial scans, it doesn't make much sense to have a --vfs-read-chunk-size lower than 64M; 128M is a good middle ground, up to 256M. The lower the vfs-read-chunk-size, the more drive.files.get calls, i.e. the more file download calls. Remember, the number of API calls is not the problem; it's the type of API call. Chunked reading calls drive.files.get several times to get the chunks, and each call = 1 file download. The per-file limit may be 100 per 24h, but it's unpublished; it could even be 50.
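
To put rough numbers on that, here's a quick back-of-the-envelope script (my own sketch, assuming the chunk size simply doubles after each request when the limit is off):

#!/bin/bash
# Rough count of drive.files.get calls (= audit 'download' entries) needed to
# read a whole file when the chunk size doubles after each request (limit off).
# Usage: ./chunk_calls.sh <file_size_in_MB> <initial_chunk_in_MB>
size_mb=${1:-8192}   # e.g. an 8GB file
chunk_mb=${2:-128}   # e.g. --vfs-read-chunk-size=128M
calls=0
read_mb=0
while [ "$read_mb" -lt "$size_mb" ]; do
  read_mb=$((read_mb + chunk_mb))
  calls=$((calls + 1))
  chunk_mb=$((chunk_mb * 2))   # growth when --vfs-read-chunk-size-limit=off
done
echo "~$calls drive.files.get calls to read a ${size_mb}MB file"

For an 8GB file that works out to about 7 calls starting at 128M and about 9 starting at 32M, which is why I don't see the point of going lower than 64M after the initial scans.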

Unfortunately, I'm still running into the download quota exceeded ban. Other misconceptions about this ban:

  1. It has nothing to do with overall API calls, the per-100-seconds query limits, tpslimit, etc.
  2. In most cases, it's not the 10TB daily download limit.
  3. It's a separate unpublished limit per file. There also seems to be another quota for the total downloads allowed per day.
  4. It's directly related to how many times a file is downloaded. Each VFS read chunk = 1 download call. So the first 128M chunk = 1 download, the next chunk is another download, and so on. As the file is played and more chunks are downloaded, each of those counts as 1 download to Google. You can verify this in your admin Drive audit report.

It seems vfs-cache-mode=full does not prevent multiple file downloads. In my experience it ends up stalling Plex for a few minutes instead, even after the file is downloaded and in my /cache directory within 2 seconds of starting playback. I plan on testing playback with the /cache/vfs/tdrive directory added to my mergerfs.

For reference, here is my rclone setup. Yes, I know some of these repeat defaults; I keep them here for easy tweaking and testing. I typically remove the vfs-read-chunk flags if I set vfs-cache-mode=full. (The inline comments below are annotations for this post; strip them out of the real unit file, since systemd doesn't support comments at the end of a line.)

[Unit]
Description=tdrive.service
Wants=network-online.target
After=network-online.target

[Service]
Environment=RCLONE_CONFIG=/opt/rclone/rclone.conf
Type=notify
ExecStart=/usr/bin/rclone mount tdrive: /mnt/tdrive/remote \
--config="/opt/rclone/rclone.conf" \
--log-file="/opt/rclone/logs/mount.log" \
--user-agent="myagent/v1" \
--dir-cache-time="72h" \
--drive-chunk-size="128M" \ # this does have an effect if vfs-cache-mode >= writes, see replies below
--vfs-read-chunk-size="128M" \
--vfs-read-chunk-size-limit="off" \ # keeps the chunk growth unlimited; if growth were capped it would keep doing 128M chunks, which would mean many more file downloads.
--vfs-cache-mode="off" \ # I also tested with full; with full, the read-chunk settings should be ignored. Currently I have this set to full.
--vfs-cache-max-age="72h" \
--cache-dir="/cache" \
--buffer-size="0" \ # the buffer seems to cause buffering issues on a fast 1Gbps connection; I didn't notice a difference when changing buffer-size, except with 0. Buffer 0 is recommended when using vfs-cache-mode=full
--log-level="NOTICE" \
--allow-other \
--fast-list \ #  The docs say it affects stuff like ls calls, which a mount does. See replies below for more info.
--drive-skip-gdocs \
--timeout=1h \
--tpslimit=8
ExecStop=/bin/fusermount -u "/mnt/tdrive/remote" # for some reason this fails for me on systemctl restart, but calling it manually (without sudo) works fine...
Restart=on-failure
User=1000
Group=1000

[Install]
WantedBy=multi-user.target

[Unit]
Description=mnt-tdrive-merged.mount
After=tdrive.service
RequiresMountsFor=/mnt/tdrive/local,/mnt/tdrive/remote

[Mount]
What=/mnt/tdrive/local=RW:/mnt/tdrive/remote=NC
Where=/mnt/tdrive/merged
Type=fuse.mergerfs
Options=async_read=false,use_ino,allow_other,category.action=all,category.create=ff,func.getattr=newest

[Install]
WantedBy=multi-user.target
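
For completeness, assuming the two units are saved under /etc/systemd/system with names matching the descriptions above, enabling them is the usual:

sudo systemctl daemon-reload
sudo systemctl enable --now tdrive.service
sudo systemctl enable --now mnt-tdrive-merged.mount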

/mnt/tdrive/remote # rclone mount
/mnt/tdrive/local # where local files are stored until upload; matches the dir structure of the remote
/mnt/tdrive/merged # the merged view that Plex uses

So I have 2 issues: downloadQuotaExceeded because of the VFS read chunks, and vfs-cache-mode=full stalling Plex. With full it takes minutes, plus stopping and restarting playback, before it plays, and it ends up downloading the file 4 times, probably analyzing it and downloading the entire file each time. Even after the file is in the cache, Plex playback is stalled.

I'm going to try putting /cache/vfs/tdrive into the mergerfs pool after the local drive and making it RO, just to see if that makes a difference for Plex.
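
If that works, the What= line in the mount unit above would become something like this (a sketch; RO marks the cache branch read-only, and /cache/vfs/tdrive is where the VFS cache for this remote lands given --cache-dir=/cache):

What=/mnt/tdrive/local=RW:/cache/vfs/tdrive=RO:/mnt/tdrive/remote=NC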

It honestly seems like the VFS mode needs some work.
It needs to detect when a file is already in the process of being downloaded and reuse that download instead of making a 2nd download call.
Ideally, Google API permitting, it needs a better way to do the read chunks so it's just a single file download call while still reading in chunks. The Gdrive website and GFS are able to do this....

Here is a part of my audit log:

The highlighted item is a video I viewed and seeked around in a few times on the Gdrive website. Notice how it says viewed item while the rclone ones say download.

Notice how Chernobyl was downloaded 4 times. I have vfs-cache-mode=full on, and it still made 4 download calls at the same time. vfs-cache-mode=full was tested on everything after Secret Life of Pets 2; for all of the files listed, playback was started once and stopped as soon as it started playing. Secret Life of Pets 2 was with vfs-cache-mode=off. The 9:12am and 9:20am entries were different playbacks.

Secret Life of Pets was worse: more download calls during playback with cache mode off.

I may try testing the rclone cache wrapper to see if this solves the multiple downloads.

It seems from the audit log that Cops and Alternatino only downloaded twice, while True Detective took 4 downloads with cache-mode=full. I'm guessing file analysis needed to run on that file.

Chernobyl is being played with transcoding; so far still just the original 4 downloads in the audit log and nothing else yet.

So it seems the VFS cache needs some work to capture consecutive reads on the same file.
There needs to be a stream cache mode, vfs-cache-mode=stream, which would cache chunks as they are downloaded rather than waiting for the entire file to be downloaded first.

Is there possibly a better way to do streams in the API? The Gdrive website seems to log a "view" when playing a video on the site (their transcode of it?), and seeking around doesn't cause additional entries in the audit log.

BTW using latest beta with the cache-fix:

rclone v1.48.0-227-g077b4532-beta

  • os/arch: linux/amd64
  • go version: go1.12.9

Thank you - this contains a lot of good info. Love this sort of low-level detail :slight_smile:

Wouldn't your problem chiefly be solved by just setting a very high initial vfs-read-chunk-size to reduce the number of "downloads"? I don't think this uses any additional memory, and I don't think the VFS needs to download a full chunk to start reading it (opening a stream seems just as fast and responsive with a high value). I must admit I don't clearly understand the downsides of a high value and would like to know more of the details... I think a low value may save you from downloading some unnecessary data sometimes, but only for non-streamable data?

But if you think the API could support a smarter way of chunking then that's definitely worth looking into I think. @ncw maybe this is relevant for you to be aware of?

I am 99.9% sure this is false. I just retested it to make sure I'm not giving misinformation.
A mount test upload with 1M chunk size = terrible sawtoothing; TCP never gets to spin up to full speed and the transfer never reaches more than 40Mbit (out of 160Mbit). The same test with 256M => very little sawtoothing and bandwidth utilized at maybe 98-99% of the theoretical max. This is a Gdrive remote setting and all of those tend to work via mount as far as I know.

I think he is right on this one. The reason is that all operations on a mount are called from the OS, and the OS has no concept of a --fast-list. The only way it knows to ask a drive for list information is to iterate through each folder and file. For rclone to translate these calls into a more efficient fast-list would require some sort of advanced caching and analysis of calls before they are actually executed. I'm pretty sure this does not exist, and I'm doubtful it would even be possible to make it work gracefully with the OS (as it might stop sending calls until it gets some responses back).

I have to say that I have not done practical experiments to verify this though - it just makes sense to me that it would not be compatible with how the mount operates. If you have done experiments that show otherwise, I'd like to see your results. NCW can probably just answer this trivially though if he stops by :slight_smile:
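
For what it's worth, --fast-list does kick in for standalone commands where rclone drives the recursion itself, e.g.:

rclone ls tdrive: --fast-list
rclone size tdrive: --fast-list

On a mount, the OS issues the directory lookups one at a time, so there is nothing for --fast-list to batch.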

Lastly, I don't think the cache backend will be a good solution for you. The cache backend needs to download a whole chunk before it can be read. You need small chunks for it to react smoothly and play media especially, but I am sure those small chunks will be the same problem, only worse. It would only help you save download calls in the sense of sometimes having data cached locally from before, but on a 180TB collection, even an 8TB drive of dedicated read cache seems like it would be a drop in the ocean...

Context is important here as it matters. For my context, I was speaking in regards to vfs-cache-mode none, where it has no impact on uploads as it's not used. If you turn on vfs-cache-mode writes, you see it is used.

I don't say this. @ncw does here -> https://github.com/rclone/rclone/issues/2542

vfs-cache-mode full should not be used with Plex streaming as it's counter productive. It has to download the full file for each read. So when you play something in Plex, it would fully download it 3-4 times.

Apologies for putting words in your mouth.

I disagree that it's counterproductive. It will use between 2-4 calls, vs the 10+ it would use if it was set to off. Assuming a file is 8GB, with a 128M starting chunk and unlimited growth:

128 + 256 + 512 + 1024 + 2048 + 4096 ≈ 8GB, which is 2-4 more file downloads than playback needs with full.

It would be more if the file were bigger. It appears that with vfs-cache-mode=full it's between 2-4 downloads total, so for someone like me who keeps running into this download ban, full is better.

To me it seems the 4 separate downloads is a bug, not a feature. It should just be 1 call with consecutive reads.

use full:

  • when the library is already scanned + Plex auto scanning is off + Arr disk scanning is set to manual only + the Arrs connect to Plex to tell it to scan.
  • when using the scheduled tasks for video thumbnails. If you have this on, you must set vfs-cache-mode=full, and I only recommend a max 4-hour window. This option is only an issue for stuff already uploaded; if you turn it on from day 1 so it uses a local file for thumbnails, it's fine.

use writes:

  • when using a program like bazarr that needs it.

use off:

  • when you need to do an initial scan or disk refresh in Arr.

tidbits:
I started seeing downloadQuotaExceeded when I decided to set up and scan Emby + Jellyfin at the same time. Even after the initial scan it kept happening.

I then discovered that Emby/Jellyfin rescan every 12 hours, even with auto scan for changes turned off in the library settings. Go to Scheduled Tasks > Library Scan to delete the 12-hour tasks.

Even after I did this, I still hit download quotas. Perhaps the window is more than 24 hours? Maybe there's a weekly quota too or something...

I stopped Jellyfin and Emby; I wasn't using them anyway. I did this after I found out they make mediainfo calls on every scan, even on files they've already analyzed. Hence why I was getting banned.

It's been 3-4 days since I stopped Emby and Jellyfin, and I still got a downloadQuotaExceeded ban for about 6 hours last night and the day before.

Regardless of this, the VFS needs to reduce the file download API calls; there must be a better way. I had been getting this ban occasionally even before adding Jellyfin/Emby to the mix, perhaps on days where I did a few full scans in Sonarr/Radarr (before the new disk scan option).

Here is the log showing it's just a normal write:


[felix@gemini ~]$ rclone mount gcrypt: /Test -vv --drive-chunk-size 128M
2019/08/25 14:43:56 DEBUG : rclone: Version "v1.48.0" starting with parameters ["rclone" "mount" "gcrypt:" "/Test" "-vv" "--drive-chunk-size" "128M"]
2019/08/25 14:43:56 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/08/25 14:43:57 DEBUG : Encrypted drive 'gcrypt:': Mounting on "/Test"
2019/08/25 14:43:57 DEBUG : Adding path "vfs/forget" to remote control registry
2019/08/25 14:43:57 DEBUG : Adding path "vfs/refresh" to remote control registry
2019/08/25 14:43:57 DEBUG : Adding path "vfs/poll-interval" to remote control registry
2019/08/25 14:43:57 DEBUG : : Root:
2019/08/25 14:43:57 DEBUG : : >Root: node=/, err=<nil>
2019/08/25 14:44:33 DEBUG : /: Attr:
2019/08/25 14:44:33 DEBUG : /: >Attr: attr=valid=1s ino=0 size=0 mode=drwxrwxr-x, err=<nil>
2019/08/25 14:44:33 DEBUG : /: Lookup: name="file.txt"
2019/08/25 14:44:34 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2019/08/25 14:44:34 DEBUG : /: Lookup: name="file.txt"
2019/08/25 14:44:34 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2019/08/25 14:44:34 DEBUG : /: Create: name="file.txt"
2019/08/25 14:44:34 DEBUG : file.txt: Open: flags=O_WRONLY|O_CREATE|O_EXCL
2019/08/25 14:44:34 DEBUG : file.txt: >Open: fd=file.txt (w), err=<nil>
2019/08/25 14:44:34 DEBUG : /: >Create: node=file.txt, handle=&{file.txt (w)}, err=<nil>
2019/08/25 14:44:34 DEBUG : file.txt: Attr:
2019/08/25 14:44:34 DEBUG : file.txt: >Attr: a=valid=1s ino=0 size=0 mode=-rw-rw-r--, err=<nil>
2019/08/25 14:44:34 DEBUG : &{file.txt (w)}: Write: len=131072, offset=0
2019/08/25 14:44:34 DEBUG : &{file.txt (w)}: >Write: written=131072, err=<nil>

Here is the one with --vfs-cache-mode writes added:

rclone mount gcrypt: /Test -vv --drive-chunk-size 128M --vfs-cache-mode writes
2019/08/25 14:52:25 DEBUG : rclone: Version "v1.48.0" starting with parameters ["rclone" "mount" "gcrypt:" "/Test" "-vv" "--drive-chunk-size" "128M" "--vfs-cache-mode" "writes"]
2019/08/25 14:52:25 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/08/25 14:52:26 DEBUG : Encrypted drive 'gcrypt:': Mounting on "/Test"
2019/08/25 14:52:26 DEBUG : vfs cache root is "/home/felix/.cache/rclone/vfs/gcrypt"
2019/08/25 14:52:26 DEBUG : Adding path "vfs/forget" to remote control registry
2019/08/25 14:52:26 DEBUG : Adding path "vfs/refresh" to remote control registry
2019/08/25 14:52:26 DEBUG : Adding path "vfs/poll-interval" to remote control registry
2019/08/25 14:52:26 DEBUG : : Root:
2019/08/25 14:52:26 DEBUG : : >Root: node=/, err=<nil>
2019/08/25 14:53:26 DEBUG : Google drive root 'media': Checking for changes on remote
2019/08/25 14:54:26 DEBUG : Google drive root 'media': Checking for changes on remote


2019/08/25 14:54:31 DEBUG : &{file.txt (rw)}: >Write: written=131072, err=<nil>
2019/08/25 14:54:31 DEBUG : &{file.txt (rw)}: Write: len=131072, offset=104726528
2019/08/25 14:54:31 DEBUG : &{file.txt (rw)}: >Write: written=131072, err=<nil>
2019/08/25 14:54:31 DEBUG : &{file.txt (rw)}: Flush:
2019/08/25 14:54:31 DEBUG : file.txt(0xc0000a87e0): close:
2019/08/25 14:54:31 DEBUG : file.txt: Couldn't find file - need to transfer
2019/08/25 14:54:32 DEBUG : hvbduq4fcbsohe2qf4ea08qf04: Sending chunk 0 length 104883232
2019/08/25 14:54:37 INFO  : file.txt: Copied (new)
2019/08/25 14:54:37 DEBUG : file.txt: transferred to remote
2019/08/25 14:54:37 DEBUG : file.txt(0xc0000a87e0): >close: err=<nil>
2019/08/25 14:54:37 DEBUG : &{file.txt (rw)}: >Flush: err=<nil>
2019/08/25 14:54:37 DEBUG : &{file.txt (rw)}: Release:
2019/08/25 14:54:37 DEBUG : file.txt(0xc0000a87e0): RWFileHandle.Release nothing to do
2019/08/25 14:54:37 DEBUG : &{file.txt (rw)}: >Release: err=<nil>

You can see that the second command uploaded it via one chunk, respecting the drive-chunk-size, while the first ignored it.
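
If anyone wants to reproduce it, writing any file onto the mounted test path while tailing the debug output is enough; roughly (size picked to match the ~100M file in the log above):

dd if=/dev/urandom of=/Test/file.txt bs=1M count=100
# with --vfs-cache-mode writes you get the single "Sending chunk 0" line;
# without it there is no chunked upload line at all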

If you are getting download quota exceeded, you have an old version of rclone in there, something older than mid-2018 when VFS chunked reading was introduced.

I think there is confusion between API calls and download quota? API calls are handled via the quotas page and you get 10 per second by default. If you exceed that, rclone will back off and it's no harm / no foul really.

If you hit the download or upload quota, you get a similar 403 and that operation fails. Upload is 750GB per day. Download is 10TB per day.

Back pre 2018, there was no chunked downloading so you'd quickly hit the 10TB quota because each 'touch' of a file caused it to look like you downloaded it fully. Now, you only get tagged for the request you make.

If you have vfs-cache-mode full and go to play a 70GB file, it has to download the 70GB before it can start playing in Plex. That's what I was saying is counterproductive about playing a file in Plex with full.

The cache backend is indeed slower.

VFS caching is actually useful for most people. Just last night 2 people were playing the same video at the same time. Sometimes I see the same video played, at different times, by multiple people in a week. I don't have a big disk, but I don't mind having a cache that holds files for a week. Additionally, having the file cached would be needed if you turned on video thumbnails. The number of file downloads caused by video thumbnails is why they're not recommended, but if the file is cached locally, that isn't an issue anymore.

For me, it's not about saving bandwidth, it's about saving files.get quota, which seems to have been changed to be lower (at least for me). I haven't hit the 10TB limit, so I can afford a bit of waste on that quota. Perhaps Google sees 1 file download = the full size of the file, even if only the first 128M chunk is requested? If that's the case, then a change to request the whole file instead of a chunk would resolve the repeated download calls. This downloadQuotaExceeded could then just be the 10TB quota limit... Google thinks you downloaded 10TB worth of files, but really it was just a bunch of 128M chunks?

Ideally, to solve this issue:
the VFS needs to read in chunks and download only the data read, but have it count to Google as just 1 download call.

An API call to drive.files.get != a full download of the file. You will see in the GSuite audit log a file 'download' entry for each 'get', but that does not count against your download quota, as you are only getting a chunk of the file.
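
Under the hood, each of those chunked reads is just a ranged media download; roughly this, sketched with curl (FILE_ID and $TOKEN are placeholders):

curl -s -o /dev/null \
  -H "Authorization: Bearer $TOKEN" \
  -H "Range: bytes=0-134217727" \
  "https://www.googleapis.com/drive/v3/files/FILE_ID?alt=media"
# one 'download' entry appears in the audit log for this request,
# even though only 128M of the file was transferred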

Chunked reading in the VFS was a big thing for rclone to get for GD back in mid-2018; that's why the cache backend was good prior to that, because it had chunked reading much like plexdrive and the other providers out there.

If you want to read up on it, you can check out this thread as it has a lot of detail on chunked reading:

An API call to drive.files.get != a full download of the file. You will see in the GSuite audit log a file 'download' entry for each 'get', but that does not count against your download quota, as you are only getting a chunk of the file.

I understand that, but it does count against your download quota. Your assumption is incorrect in my findings: you may only be downloading 128M or 1G out of 50G, but it's still counted as 1 file download. Every "XXX has downloaded XXXX" entry in the audit log counts against the file download quota limit. It doesn't matter if it was a 128M chunk or a 4G chunk; each entry = 1 download towards the quota.

There's a 10TB limit, but there is also a drive.files.get call limit as well, and since vfs-read-chunk-size calls drive.files.get multiple times when a file is read, it does cause downloadQuotaExceeded. For non-Plex users, this is a hard limit to reach for the most part. It becomes increasingly common as your library grows, especially if you have Sonarr/Radarr doing their default 12-hour scans with analyse media on, because they call mediainfo on every file every 12 hours unless you change the settings. Additionally, Plex calls mediainfo on files it hasn't analyzed in a while, and that's at minimum 4 API calls for just one mediainfo call. During playback of a 60GB file (not uncommon in my library) at a 128M read chunk with unlimited growth, that's about 10 downloads as Google sees it (assuming the user plays it once and doesn't stop). I see most people use a 2G limit, so that would be a lot more without unlimited growth.

I had Plex scanning disabled. Until this month I could refresh Sonarr/Radarr 3 times each in a day and rescan Plex a few times a day without hitting the quota ban. Just like you....

The only times I hit the ban were when I had to rescan my entire library (won't ever need to do that again). When I added Jellyfin and Emby, I increased the number of file download entries by at least 16x, simply due to the mediainfo calls that Emby/Jellyfin make on every library scan, regardless of whether they already analyzed the file (2 programs, each analyse call uses 4 downloads, scanning twice a day: 2 x 4 x 2). No wonder people hit the quota easily when running more than just Plex....
I eventually found this out and disabled the scheduled tasks. I have since stopped using both, and I'm still running into the quota limit almost a week later.

Regardless of someone's use case or settings, 4 download calls for consecutive reads / mediainfo calls, etc., should be addressed. That's the root cause of the quota ban.

It's definitely an issue that vfs-cache-mode=full does more than 1 files.get call. It should trap additional reads of that file and not make additional calls...

So if rclone simply solved the issue of the same file being downloaded multiple times at the same time, that would be a big win in reducing the number of downloads.

Update:

I just tested Google File Stream (I had it disabled on my Windows PC for a while now).
It causes something similar, so it's not unique to rclone. Browsing to a folder with videos in File Explorer causes it to "sync" and rapidly generate a bunch of downloaded-file entries. This is because of Explorer thumbnails. I confirmed this by letting Explorer finish getting the thumbnails, then exiting the folder and going back in: no additional downloads. Each file in the folder has 8-15 entries in the audit log from GFS + Explorer thumbnails.

I then tested playback using the default Windows video player through GFS. Immediately 6 download entries. I then seeked around the file, and that caused 15+ more entries. This is why GFS causes the download quota bans too... It would make sense that the GFS read chunk is small, since most people store docs... This shows that rclone is better as long as the read chunk is higher and has growth.

Since my library is all scanned and I'm using the new Radarr/Sonarr options, I should have no random disk scans or file reads. So a high vfs-read-chunk-size would be more beneficial in this case, where I'm 'settled' and have my system working based on API notifications only.

I'll try vfs-read-chunk-size=512M or vfs-read-chunk-size=1G and see if the download calls in the audit log are reduced.
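
In the mount unit above that's just a change to these flags (shown with 1G; 512M is the same idea):

--vfs-read-chunk-size="1G" \
--vfs-read-chunk-size-limit="off" \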

vfs-cache-mode=full is 4-6 downloads max. You can use the thumbnail / capture feature in a 3-4 hour maintenance window.

So vfs-cache-mode=off would be more than that, and IMO I do not recommend it unless you are doing an initial scan. I really don't recommend it if you want to use the video thumbnail / chapter extraction features.

Let the data speak.

I have a 90GB file that I'm using for the test case; it's encrypted, so the file is shown with its encrypted name here:

[image]

I simply ran a loop calling mediainfo on the file. As expected, it creates 4 calls per file.
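
The loop was nothing fancy; something along these lines (the path here is just a placeholder for the mounted file):

while true; do
  mediainfo "/path/to/mount/testfile.mkv" > /dev/null
  sleep 5
done
# each mediainfo run only reads small ranges of the file,
# and showed up as roughly 4 'download' entries per run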

Here is the audit log.

https://docs.google.com/spreadsheets/d/1RGzkD3A2GZUlvlA_oT6ETRNhOHaA1m6JWYj-ybftyEI/edit?usp=sharing

I have 1534 download audits in that log. 1534 * 90GB is 138TB if every download counted as a whole file.

You can see my files.get count jumped up while this was running, but 0 errors anywhere.

That should validate that a download audit does not equal a full download of the file.

Here I am copying that same file now, showing that it is still available to download:

[image]

Each HTTP request for a piece of data shows as a single API call and a single 'download' entry in the audit log, as expected. The lower the chunk size, the more API calls it makes and the more downloads you'd see in the audit log.

You are misunderstanding me. I'm not saying a download audit = a full download of the file. I'm saying 1 download entry counts as a download towards this particular quota limit. I'm saying it's a separate quota; it's not the unpublished 10TB limit quota.

I'm saying 1 entry in the audit log counts as 1 download against the file download limit quota. It doesn't matter how much you download, 1 audit entry = 1 file download = 1 against this file-download quota limit.

I'm not sure yet of the max number of audit download entries allowed for a file within a 24hr period, but I'm thinking it's more than 25 but less than 100 a day. Which is easily possible when using Sonarr + Radarr + Jellyfin + Emby at their defaults, which is to scan files 2x a day. It's also easily possible if you had to rescan Plex + Sonarr + Radarr from scratch on a library over 100TB.

So far vfs-cache-mode=full has been working fine: 2 download calls during continuous playback, 4 calls if the file needed to be analyzed by Plex first. This is better for maintaining low API usage than using the VFS read chunks.

As I've been saying, it's about how many times the file is downloaded, not how much data was actually downloaded. 1 audit entry = 1 download of that file, regardless of the amount downloaded.

The only downside I see with vfs-cache-mode=full is the slower startup time. If there were a 'stream' cache mode that started playback while the rest of the file was still downloading, while keeping it a single download call, that would be ideal for playback.

I haven't tried vfs-read-chunk-size=100G yet to see if that will basically do the same thing.

I just showed you via the data that 1 download entry in the audit log does nothing other than log a file request from the API.

The log I shared shows over 1500 audit downloads in less than an hour, really proving there is not a limit on 'download' API hits. People run with 10M chunks, making way more API calls per file than the defaults.

If you play an 80GB 4K movie, it would take a few minutes to start on a gigabit line with full cache mode.

For a data point, I just rescanned my Emby library at:

[felix@gemini ~]$ rclone about GD:
Used:    76.831T

and that took about 24 hours or so from start to finish, all while running Plex without an issue, along with Sonarr/Radarr/etc.

Here is the API console log from when I scanned, pushing over 2 files.get calls per second for quite a number of hours:

Number of files.get calls in that roughly 24-hour period.

The only reason you'd get a Download Quota error message these days would be if you had a version before Summer 2018.

If you have some logs of what you are describing, please share them.

The only reason you'd get a Download Quota error message these days would be if you had a version before Summer 2018.

That's simply untrue; many people have reported this quota ban numerous times, from what I've found on the net and on these forums.

This is a 30-day usage report; you are nowhere near my numbers. Note: I deleted and changed OAuth keys in an attempt to get unbanned, which is why "Errors by API method" shows the errors but the counts underneath are 0; that's a side effect of doing that.

Examples please, as any download quota issues these days are mainly due to old versions. It's non-existent anymore.

I think my config is much more streamlined as my goal is reducing API hits.

Can you click on quotas and share this screen?

I've been using your config, your systemd files, 128M vfs-read-chunk and cache off; I only deviated from that today.

As you can see, nowhere close to the limits. As I've been saying, there are more quotas than just the ones listed here, more than we know.

The increased usage you see is from when I scanned in Jellyfin and Emby, which got me banned after a day; the scan finished the next day. Then I kept getting banned (because Emby and Jellyfin run mediainfo on every scan no matter what, and the default was every 12 hours).

I am not sure what you mean by 'banned'. Do you have any log files or examples of what that means?

Jellyfin, being a fork of Emby, should ffprobe files rather than run mediainfo, although they basically do the same thing.

You are right, ffprobe, not mediainfo, but yes same thing.

What I mean by "banned" is nothing but 403s on all drive.files.get calls, usually for a 24hr period, sometimes less. Something like: open file failed: googleapi: Error 403: The download quota for this file has been exceeded.

When this happens, Plex will spin until an h4 transcode error occurs (even when direct playing). The mount log will be full of 403s and retry attempts.

This download quota is a mystery. There might be various variables that go into it. We simply don't know; we are all making guesses when it comes to any unpublished limit, including the "10TB download limit" you can't find documented anywhere....

It doesn't seem to be just on a per file basis either. Once you hit the 403 wall, it doesn't matter what you try to play.
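
If you want to see how hard you're hitting it, grepping the mount log for the 403 text works (log path is the one from my unit file above):

grep -c "download quota for this file has been exceeded" /opt/rclone/logs/mount.log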

What's the size / file count of your GD?

If that error is occurring for you frequently, you should put your mount in debug and share the logs as we have something else going on.

[image: drive-usage]