Download Quota Exceeded & VFS

Context matters here. I was speaking in regard to vfs-cache-mode none, where the cache has no impact on uploads because it isn't used. If you turn on vfs-cache-mode writes, you can see it is used.

I didn't say this. @ncw does here ->

vfs-cache-mode full should not be used with Plex streaming as it's counter productive. It has to download the full file for each read. So when you play something in Plex, it would fully download it 3-4 times.

Apologies for putting words in your mouth.

I disagree that it's counterproductive. It will use between 2 and 4 calls, vs the 10+ it would use if it were set to off. Assuming a file is 8GB:

128M + 256M + 512M + 1024M + 2048M + 4096M = 6 chunked reads, i.e. 2-4 more file downloads than full needs for playback.

It would be more if the file were bigger. It appears that with vfs-cache-mode=full it's between 2-4 downloads total. So for someone who keeps running into this download ban, this is better.

To me, the 4 separate downloads seem like a bug, not a feature. It should just be 1 call with consecutive reads.
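The chunk-doubling arithmetic above can be sketched in shell. This is a rough model of rclone's default --vfs-read-chunk-size 128M with unlimited growth, treating 8 GB as the roughly 8000 MB the sum above covers; the loop itself is illustrative, not rclone's actual code:

```shell
# Rough model: each consecutive read issues one range request, and the
# requested chunk doubles each time, starting at 128M (rclone's default).
chunk=128   # first chunk size in MB
total=0     # MB covered so far
calls=0     # each range request = one "download" entry in the audit log
while [ "$total" -lt 8000 ]; do   # ~8 GB file, as in the sum above
    total=$((total + chunk))
    calls=$((calls + 1))
    chunk=$((chunk * 2))          # chunk size doubles on consecutive reads
done
echo "$calls chunked reads"
```

For an ~8 GB file this comes out to 6 range requests, matching the six terms in the sum.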

use full:

  • when the library is already scanned + Plex auto scanning is off + Arr disk scanning is set to manual only + the Arrs connect to Plex to tell it to scan.
  • when using the scheduled tasks for thumbnails. If you have this on, you must set vfs-cache-mode=full, and I only recommend a max 4-hour window. This option is only an issue for stuff already uploaded; if you turn it on from day 1 so it uses a local file for thumbnails, it's fine.

use writes:

  • when using a program like bazarr that needs it.

use off:

  • when you need to do an initial scan or disk refresh in Arr.

I started seeing downloadQuotaExceeded when I decided to set up and scan Emby + Jellyfin at the same time. Even after the initial scan, it kept happening.

I then discovered that emby/jellyfin will rescan every 12 hours, even if I turned off auto scan for changes in the library settings. Go to Scheduled Tasks > Library Scan to delete the 12 hour tasks.

Even after I did this, I still hit download quotas. Perhaps the window is more than 24 hours? Maybe there's a weekly quota too or something...

I stopped Jellyfin and Emby; I wasn't using them anyway. I did this after I found out they make mediainfo calls even on files they've already analyzed, on every scan. Hence why I was getting banned.

It's been 3-4 days since I stopped Emby and Jellyfin, and I still got a downloadQuotaExceeded for about 6 hours last night and the day before.

Regardless of this, the VFS needs to reduce the file download API calls; there must be a better way. I have been getting this ban occasionally even before adding Jellyfin/Emby to the mix, perhaps on the days where I did a few full scans (before the new disk scan option) in Sonarr/Radarr.

Here is the log showing it's just a normal write:

[felix@gemini ~]$ rclone mount gcrypt: /Test -vv --drive-chunk-size 128M
2019/08/25 14:43:56 DEBUG : rclone: Version "v1.48.0" starting with parameters ["rclone" "mount" "gcrypt:" "/Test" "-vv" "--drive-chunk-size" "128M"]
2019/08/25 14:43:56 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/08/25 14:43:57 DEBUG : Encrypted drive 'gcrypt:': Mounting on "/Test"
2019/08/25 14:43:57 DEBUG : Adding path "vfs/forget" to remote control registry
2019/08/25 14:43:57 DEBUG : Adding path "vfs/refresh" to remote control registry
2019/08/25 14:43:57 DEBUG : Adding path "vfs/poll-interval" to remote control registry
2019/08/25 14:43:57 DEBUG : : Root:
2019/08/25 14:43:57 DEBUG : : >Root: node=/, err=<nil>
2019/08/25 14:44:33 DEBUG : /: Attr:
2019/08/25 14:44:33 DEBUG : /: >Attr: attr=valid=1s ino=0 size=0 mode=drwxrwxr-x, err=<nil>
2019/08/25 14:44:33 DEBUG : /: Lookup: name="file.txt"
2019/08/25 14:44:34 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2019/08/25 14:44:34 DEBUG : /: Lookup: name="file.txt"
2019/08/25 14:44:34 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2019/08/25 14:44:34 DEBUG : /: Create: name="file.txt"
2019/08/25 14:44:34 DEBUG : file.txt: Open: flags=O_WRONLY|O_CREATE|O_EXCL
2019/08/25 14:44:34 DEBUG : file.txt: >Open: fd=file.txt (w), err=<nil>
2019/08/25 14:44:34 DEBUG : /: >Create: node=file.txt, handle=&{file.txt (w)}, err=<nil>
2019/08/25 14:44:34 DEBUG : file.txt: Attr:
2019/08/25 14:44:34 DEBUG : file.txt: >Attr: a=valid=1s ino=0 size=0 mode=-rw-rw-r--, err=<nil>
2019/08/25 14:44:34 DEBUG : &{file.txt (w)}: Write: len=131072, offset=0
2019/08/25 14:44:34 DEBUG : &{file.txt (w)}: >Write: written=131072, err=<nil>

Here is the same write with --vfs-cache-mode writes added:

rclone mount gcrypt: /Test -vv --drive-chunk-size 128M --vfs-cache-mode writes
2019/08/25 14:52:25 DEBUG : rclone: Version "v1.48.0" starting with parameters ["rclone" "mount" "gcrypt:" "/Test" "-vv" "--drive-chunk-size" "128M" "--vfs-cache-mode" "writes"]
2019/08/25 14:52:25 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/08/25 14:52:26 DEBUG : Encrypted drive 'gcrypt:': Mounting on "/Test"
2019/08/25 14:52:26 DEBUG : vfs cache root is "/home/felix/.cache/rclone/vfs/gcrypt"
2019/08/25 14:52:26 DEBUG : Adding path "vfs/forget" to remote control registry
2019/08/25 14:52:26 DEBUG : Adding path "vfs/refresh" to remote control registry
2019/08/25 14:52:26 DEBUG : Adding path "vfs/poll-interval" to remote control registry
2019/08/25 14:52:26 DEBUG : : Root:
2019/08/25 14:52:26 DEBUG : : >Root: node=/, err=<nil>
2019/08/25 14:53:26 DEBUG : Google drive root 'media': Checking for changes on remote
2019/08/25 14:54:26 DEBUG : Google drive root 'media': Checking for changes on remote

2019/08/25 14:54:31 DEBUG : &{file.txt (rw)}: >Write: written=131072, err=<nil>
2019/08/25 14:54:31 DEBUG : &{file.txt (rw)}: Write: len=131072, offset=104726528
2019/08/25 14:54:31 DEBUG : &{file.txt (rw)}: >Write: written=131072, err=<nil>
2019/08/25 14:54:31 DEBUG : &{file.txt (rw)}: Flush:
2019/08/25 14:54:31 DEBUG : file.txt(0xc0000a87e0): close:
2019/08/25 14:54:31 DEBUG : file.txt: Couldn't find file - need to transfer
2019/08/25 14:54:32 DEBUG : hvbduq4fcbsohe2qf4ea08qf04: Sending chunk 0 length 104883232
2019/08/25 14:54:37 INFO  : file.txt: Copied (new)
2019/08/25 14:54:37 DEBUG : file.txt: transferred to remote
2019/08/25 14:54:37 DEBUG : file.txt(0xc0000a87e0): >close: err=<nil>
2019/08/25 14:54:37 DEBUG : &{file.txt (rw)}: >Flush: err=<nil>
2019/08/25 14:54:37 DEBUG : &{file.txt (rw)}: Release:
2019/08/25 14:54:37 DEBUG : file.txt(0xc0000a87e0): RWFileHandle.Release nothing to do
2019/08/25 14:54:37 DEBUG : &{file.txt (rw)}: >Release: err=<nil>

You can see the second command uploaded it in one chunk, respecting --drive-chunk-size, while the first command ignored it.

If you are getting download quota exceeded, you have an old version of rclone in there, something older than mid-2018 when VFS chunked reading was introduced.

I think there is confusion between API calls and download quota. API calls are handled via the quotas page, and you get 10 per second by default. If you exceed that, it will back off; no harm / no foul really.

If you hit download or upload quota, you get a similar 403 and can't do the operation. Upload is 750GB per day. Download is 10TB per day.

Back pre-2018, there was no chunked downloading, so you'd quickly hit the 10TB quota because each 'touch' of a file made it look like you downloaded it fully. Now, you only get tagged for the requests you make.

If you have vfs-cache-mode full and go to play a 70GB file, it has to download all 70GB before it can start playing in Plex. That's what I was saying is counterproductive to playing a file in Plex with full.

The cache backend is indeed slower.

VFS caching is actually useful for most people. Just last night 2 people were playing the same video at the same time, and sometimes I see the same video played, at different times, by multiple people in a week. I don't have a big disk, but I don't mind having a cache that holds files for a week. Additionally, having the file cached would be needed if you turned on video thumbnails. The number of file downloads caused by video thumbnails is why they're not recommended, but if the file is cached locally, that isn't an issue anymore.

For me, it's not about saving bandwidth, it's about saving files.get quota, which seems to have been lowered (at least for me). I haven't hit the 10TB limit, so I can afford a bit of waste on that quota. Perhaps Google counts 1 file download as the full size of the file, even if only 128M (the first chunk) is requested? If that's the case, then requesting the whole file instead of a chunk would resolve the repeated download calls. This downloadQuotaExceeded could just be the 10TB quota: Google thinks you downloaded 10TB worth of files, but really it was just a bunch of 128M chunks.

Ideally to solve this issue:
VFS needs to read in chunks and download only the data read, but to google it's just 1 download call.

An API call to drive.files.get != a full download of the file. You will see in the GSuite audit log a file 'download' for each 'get', but that does not count against your download as you are only getting a chunk of the file.

Chunked reading in the VFS was a big thing for rclone to get for GD back in mid-2018. That's why the cache backend was the good option prior: it had chunked reading, much like plexdrive and the other tools out there.

If you want to read up on it, you can check out this thread as it has a lot of detail on chunked reading:

An API call to drive.files.get != a full download of the file. You will see in the GSuite audit log a file 'download' for each 'get', but that does not count against your download as you are only getting a chunk of the file.

I understand that, but it does count against your downloads. In my findings your assumption is incorrect: you may only be downloading 128M or 1G out of 50G, but it's still counted as 1 file download. Every 'XXX has downloaded XXXX' file entry in the audit log counts against the file download quota limit. It doesn't matter if it was a 128M chunk or a 4G chunk; each entry = 1 download towards the quota.

There's a 10 TB limit, but there is also a drive.files.get call limit as well, and since vfs-read-chunk-size calls drive.files.get multiple times when a file is read, it does cause downloadQuotaExceeded. For non-Plex users, this is a hard limit to reach for the most part, but it becomes increasingly common as your library grows, especially if you have Sonarr/Radarr doing their default 12-hour scans with analyse media on. This is because Sonarr/Radarr call mediainfo on every file every 12 hours unless you change the settings. Additionally, Plex calls mediainfo on files it hasn't analyzed in a while, and a mediainfo call alone is at minimum 4 API calls. During playback of a 60GB file (not uncommon in my lib) at a 128M read chunk with unlimited growth, that's (assuming the user plays once and doesn't stop...) about 10 downloads as Google sees them. I see most people have a 2G limit, so that would be a lot more without unlimited growth.
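That arithmetic can be sketched under the same assumptions (128M starting chunk that doubles, one audit-log "download" per range request); this is a model of the counting, not rclone's actual implementation, and it includes the 2G-cap case since most configs limit chunk growth:

```shell
# count_calls: model the number of range requests (audit-log "downloads")
# needed to read a file of $1 MB, starting at a 128M chunk that doubles,
# optionally capped at $2 MB (0 = unlimited growth).
count_calls() {
    chunk=128 total=0 calls=0
    while [ "$total" -lt "$1" ]; do
        total=$((total + chunk))
        calls=$((calls + 1))
        next=$((chunk * 2))
        [ "$2" -gt 0 ] && [ "$next" -gt "$2" ] && next=$2
        chunk=$next
    done
    echo "$calls"
}
count_calls 61440 0      # 60 GB file, unlimited growth
count_calls 61440 2048   # same file, chunk growth capped at 2G
```

With unlimited growth this gives 9 requests for a 60 GB file, in line with the ~10 estimated above; with a 2G cap it balloons to 34.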

I had Plex scanning disabled. Until this month, I could refresh Sonarr/Radarr 3 times each in a day and rescan Plex a few times a day without hitting the quota ban. Just like you...

The only times I hit the ban were when I had to rescan my entire library (won't ever need to do that again). When I added Jellyfin and Emby, I increased the number of file download entries by at least 16x, simply due to the mediainfo calls that Emby/Jellyfin make every library scan, regardless of whether they already analyzed the file (2 programs, each analyse call uses 4 downloads, scans twice a day: 2 x 4 x 2). No wonder people hit the quota easily when running more than just Plex...
I eventually found this out and disabled the scheduled tasks. I have since stopped using both, and am still running into the quota limit almost a week later.

Regardless of someone's use case or settings, 4 download calls for consecutive reads / mediainfo calls / etc. should be addressed. That's the root cause of the quota ban.

It's definitely an issue that vfs-cache-mode=full does more than 1 download. It should trap additional reads for that file and not make additional calls...

So if rclone simply solved the case where the same file is being downloaded multiple times at once, that would be a big win for reducing the number of downloads.


I just tested Google File Stream (I'd had it disabled on my Windows PC for a while now).
It causes something similar, so it's not unique to rclone. Using File Explorer to browse a folder with videos causes it to "sync" and rapidly generate a bunch of downloaded-file entries. This is because of Explorer thumbnails. I confirmed this by letting Explorer finish fetching the thumbnails, then leaving the folder and going back: no additional downloads. Each file in the folder has 8-15 entries in the audit log from GFS + Explorer thumbnails.

I then tested playback using the default Windows video player through GFS. Immediately, 6 download entries. I then seeked around the file, and that caused 15+ more entries. This is why GFS causes the download quota bans... It would make sense for GFS's read chunk to be small, since most people store docs. This proves rclone is better, as long as the read chunk is higher and allowed to grow.

Since my library is all scanned and I'm using the new radarr/sonarr options, I should have no random disk scans or file reads. So a high vfs-read-chunk-size would be more beneficial in this case, where I'm 'settled' and have my system working based on api notifications only.

I'll try vfs-read-chunk-size=512M or vfs-read-chunk-size=1G and see if the download calls in the audit log are reduced.
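For reference, a hypothetical mount line for that test. The flags are real rclone flags; the remote and mountpoint are just the ones from the logs earlier in the thread:

```shell
# Larger starting chunk + unlimited growth, so a long consecutive read
# needs fewer range requests (fewer audit-log "downloads").
rclone mount gcrypt: /Test \
    --vfs-read-chunk-size 1G \
    --vfs-read-chunk-size-limit off \
    -vv
```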

vfs-cache-mode=full is 4-6 downloads max. You can use the thumbnail / capture feature in a 3-4 hour maintenance window.

So vfs-cache-mode=off would be more than that, and IMO I do not recommend it unless you are doing an initial scan. I really don't recommend it if you want to use the video thumbnail / chapter extraction features.

Let the data speak.

I have a 90GB file that I'm using for the test case; it's encrypted, so the file is shown by its encrypted name here:


I simply ran a loop that runs mediainfo on the file. As expected, it creates 4 calls per file.
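The loop was nothing fancy; a sketch of the shape of the test, where the path, the run count, and the exact per-run call count are approximations of what was described (the actual loop would need mediainfo installed and the mount active, so it's commented out here):

```shell
# N mediainfo probes against one mounted file, each of which showed up as
# ~4 drive.files.get "download" entries in the audit log.
runs=400
calls_per_run=4
# for i in $(seq 1 "$runs"); do
#     mediainfo "/Test/encrypted-90GB-file" > /dev/null
# done
echo $((runs * calls_per_run))   # expected audit-log download entries
```

Roughly 400 runs at ~4 calls each lines up with the ~1500 download audits mentioned below.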

Here is the audit log.

I have 1534 download audits in that log. 1534 * 90GB is 138TB if every download counted as a whole file.

You can see my files.get count jumped up while this was running, but 0 errors anywhere.

That should validate that a download audit does not equal a full download of the file.

Here is my copying that same file now showing that it is still available to download:


Each HTTP request for a piece of data shows as a single API call and a single audit log entry for a download, as expected. The lower the chunk size, the more API calls it makes and the more downloads you'd see in the audit log.

You are misunderstanding me. I'm not saying a download audit = a full download of the file. I'm saying 1 download entry counts as a download towards this particular quota limit. It's a separate quota; it's not the unpublished 10TB limit.

I'm saying 1 entry in the audit log counts as 1 download against the file download limit quota. It doesn't matter how much you download, 1 audit entry = 1 file download = 1 against this file-download quota limit.

I'm not sure yet of the max number of audit download entries allowed for a file within a 24-hour period, but I'm thinking it's more than 25 but less than 100 a day. That is easily possible when using Sonarr + Radarr + Jellyfin + Emby at their defaults, which is to scan files 2x a day. It's also easily possible if you had to rescan Plex + Sonarr + Radarr from scratch on a library over 100TB.

So far vfs-cache-mode=full has been working fine: 2 download calls during continuous playback, 4 calls if the file needed to be analyzed by Plex first. This is better for maintaining low API usage than using vfs-read-chunk-size.

As I've been saying, it's about how many times the file is downloaded, not how much data was actually downloaded. 1 audit entry = 1 download of that file, regardless of the amount downloaded.

The only downside I see with vfs-cache-mode=full is the slower startup time. If there were a 'stream' cache mode that started playback while the rest of the file downloaded, while keeping it a single download call, that would be ideal for playback.

I haven't tried vfs-read-chunk-size=100G yet to see if that will basically do the same thing.

I just showed you via the data that 1 download entry in the audit log does nothing other than log a file request from the API.

The log I shared shows over 1500 audit downloads in less than an hour, really proving there is no limit on 'download' API hits. People run with 10M chunks, making way more API calls per file than the defaults.

If you play an 80GB 4K movie, it would take a few minutes to start on a gigabit line with full cache mode.

For a data point, I just rescanned my Emby library at:

[felix@gemini ~]$ rclone about GD:
Used:    76.831T

and that took about 24 hours or so from start to finish, all while running Plex along with Sonarr/Radarr/etc. without an issue.

Here is the API console log from when I scanned, pushing over 2 files.get per second for quite a number of hours:

Number of files.get in that 24-hour-or-so period.

The only reason you'd get a Download Quota error message these days would be if you had a version before Summer 2018.

If you have some logs of what you are describing, please share them.

The only reason you'd get a Download Quota error message these days would be if you had a version before Summer 2018.

That's simply untrue; many people have reported this quota ban numerous times, from what I've found on the net and these forums.

This is a 30-day usage report; you are nowhere near my numbers. Note: I deleted and changed OAuth keys in an attempt to get unbanned, which is why "errors by API method" shows the errors but the counts underneath are 0, a side effect of doing that.

Examples please, as any download quota issues are mainly due to old versions. It's non-existent anymore.

I think my config is much more streamlined as my goal is reducing API hits.

Can you click on quotas and share this screen?

I've been using your config, your systemd files, 128M vfs-read-chunk, and cache off; I only deviated from that today.

As you can see, nowhere close to the limits. As I've been saying there are more than just these quotas listed, and more than we know.

The increased usage you see is from when I scanned in Jellyfin and Emby, which got me banned after a day; it finished the next day. Then I kept getting banned (because Emby and Jellyfin run mediainfo on scans no matter what, and the default was every 12 hours).

I am not sure what you mean by 'banned'. Do you have any log files or examples of what that means?

Jellyfin, being a fork of Emby, should ffprobe files rather than run mediainfo, although they basically do the same thing.

You are right, ffprobe, not mediainfo, but yes same thing.

What I mean by "banned" is nothing but 403s on all drive.files.get, usually for a 24-hour period, sometimes less. Something like: open file failed: googleapi: Error 403: The download quota for this file has been exceeded.

When this happens, Plex will spin until an h4 transcode error occurs (even when direct playing). In the mount log, it will be full of 403s and retry attempts.
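To confirm it's this failure mode, you can just count the quota errors in a debug-level mount log. A sketch, where the log path is hypothetical and the sample lines below are only stand-ins shaped like the 403 quoted above:

```shell
# Create a stand-in log with the kind of lines a banned mount produces,
# then count the quota errors; on a real system you'd grep your actual
# mount log instead of the stand-in.
cat > /tmp/rclone-mount.log <<'EOF'
2019/08/25 20:01:11 ERROR : file.mkv: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
2019/08/25 20:01:12 DEBUG : file.mkv: retrying
2019/08/25 20:01:13 ERROR : file.mkv: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
EOF
grep -c "downloadQuotaExceeded" /tmp/rclone-mount.log
```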

This download quota is the mystery here. Various variables might go into this quota; we simply don't know. We are all guessing when it comes to any unpublished limit, including the "10TB download limit" you can't find documented anywhere...

It doesn't seem to be just on a per file basis either. Once you hit the 403 wall, it doesn't matter what you try to play.

What's the size / file count of your GD?

If that error is occurring for you frequently, you should put your mount in debug and share the logs as we have something else going on.


And your setup is Plex/Emby/Jellyfin all running at the same time? I can speak from experience that Plex and Emby should be very light on API hits once your library is scanned.

If you really want to reduce API hits (which I really don't think is your issue), you can increase 128->256 or 512, as that's basically an HTTP range request for a file and doesn't mean you'd download 256M at a time. It basically wastes a little bandwidth to reduce the number of API hits.

The download quota exceeded though has my gut telling me something else is going on but we'd need some logs in debug to figure that out.

What's your 1 day file / get? I can run some mediainfo loops and get those numbers pretty quick if you feel that's a possible issue and rule it out.

There isn't a need for this. If a file is already analyzed, it just checks the time stamp, resulting in no analysis/mediainfo/ffprobe on the file. Same for both Plex and Emby (I don't use Jellyfin, so I can't comment on that).

With analysis off, the same as above, nothing but a directory list happens.
