[gDrive] download quota exceeded for this file, so you can't download it at this time

Hi,

I use rclone v1.34-38 to mount my Google Drive on a dedicated server.

Today I ran into the issue that for some files I receive the following:

cannot open ‘xxxx.mkv’ for reading: Input/output error.

When checking the Google Drive web interface for the specific file and trying to download it manually, I get:

download quota exceeded for this file, so you can’t download it at this time

So I guess rclone can’t handle this specific type of error right now?

Kind regards

Also: the --dump-headers option of rclone only revealed “403 Forbidden” without any further information. Would it be possible to get information like “user / daily limit” integrated into rclone?

Google has quotas for users, but I don’t think they are visible to users in any way. The only way you can see them is by getting a 403 error.

That is a shame… Might be worth trying --dump-bodies to see if there is a JSON error message with more info in it.
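A rough sketch of how that could look (remote name and path are placeholders, not from this thread): re-run the failing transfer with body dumping enabled so the full error body from the Drive API ends up in the log:

rclone copy remote:path/xxxx.mkv /tmp/test -vv --dump-bodies

The Drive API usually returns a JSON error with a “reason” field (e.g. downloadQuotaExceeded) alongside the 403, which is more telling than the bare status line.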

I don’t think Google makes it public :(

Likely related:
https://forum.rclone.org/t/google-drive-quota-reached-seemingly-linked-to-folder-scanning

I am not sure Google has a daily bandwidth limit that can be exceeded; I have transferred over 10 terabytes in under 24 hours before. Since moving to a mount which does not use queries I have not hit any kind of quota issue, except for when I used rclone to sync my GDrive twice in a day, which transferred little data but did many checks.

What do you mean by that?

I currently upload data in small portions throughout the day, so possibly trying to reduce the total number of “new rclone upload sessions” helps with the Google limit?

I believe queries are checking what is available in a given directory. Checking the contents of one folder may be considered one query. It doesn’t use queries because it caches the entire folder structure instead of constantly asking Google what is in folder X to retrieve a list of files. One thing that rclone does that can use a lot of queries when uploading is check for duplicates.

If you’re receiving a quota limit simply by uploading throughout the day, and you don’t heavily access the mount, it’s likely that rclone checking for duplicates when you upload is the cause. If you upload a single file (or anything) into a folder with many subdirectories, I believe rclone checks each folder to make sure you’re not uploading any duplicate files. If I am correct about queries being the cause of Google’s download limits, you should benefit by adding --no-traverse when uploading files. This will stop rclone from checking every subdirectory to make sure you’re not uploading a file which already exists.
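A minimal sketch of such an upload, assuming a remote named remote: and a placeholder local path (both hypothetical, not from this thread):

rclone copy /local/incoming/file.mkv remote:Media -v --no-traverse

With --no-traverse, rclone looks up only the destination objects it needs rather than listing the whole destination directory tree first, which cuts down on listing queries when uploading into a large folder structure.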


Last week I uploaded an audio file of a popular YouTube video. I added its Google Drive link and shared it on YouTube, but many people downloaded the file and I got an error: “download quota exceeded for this file, so you can’t download it at this time”.

I found that some third-party file hosting services can overcome this issue. One such service is Kiuna. You can see how to bypass this Google Drive error in this article.

Credit: techiestechguide

I am getting the same error now.

This is the first time I have gotten this error.

But… today I have made NO downloads or uploads bigger than 750 GB in total. I only uploaded roughly 250 GB and downloaded 50 GB.

After how long can I download again? Uploading still works, actually.

You got a 24-hour ban. What’s your config?

First: I tried again a few seconds ago and now it works! The ban is over, but that was not 24 hours?!

My config is:
rclone mount -vv --vfs-cache-mode writes --dir-cache-time 240m0s --allow-non-empty --allow-other --ignore-times --transfers 200 --checkers 200 remote: /media/mount

But one day ago I changed the transfers and checkers from 60 to 200. Is that responsible for the ban? Or the --dir-cache-time?

Are you using the cache as well?

transfers/checkers do nothing with the mount.

No, I only use dir-cache-time to cache the folder structure for faster browsing. But I think this does nothing. Can I minimize the API calls?

If you use the mount and GD, you need to use the cache or you’ll get banned. dir-cache works with the cache, so that’ll do nothing if it isn’t set up.
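A minimal sketch of what a cache remote could look like in rclone.conf, assuming your existing Drive remote is called gdrive: (the remote names and sizes here are placeholders, not from this thread):

[gcache]
type = cache
remote = gdrive:
chunk_size = 10M
info_age = 1d
chunk_total_size = 10G

You would then mount gcache: instead of gdrive:, so listings and file chunks are served from the local cache and far fewer API calls hit Google directly.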

OK, what are the best settings to use the cache with a small HDD and 1 GB RAM?

Is that a dir-cache-time of 240 months? If so, I’m considering a high dir-cache-time myself since my files don’t change either.

A high cache info time works well to make the drive appear more snappy when navigating, and also cuts down on a lot of time when syncing since it doesn’t have to make a lot of list requests to traverse everything.

That said, be very aware of the potential downsides.
Most importantly, very long info expiry timers should only be considered safe if ALL changes to the remote happen through this cache point. Otherwise, file info can become out of date and incorrectly displayed. In the best case you simply don’t see file changes done outside your cache. At worst, files can get corrupted because rclone assumes a wrong size when it works with them.

I use long cache expiry timers myself, but only after thoroughly understanding them. Even other rclone instances on the same machine must be considered here...

Common pitfalls to be aware of:

  • Confusing VFS cache and backend cache parameters. Read up on which apply to which and know that these do not coordinate with each other. Also consider their order if you use the cache backend.
  • Setting the VFS --dir-cache-time higher than --cache-info-age (this must be avoided). If you have a high --cache-info-age (assuming you use the cache backend, that is), you can leave the VFS timer at its default; see the sketch after this list.
  • Forgetting that other rclone instances, such as daily syncs, also have to go through the cache to avoid your cache info becoming incorrect.
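A rough mount sketch under those assumptions (remote name and mount point are placeholders): keep the VFS directory cache shorter than the cache backend’s info age, so the cache backend stays the authority on what exists:

rclone mount gcache: /media/mount --dir-cache-time 5m --cache-info-age 48h --vfs-cache-mode writes --allow-other

Here gcache: is a cache remote wrapping the Drive remote, --cache-info-age applies to the cache backend’s directory and file metadata, and --dir-cache-time only controls how long the mount’s own VFS layer remembers listings.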

Please don’t bump year-old posts.