Rclone Mount and Error 403 Download Quota exceeded

Hi,

I mount my team drive in Linux:

rclone mount MY_DRIVE:/ /home/upload --allow-other --vfs-cache-mode writes

It works, and I have 100 users behind the same IP, all reading different files from this drive, so chunks are requested for every file.

But after a while, the drive stays mounted but it's impossible to get any transfer, and I see this error:

Error 403: The download quota for this file has been exceeded, downloadQuotaExceeded

root@30011:~/.config/rclone# rclone version
rclone v1.50.1
- os/arch: linux/amd64
- go version: go1.13.4

I read that this issue was solved, but those threads are old, like 2 years ago, so I don't know if a workaround is possible now.

Thanks for reading, and I hope all this information is complete enough to give a correct picture of the issue.

That message says "the download quota has been exceeded" - that is Google's way of telling you that you've downloaded that file too many times, and you'll have to wait 24h before you can download it again, I think.

Hi ncw!

Thanks as always for reading and for the support.

So if I limit requests per 100 seconds, would that solve it?
Or is there no workaround to fix it?
And is it possible to see the request limit for a file?
The documented limits are:

  • 10 TB/day in download
  • 750 GB/day in upload
  • 10,000 queries per 100 seconds
  • 1,000,000,000 queries/day

But what is the limit for a single file?
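
For example, I could cap the API queries with rclone's --tpslimit flag; a minimal sketch on top of my mount (the numbers are only a guess):

rclone mount MY_DRIVE:/ /home/upload \
  --allow-other \
  --vfs-cache-mode writes \
  --tpslimit 10 \
  --tpslimit-burst 100

10 queries per second would stay well below 10,000 per 100 seconds.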

No, you can only download a file so many times per day, and there are the 10TB download and 750GB upload daily quotas.

It's not the number of API hits.

Hi ani,

Thanks for joining this thread.

I mounted my drive, and traffic was 100MB/s for a couple of hours, then the 403 error.
So I guess it isn't possible that I'm hitting 10TB/day, right? (100MB/s for a couple of hours is well under 1TB.)

There must be something else.



It's not documented, but if you have 100 users hitting something, that's probably it. You can always contact Google support to see why you hit the error as well.

Oh, I see.

Thanks a lot for the support.

It's very strange anyway, because I don't understand which quota it is.

Thanks again

The only thing Google documents is a 10TB download and a 750GB upload quota per 24 hours. There is also no way to see where you are on these quotas or specifically when they reset.

Everything else is not documented so you have no way to see if you are hitting something along those lines.

I suspected the per-100-seconds request quota, because I hit 1,000 requests in 100 seconds, and maybe that is when I get the 403 error?

That's not it, right?

I need to replicate it tomorrow, then contact the Google support chat to understand. :frowning:

No, that's not related to that error.

The API quotas are documented and produce separate 403s, which are common; that is Google telling you to slow down.

You can see those in your admin console, as you posted, and they are not related to the download and upload quotas, which are not documented.
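
For reference, the two kinds of 403 carry different error reasons in the rclone log, roughly like this (the exact wording is approximate):

googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded
  -> API query quota: rclone backs off and retries
googleapi: Error 403: The download quota for this file has been exceeded, downloadQuotaExceeded
  -> per-file download quota: waiting (about 24 hours) is the only fix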


Per account? Or per file?

It's not a documented thing so it's hard to tell. It's really trial and error on how many times you can share something or download the same file without it being too much.

Correct me if I'm wrong, but I guess a way to fix that would be changing --vfs-cache-mode to full and also changing --vfs-cache-max-age to something greater than the standard 1h (see the sketch after this list)? Of course, this will bring two more issues for @RobertusIT:

  • The files will only be available once they are downloaded in full, which is not a big deal for small files.
  • Enough local storage has to be available for the files being written to disk.
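
Something like this, reusing the mount from the first post (the 24h and 100G values are only examples to illustrate the idea):

rclone mount MY_DRIVE:/ /home/upload \
  --allow-other \
  --vfs-cache-mode full \
  --vfs-cache-max-age 24h \
  --vfs-cache-max-size 100G

--vfs-cache-max-size keeps the local cache from growing without bound.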

Is there any other way for rclone to write to disk besides in full mode?

There is the cache backend...

Can you expand a bit on this? I thought that when using --vfs-cache-mode, only full mode would give you reads and writes on disk?

Please check out:

https://rclone.org/cache/
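
In short: the cache backend is a separate remote that wraps the drive remote and keeps downloaded chunks on local disk, so repeated reads are served from the cache instead of hitting Google again. A rough sketch of the rclone.conf section (the remote name and sizes are only examples):

[gcache]
type = cache
remote = MY_DRIVE:
chunk_size = 10M
chunk_total_size = 10G
info_age = 24h

You then mount the wrapper instead of the drive remote:

rclone mount gcache:/ /home/upload --allow-other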

Thanks!
Yes, sorry, I'm also using it and wasn't thinking about it this way. A question on this: can we use the cache's parameters (i.e. --cache-chunk-total-size) when mounting an encrypted remote? (Gdrive > Cache > Crypt > Mount)

Yes, that's documented in the link I shared above.

This part:

https://rclone.org/cache/#cache-and-crypt
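
For that layout the docs put crypt on top of cache, i.e. the crypt remote wraps the cache remote, which wraps the drive remote. A sketch of the three rclone.conf sections (the remote names and the "encrypted" folder are placeholders; token and password lines, set via "rclone config", are omitted):

[gdrive]
type = drive

[gcache]
type = cache
remote = gdrive:/encrypted
chunk_total_size = 10G

[gcrypt]
type = crypt
remote = gcache:

Then you mount the top of the chain, and the --cache-chunk-* flags apply to the gcache layer underneath:

rclone mount gcrypt:/ /home/upload --allow-other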
