Download quota exceeded - even with a new unique id?

hi,
as your problem could have a different cause,
it would be helpful to post the same information as the OP.

make sure you have made these changes:
GitHub - animosity22/homescripts: My Scripts for Plex / Emby with Google Drive and rclone

i always disable all scheduled scans and scan manually instead.
when i have new media in the mount, i run
rclone rc vfs/refresh, which will update the dir cache.
then i run a manual scan with jellyfin, which goes quickly.
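for example, something like this - just a sketch, and it assumes the mount was started with the --rc flag so it accepts remote control commands on the default rc address:

rclone rc vfs/refresh recursive=true

recursive=true walks the whole directory tree, so the first run on a big library can take a while; after that, browsing the mount is fast.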

Ok, excuse the lack of formatting:

What is your rclone version (output from rclone version)

./rclone version
rclone v1.55.0-beta.5152.b2b5b7598
os/arch: linux/amd64
go version: go1.16rc1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu Desktop 20.04 LTS

Which cloud storage system are you using? (eg Google Drive)

Google Drive (Workspace)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I guess any command triggers it once the problem starts, even a copy in the reverse direction, i.e.:

./rclone copy "GCrypt:Movies/The Legend of Hei (2019)/The Legend of Hei 2019 Unknown.mp4" C:\Users\MYACCOUNT\Desktop\Archivio -P
2021-04-11 01:20:38 ERROR : The Legend of Hei 2019 Unknown.mp4: Failed to copy: multipart copy: failed to open source: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded

As I said even a browser manual download won't work.

The rclone config contents with secrets removed.

[PS]
type = drive
client_id = XXXXXXXXXXXXXX.apps.googleusercontent.com
client_secret = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
scope = drive
token = {"access_token":"XXXXXXXXXXXXXXXXXXXXXXXXX","token_type":"Bearer","refresh_token":"XXXXXXX","expiry":"2021-04-11T04:37:23.223534+02:00"}
team_drive = XXXXXXXXXXXXXXXXXX
root_folder_id = 
[GCrypt]
type = crypt
remote = PS:mediaserver/crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxx

A log from the command with the -vv flag

Now it works; when the problem kicks in again I'll edit/reply. Btw, as far as I remember there wasn't a lot more info.

I'm using the animosity configs with a little edit now and then to debug the problem; right now it is:

rclone mount GCrypt: /GD \
        --allow-non-empty \
        --allow-other \
        --async-read=false \
        --poll-interval 15s \
        --dir-cache-time 1000h \
        --cache-dir=/cache \
        --vfs-cache-mode full \
        --vfs-cache-max-size 20G \
        --vfs-cache-max-age 336h \
        --tpslimit 5 \
        --bwlimit-file 16M \
        --stats 10m \
        --log-file $logfile \
        --stats-log-level NOTICE \
        --umask 000 \
        --user-agent nixapp

And I am using jellyfin not plex, but I used to have the same problem with plex.

quotas expire after a period of time, so perhaps that is the reason the problem went away

that config does not use --allow-non-empty

Yes, I know. Yet I am trying every possible config/limit, even enough cache to download the whole movie locally, but I can only think that each movie fragment (even more so if it is a multi-threaded download) counts as a whole download.

Maybe you need to ask the rclone developers to give you some history/summary of the API call distribution.

The issue here appears to be rate limits that Google, and many cloud vendors, impose on certain behavior. Maybe you need to ask Google for some information about what is behind the message to help out.

I have had paid Google accounts and have hit these a few times with mail. Eventually someone changed something (Microsoft or Google) and I haven't seen mine for a long time. Google rate quotas reset every day if I remember right. Sometimes all you need to do is wait until tomorrow :-)

So yes, maybe there is an rclone change that makes more API calls. But without knowing the call, and hence the request to Google, how would development figure it out? In fact, once you know the issue you can even look at the source code to get ideas for a change, wherever it might need to be made.

My $.02

You don't hit API quotas with rclone.

You hit download or upload limits. It's a very important distinction.

Thanks - good point. I was thinking about an rclone API question and that accidentally slipped in here.

I already had this problem; in my case it was that I had too much data to synchronize, so I used --tpslimit 1 for rebuilding my libraries.

I had a similar problem to OP. I was just copying a folder (7.5 TB) from google drive to unraid using “rclone copy gdrive:/path mnt/user/downloads/drive -P” which worked for about 700GB, but then got the same Error 403: The download quota for this file has been exceeded downloadQuotaExceeded.

If this isn’t an API limit issue, should I be using a different flag with the copy command? 7.5TB should be under the daily download limit to my understanding.

Should also add: v1.55.0, amd64

Exactly. I also use --bwlimit 4M and --tpslimit 1 during the complete reconstruction of the libraries; at this speed we do not exceed 700 GB per day!
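(Rough math, assuming --bwlimit 4M works out to about 4 MB/s sustained: 4 MB/s × 86,400 s/day ≈ 345 GB/day, so well under the 750 GB daily limit.)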


Oh, is the limit 750 GB per day? If so I'll use the bwlimit flag as suggested :blush:

Edit: also, to my understanding I should be able to use the copy command again (with the flag) and not have to worry about duplicates, right? It's too bad, I was enjoying these transfer speeds lol
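For reference, this is roughly what I plan to re-run - just a sketch, with --bwlimit 8M picked as a rough guess that works out to a bit under 700 GB per day (and from what I've read, copy skips files that already match, so re-running shouldn't create duplicates):

rclone copy gdrive:/path mnt/user/downloads/drive --bwlimit 8M -P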

Currently I encrypt all the contents of my drive using: rclone copy --create-empty-src-dirs --bwlimit 4M /mnt/encryption/Series Gcrypt:/Series

encryption is my workspace and Gcrypt is a shared drive that I created for myself from my workspace, so everything is done on the fly without having to worry about exceeding the limit per day :slight_smile:
