Receiving Error 403 downloadQuotaExceeded from gdrive

What is the problem you are having with rclone?

I think I somehow hit the download quota for my Google Drive; however, there have rarely been any streams going on in the past 24 hours and only one library scan, with a total of only 17k API hits in the Google APIs console, so I'm not sure why it happened in the first place.

What is your rclone version (output from rclone version)

v1.51.0

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Gentoo 2.6

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --cache-db-purge gcache: ~/mnt/gdrive --log-file=/mnt/mpathl/sneaksdota/rclone.log --log-level DEBUG &

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

Log file is around 6GB, I can pastebin lines from near where the Error 403 occurred.

EDIT: Yes, it would definitely help if you can post the specific error message. Just "403" is not exact, as 403 is used for several error types on Gdrive (due to Google's design of the API - not rclone).

403 can mean a few different things on Gdrive.

Either it can be an API rejection (if you send more than 1,000 API calls in 100 seconds).
It is normal to see a few of this type of 403 at random, and rclone will automatically adjust so it doesn't hammer the API unnecessarily while still using as much of it as it can to do the job.
This error will not persist for more than 100 seconds (unless you keep flooding the API somehow, of course), so it is usually not the one users actually have much trouble with. The API quota is also fairly generous and it is rare to max it out.
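If you do see a lot of them, you can also cap how fast rclone talks to the API. A minimal sketch (the numbers are illustrative, not an official recommendation) - 8 transactions per second works out to roughly 800 calls per 100 seconds, comfortably under the quota:

  # throttle API transactions on the mount (values are just an example)
  rclone mount gcache: ~/mnt/gdrive --tpslimit 8 --tpslimit-burst 8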

Another type of 403 (with a slightly different message) will indicate you hit some sort of upload or download limit.

Download limit is 10TB/day.
Upload limit is 750GB/day (+ another 750GB/day for server-side transfers)
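If the upload side is ever the concern, a common safeguard is to throttle rclone's bandwidth so that even a full day of transferring stays under the cap. A rough sketch (paths are just placeholders; 8M here means MiB/s, which works out to roughly 700-725GB over 24 hours - the exact value is your call):

  # keep a full day of uploading under the 750GB/day limit
  rclone move /local/media gdrive:/Media --bwlimit 8M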

There is also a restriction that can trigger for downloads if a specific file gets requested a whole bunch of times in a short time-span (this is primarily a safeguard against Gdrive shared files being used for large-scale mass-distribution). I don't have exact data on when it triggers, but I think we are talking about at least a few hundred requests within the span of a few hours.

From my experience the latter can sometimes be an issue - not really for actually using the files, but rather if software is badly configured or misbehaves such that it ends up requesting the same files over and over again.

Are you using Plex? Because some of the scanning it uses is known to be pretty aggressive (designed for local disks, not cloud storage). Some of the scans should definitely not be run automatically. Ask Animosity to clue you in on what appropriate scan settings look like - or see if you can find it in his big "recommended settings" thread; I believe it is mentioned somewhere in there. There are of course many other Plex users on this forum you can ask too.

I am not a daily user of Plex so I can't really tell you exactly what these settings are called etc.
Although if you posted me screenshots of the 2 relevant settings screens I could probably point out which scans are most likely to cause problems for you.

Apologies, I should've included more information regarding the error. The error from the logs is 'Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded'. There were two active streams when this happened, so perhaps something happened where it kept trying to play those files repeatedly?

Here are screenshots from the Google APIs Console for the past day and for the 1 hour window from when I noticed the issue happening:
Past 24 Hours:


Past 1 Hour:

I'll also include screenshots of my Library & Scheduled Tasks settings. I know I turned off deep analysis a while back following Animosity's advice so something like this wouldn't happen.
Library:


Scheduled Tasks:

Edit: I don't have a pastebin pro account so it seems I can't really paste the debug lines, but I did save a portion of the logfile from around where the problem started happening; it's 21MB instead of ~6GB, if someone wanted to take a look at it.

I think this makes it pretty clear that we are talking about that third type of 403 - the one that triggers when specific files get requested a ton in a relatively short time-frame. (This should reset either in a few hours or the next day at worst.)

What that tells me is that whether or not Plex is the root cause, we are very likely looking at misbehaving software. When I use the term "misbehaving" here I don't necessarily mean it's bugged or anything. It can just as easily be software that was designed for hard drives and isn't well optimized for network access, much less cloud access. For example, some programs open and close files very frequently to do small operations. A strategy like that works fine on a local disk, but it is a nightmare for a cloud interface due to latency and interface restrictions.

Is there a configuration screen missing here from Plex? I feel like there is important stuff missing that I remember from the last time I looked at these. Where is the deep analysis option, for example? I don't see it in these screencaps. (And yes, that one is one you should run very sparingly, and manually, if at all.)

Is the logfile a debug-log? We probably need fairly low-level logging to see something useful from it in this case - and unfortunately it's unlikely to do more than just confirm what the problem-type is, because rclone has no way of telling which application requested something (those requests all come from the OS).

Try zipping/raring the file (text compresses down to just a few % of original size) and uploading that to some file-sharing website before posting a link. (be aware that such logs can leak information about your filenames, in case you are very privacy-focused)
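For example (assuming the log file is called rclone.log):

  # -9 = maximum compression; a multi-GB text log typically shrinks to a few percent
  zip -9 rclone-log.zip rclone.log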

In the Google API page - try clicking on the "Google Drive API" item in the list at the bottom.
This will give you a more detailed breakdown of requests. The relevant request type here would be "drive.files.get" and that at least lets you know how often files were requested in general over a certain timeframe.

For a more thorough breakdown than that (specifying exact filenames) you need to be the admin of a Gsuite group and have access to the admin console. There you can enable something called the "Drive audit log", which will basically list every request it gets. This should make it fairly clear what file(s) triggered the limit at least - but unfortunately it still doesn't tell you why, or even what app the requests came from (though it might be obvious from the filenames themselves).

TLDR:

  • This is from extremely frequent access to a specific file or files
  • In my experience it is almost certainly coming from a specific app
  • In Plex, it is very likely connected to scan settings, based on similar problems I have seen before (but I am no expert on Plex by any stretch).

I would start by double-checking these relevant settings and talking to some more experienced Plex users who also have high technical expertise - there are loads of them here on the forum, as Plex is quite a common use-case.

It's usually when you have an old version of rclone in the mix or are doing quite a number of file gets, as a regular scan doesn't cause the issue.

I don't use the cache backend as it's always been a little buggy and never quite worked well for me (others have had good luck).
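For reference, a mount without the cache backend looks something along these lines - just a sketch, the flag values are illustrative and not a drop-in recommendation (the recommended settings thread has real ones):

  rclone mount gdrive: ~/mnt/gdrive \
    --dir-cache-time 72h \
    --vfs-read-chunk-size 32M \
    --vfs-read-chunk-size-limit off \
    --buffer-size 64M \
    --log-level INFO --log-file /path/to/rclone.log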

Are you sharing with anyone else? Is it a team drive?

Did you really mean Gentoo 2.6? That seems really old. What does uname -a show?

If you look at the log, you may be able to grep and wc the contents and see if it was doing anything odd for that particular file. Assuming you don't have another rclone also in the mix somewhere to complicate it.

Just using a normal gsuite, no sharing with anyone, and I ran the command and it's showing gentoo 4.12.132

I uploaded a portion of the log file to Dropbox. From skimming it, it seems the issue happened when Plex was playing Narcos S01E10. I'm not sure if it happened because the mount started to fail or something else, but the API creds are only used for Plex and I don't mount my sonarr/radarr. Like I said, I have automatic scans disabled and extensive media analysis turned off, so unless that Narcos episode somehow consumed 8TB+ of bandwidth there was barely any activity on Plex in the past 24 hours. The only other thing I use is Drive File Stream to organize files I transfer into their folders to be scanned in by Plex - does the constant syncing that DFS does by chance count against my daily download bandwidth?

Are there any ways to safeguard against a file consuming huge amounts of bandwidth when it's trying to repeatedly play? Or do I just have to just wait the 24 hour lockout whenever this happens?

The odd part of that small log is you open the file 54 times when playing it, which is strange.

 grep "lags=O_RDONLY" rclone.log | grep Narcos | wc -l
      54

Since that's only a portion of the log, it might be related to hitting the file thousands of times if you had it going for a bit? Hard to tell as I don't use the cache backend and it's a snippet of the log.

While there is a 10TB download quota, I don't think you hit that either. There are other undocumented quotas on how many times you can download a file or hit a file, which is probably the case.

That is weird indeed; the person playing the file was one of my users, but 54 times seems excessive. I did try remounting my drive later that night and I was personally unable to play any other files from my server - I was receiving downloadQuotaExceeded errors on anything I tried.

Do you happen to know whether or not Google Drive File Stream counts toward download bandwidth when it's syncing files? There have been times where my DFS will continuously act like it's trying to sync, and I hope that doesn't count as me trying to download files from my gdrive even though nothing is actually being downloaded locally.

If you wanted the full 6GB logfile to see if you could find some other cause for the download limit, I could upload it to Dropbox for you.

I agree on this - and I also think the general quota has a different message.
This looks like the "too many downloads of one file in a short period" 403.

I have never had this particular 403 myself, but I have heard from other people I have helped that it can apparently lock down ALL downloads until it resets - not just that file. This sounds weird to me, but sometimes Google makes weird rules...

Is there a chance that this file could have been played via some external media-player somehow? (I am not an expert on how Plex works.) There is a well known issue with some media players (on at least some formats) that struggle to stream from a Gdrive because they use a read-method that opens the file, reads only a little bit, closes it, then reopens it again, and so on. This not only leads to horribly stuttery playback but also massive amounts of requests for the same file. Even if only tiny sections are read each time, it may easily register as a mass-leech if the Google servers see thousands of accesses to the same file in a short period.

Generally Plex should be capable of streaming it fine, but if this was a user outside your control that might have loaded the stream in some other mediaplayer or something (is that possible Animosity?) then it could easily explain the problem.
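If you want to check the debug log for that open/close pattern, you can count how many times each file got opened. A rough sketch (this assumes the usual "DEBUG : <file>: Open: flags=O_RDONLY" line layout and filenames without colons, so treat it as a starting point):

  # count read-only opens per file and list the most-opened ones
  grep "flags=O_RDONLY" rclone.log | sed 's/.*DEBUG : //; s/: Open.*//' | sort | uniq -c | sort -rn | head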

I doubt this is the problem. I don't think GDS syncs anything, at least by default. What it does is basically indexing and listing any changes. This is a trivial data-size, and besides, listing data would not count against your quota - it just requires some API calls. I am of course making some assumptions here about how GDS works because I obviously don't know the internal logic, but I think the assumptions are fairly safe to make.

zip or rar it with max compression and it will probably end up at something like 4-5% of the original size, as text compresses absurdly well. That should make it a lot more manageable to share. (But dear god - I'm not sure I want to try to skim something of that size...)

I don't think anyone would be playing any files other than through Plex. I only have it shared with close family & friends, and their knowledge of pretty much anything tech related is just installing the Plex app on whatever device they use, i.e. Firesticks/Fire TV Cube/Roku/Plex Windows App.

I just woke up and started seeing 403 downloadQuotaExceeded errors and also saw some 403 User Rate Limit Exceeded errors as well.

It appears it's not a full 24hr lockout this time - I've been able to play a file for the past 20-30 minutes or so, whereas when this happened last time I was unable to get anything started even hours after I got the errors.

I just have no idea why this is happening, and if there's anything I could do to stop it from happening. Any help trying to diagnose this would be greatly appreciated.

After looking around it seems there were issues with some Google services this morning around 8-9am CST, probably as a result of more people working from home and schools switching to online.

I'm guessing this could've caused the errors I got this morning, because when I first checked the API console it was showing a 40-50% error rate from around 8:30am-12pm, but now when I look at it those errors have since disappeared from the graphs.

This specific error comes from the API when it is getting too many requests too fast - but it is definitely not critical. In fact it is normal to get some of these, especially if you just see them as retry 1/10 or 2/10. Rclone just uses these as "hey, slow down a bit" indicators and will adjust its rate of requests automatically on most remotes. If you see way more retries than that, then something is hammering the API badly and something may be wrong. You get 1,000 requests per 100 seconds, so this error will not persist very long at all - not unless something is going haywire in the application software using rclone.
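If you want to tell at a glance which 403 you are actually dealing with, a quick-and-dirty check is just to count the distinct messages in the log (the exact wording can vary a bit between rclone versions, so treat these strings as examples):

  grep -c "downloadQuotaExceeded" rclone.log    # the per-file / download quota 403
  grep -c "Rate Limit Exceeded" rclone.log      # the requests-per-100-seconds 403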

This is the same issue as before almost certainly (or otherwise it would have to be the general 10TB download quota, which seems very unlikely in your case).

Can you identify from the Plex, rclone and Google Gdrive metrics a specific client that is causing the issue? Same one as before - or someone else? The thing is that there is really no way to stop an application from requesting rapid open/close/open on the same file. rclone just does what it is asked to do; it is an interface to files, not a gatekeeper/security system. Therefore the problem almost certainly needs to be fixed in the software originating these requests.

I have been made aware (in another thread) that a Plex database for series-lookups has been down for a few days, causing errors in scans, but I don't think this could be related in any way to your issue. Just mentioning it since I heard. I also think it was resolved a day or two ago.

I hate to reopen this issue again, but I just woke up to see that my gsuite appears to have hit the download limit again, and I still don't know why this is happening. Is there a possibility that my rclone remote/cache settings could be causing this? When I first set everything up I just used a guide made by bytesized to get everything running, but I don't know if the settings they used are the most optimal.

As I've said before, my server activity is not very heavy at all; at most there might be a few nights where there are 4-5 streams for a few hours, but most of the day my server is unplayed or might have a stream active here and there, so I don't believe I should be hitting anywhere near the 10TB download quota.

I typically upload around 400GB/day of TV/Movies and I'll run 1-2 scans a day. Here are the cache settings from my rclone.conf file as well. I'm on a 450GB SSD, which still has over 350GB free, so I don't know if I should be increasing the size of my cache, as well as the age, since my server is nearing 400TB of files.
[gcache]
type = cache
remote = gdrive:/Media
chunk_size = 10M
info_age = 1d
chunk_total_size = 10G

I zipped up the debug logs from the past ~24 hours, though I don't know whether or not that's going to help. I hope I can figure out why this is happening, because if it's happening multiple times a month there has to be an issue with my setup somewhere, I'm assuming.

I highly doubt this is the reason, but I've noticed that every time this has happened has been when I've moved a lot of files around within my gdrive through Google File Stream. There are plenty of times where, instead of fully syncing in Windows Explorer, GFS will continuously say it's syncing files until the explorer window is closed, and there have been times that I've left my PC with the explorer window still open and GFS syncing continuously. I don't think that's an issue, but it's one of the things I've noticed whenever this happens.

As always, thanks in advance and I really appreciate any help toward figuring out my issue and what I'm doing wrong.

So after speaking with Gsuite support, the guy I spoke to told me that files moved/renamed through Google File Stream do count toward the daily limits, and he recommended closing Windows Explorer or GFS completely when not using it, to conserve bandwidth.

I brought up that, for whatever reason, if I'm moving files through explorer my GFS won't fully finish syncing everything; it'll show that 2-3 files still need to sync, and it will keep doing that nonstop, even repeating files that it already synced in the folder, and only when I close the explorer window does GFS actually give me the "Everything is up to date" message. So I told him that I went to bed last night after moving files around and left explorer open, and when I woke up is when I saw that I was unable to access my files anymore; he told me that's most likely why I hit the download quota. So perhaps this is the answer to all of my download quota woes - I'll have to be more wary when using GFS to move new files around into their folders and just exit out afterward.

