Gsuite 403 downloadQuotaExceeded

Thanks for the information. Throwing some stuff out here: I just did a library scan in Plex with 250M of data being received on the machine. In Google’s audit log I see 97 occurrences of “downloaded an item” with a chunk size of 50M. Is that normal, or is that a symptom of the issue? Because that looks like Google thinks 4.8G is being downloaded, each file 5 or 6 times according to the audit log, while only 250M ends up on my machine.
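A quick sanity check of the arithmetic above, using the event count and chunk size reported by the audit log:

```shell
# 97 "downloaded an item" audit events at 50 MB per chunk,
# as counted in Google's audit log above.
EVENTS=97
CHUNK_MB=50
echo "$(( EVENTS * CHUNK_MB )) MB"   # 4850 MB, i.e. roughly 4.8 GB
```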

Another observation: no ~/.cache/rclone directory is being created on my production machine, where it was on my test machine. Looking into whether that’s user error, but that could also explain re-downloads?

And finally: sorry if I sound stupid. Some of this is over my head, but I’m trying to help. Meanwhile, I keep my mount running with logging, so if a 403 downloadQuotaExceeded happens it should be visible in there.

I am going on 5 days (3 of which the server has been shut off) and I am still getting download quota errors when I try to download a file directly from GDrive. Am I missing something?

Yes, we would want the debug logs of before the error happened. If you can share the debug log of what you tested, that would be helpful too as I do not think you should see 4.8GB of data downloaded.

Where are you pulling that download log from on Google?

If you aren’t using rclone, something else has access, as it normally resolves itself in 24 hours to the best of my knowledge. You can always check with GDrive support too.

Here are the latest errors

"error_description" : "Token has been expired or revoked."
}
2018/10/22 20:29:52 ERROR : /: Dir.Stat error: couldn't list directory: Get https://www.googleapis.com/drive/v3/files?alt=json&fields=files(id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2Ccre$
Response: {
"error" : "invalid_grant",
"error_description" : "Token has been expired or revoked."
}
2018/10/22 20:29:52 ERROR : /: Dir.Stat error: couldn't list directory: Get https://www.googleapis.com/drive/v3/files?alt=json&fields=files(id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2Ccre$
Response: {
"error" : "invalid_grant",
"error_description" : "Token has been expired or revoked."
}
2018/10/22 20:29:52 ERROR : /: Dir.Stat error: couldn't list directory: Get https://www.googleapis.com/drive/v3/files?alt=json&fields=files(id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2Ccre$
Response: {
"error" : "invalid_grant",
"error_description" : "Token has been expired or revoked."
}
2018/10/22 20:29:53 ERROR : /: Dir.Stat error: couldn't list directory: Get https://www.googleapis.com/drive/v3/files?alt=json&fields=files(id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2Ccre$
Response: {
"error" : "invalid_grant",
"error_description" : "Token has been expired or revoked."
}
2018/10/22 20:29:53 ERROR : /: Dir.Stat error: couldn't list directory: Get https://www.googleapis.com/drive/v3/files?alt=json&fields=files(id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2Ccre$
Response: {
"error" : "invalid_grant",
"error_description" : "Token has been expired or revoked."
}
2018/10/22 20:29:53 ERROR : /: Dir.Stat error: couldn't list directory: Get https://www.googleapis.com/drive/v3/files?alt=json&fields=files(id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2Ccre$
Response: {
"error" : "invalid_grant",
"error_description" : "Token has been expired or revoked."
}
2018/10/22 20:29:53 ERROR : /: Dir.Stat error: couldn't list directory: Get https://www.googleapis.com/drive/v3/files?alt=json&fields=files(id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2Ccre$
Response: {
"error" : "invalid_grant",
"error_description" : "Token has been expired or revoked."
}
2018/10/22 20:29:53 ERROR : Opening listener: listen tcp 127.0.0.1:5572: bind: address already in use
2018/10/22 20:29:53 NOTICE: Serving remote control on http://localhost:5572/

That’s not related to the 403s. That means your config/API key is incorrect and needs to be set up.

Got it. Thanks

Fixed that and now just have to wait and see what happens with access to Gdrive. I’ll post logs as soon as I am back to being able to read the files.

Before diving any deeper I’d like to get one thing straight, because I may be missing the point here :slight_smile:

When requesting a ~2.4GB file at a chunk size of 32MB, am I supposed to see 74 occurrences of “Peter downloaded an item” in the audit log (once for each chunk: 74 * 32MB = 2,368MB)? I suspect this means Google thinks I downloaded the whole file 74 times. Or is that not how it works? Or is it, but it isn’t what leads to the download quota being exceeded?
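The chunk count in that question checks out, using the file and chunk sizes given above:

```shell
# 2,368 MiB file read in 32 MiB chunks; round up any partial final chunk.
FILE_MIB=2368
CHUNK_MIB=32
CHUNKS=$(( (FILE_MIB + CHUNK_MIB - 1) / CHUNK_MIB ))
echo "$CHUNKS chunks"   # 74 chunks
```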

Here’s the log as exported from Drive Audit (CSV), showing the 74 downloaded items: https://pastebin.com/piCUApSr This all happened when doing a single cp /mnt/media/file.mkv ~/ (/mnt/media is a mounted cache).

Ok, that audit log.

No, that’s perfectly fine. It’s downloading chunks of the file properly so you see a request for each chunk.
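To make the “one audit event per chunk” behaviour concrete, here is a toy loop where each pass stands in for one ranged GET; the file and chunk sizes are tiny placeholders, not real Drive values:

```shell
# Simulate chunked reads of a 10-byte file in 4-byte chunks.
printf '0123456789' > /tmp/chunk-demo.bin
CHUNK=4
SIZE=$(wc -c < /tmp/chunk-demo.bin)
REQUESTS=0
for (( off = 0; off < SIZE; off += CHUNK )); do
  # One ranged read: in a real mount this is one HTTP request,
  # and one "downloaded an item" entry in the audit log.
  dd if=/tmp/chunk-demo.bin bs=1 skip="$off" count="$CHUNK" of=/dev/null 2>/dev/null
  REQUESTS=$(( REQUESTS + 1 ))
done
echo "$REQUESTS requests"   # 3 requests: ceil(10 / 4)
```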

I can confirm I see the same thing in my logs and have never had a download quota issue.

I experienced my first 403 last weekend. Unfortunately no logs, but I was in the process of trying the v2-gdrive flag. Not sure if it has anything to do with this. Removed the flag for the time being.

No excessive usage though… I find it rather curious why this is happening all of a sudden.

Finally got back access to google drive and using the admin reports on the Google admin page to see what is happening.

With that said, what is the easiest way to view live logs when you mount and restart the Plex server?

Not sure the admin logs would show you anything helpful. You need to run the mount with debug logging and reproduce the issue.

Should I add it to my scripts or just run a generic mount?
This is my current mount script

MOUNT

#!/bin/bash

RCLONECACHE=$HOME/.cache/rclone
RCLONEHOME=$HOME/.config/rclone
MOUNTTO=$HOME/media/Plex
LOGS=$HOME/tmp/logs
UPLOADS=$HOME/torrents/uploads

if mount | grep -q "${MOUNTTO}"; then
  echo "Gdrive mounted"
else
  echo "Mounting Gdrive..."
  mkdir -p ${MOUNTTO}
  /bin/fusermount -uz ${MOUNTTO} > /dev/null 2>&1
  $HOME/bin/rclone mount \
    --rc \
    --log-file ${LOGS}/rclone.log \
    --umask 022 \
    --allow-other \
    --allow-non-empty \
    --fuse-flag sync_read \
    --tpslimit 10 \
    --tpslimit-burst 10 \
    --dir-cache-time=48h \
    --buffer-size=64M \
    --vfs-read-chunk-size=32M \
    --vfs-read-chunk-size-limit=2G \
    --vfs-cache-max-age=5m \
    --vfs-cache-mode=writes \
    --attr-timeout=1s \
    --cache-dir ${UPLOADS} \
    --config ${RCLONEHOME}/rclone.conf \

Up to you. Just add in -vv or --log-level DEBUG.

Thanks for that. Just so I am clear: I can add either of those anywhere in that script? And then when I mount using the command to run that script, will it start showing what’s going on?

Last stupid question (hopefully): when adding your own credentials into rclone, do you keep the ".apps.googleusercontent.com" part of the Client ID?

Yes, you can add -vv anywhere. It will write to the file specified in --log-file. You can open that with a text editor or, depending on your OS, follow along in a terminal with tail -f filename.log.
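A minimal illustration of reading the log that way; the path here is a throwaway demo file, so substitute whatever you pass to --log-file:

```shell
# Write one fake log entry, then read the tail of the file.
LOG=/tmp/rclone-demo.log
echo '2018/10/22 20:29:52 DEBUG : vfs: demo entry' > "$LOG"
tail -n 5 "$LOG"     # show the last lines; tail -f "$LOG" follows live
```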

And yes, use the client ID as is, including the googleusercontent part :slight_smile:
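For reference, the relevant section of rclone.conf looks roughly like this; the remote name "gdrive" and all values below are placeholders, not real credentials:

```
[gdrive]
type = drive
client_id = 123456789-abcdef.apps.googleusercontent.com
client_secret = your-client-secret
scope = drive
```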

Thanks Peter. I’ll give that a try and see what’s doing.

I went ahead and added the DEBUG flag and my own credentials, and I THINK everything is working smoothly.

Here are my logs. Does anything stand out as a red flag?

https://pastebin.com/4eiaEcvc

The trick is this: if the 403 quota errors pop up again, copy up the full log so we can see what happened before the 403.

Looking at that log, it looks normal to me. I don’t see anything out of the ordinary.