Hitting Google API download limit past two days

What is the problem you are having with rclone?

I'd like to preface this by saying my knowledge of rclone and Linux is pretty basic; everything I've done so far to set up my server has come from following guides and forum solutions to problems. The past couple of nights I've received download API bans from Google with minimal use, and I'm not sure what's causing them or how to rectify it.

What is your rclone version (output from rclone version)


Which OS you are using and how many bits (eg Windows 7, 64 bit)

Personal: Windows 10, 64-bit
VPS: Linux, Gentoo (I think?)

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount gcache: ~/mnt/gdrive &

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

2020/01/18 23:56:59 DEBUG : rclone: Version "v1.49.1" starting with parameters ["rclone" "-vv" "mount" "gcache:" "/home/sneaksdota/mnt/gdrive"]
2020/01/18 23:56:59 DEBUG : Using config file from "/home/sneaksdota/.config/rclone/rclone.conf"
2020/01/18 23:57:00 DEBUG : gcache: wrapped gdrive:Media at root
2020/01/18 23:57:00 INFO : gcache: Cache DB path: /home/sneaksdota/.cache/rclone/cache-backend/gcache.db
2020/01/18 23:57:00 INFO : gcache: Cache chunk path: /home/sneaksdota/.cache/rclone/cache-backend/gcache
2020/01/18 23:57:00 INFO : gcache: Chunk Memory: true
2020/01/18 23:57:00 INFO : gcache: Chunk Size: 10M
2020/01/18 23:57:00 INFO : gcache: Chunk Total Size: 10G
2020/01/18 23:57:00 INFO : gcache: Chunk Clean Interval: 1m0s
2020/01/18 23:57:00 INFO : gcache: Workers: 4
2020/01/18 23:57:00 INFO : gcache: File Age: 1d
2020/01/18 23:57:00 DEBUG : Adding path "cache/expire" to remote control registry
2020/01/18 23:57:00 DEBUG : Adding path "cache/stats" to remote control registry
2020/01/18 23:57:00 DEBUG : Adding path "cache/fetch" to remote control registry
2020/01/18 23:57:00 DEBUG : Cache remote gcache:: Mounting on "/home/sneaksdota/mnt/gdrive"
2020/01/18 23:57:00 DEBUG : Cache remote gcache:: subscribing to ChangeNotify
2020/01/18 23:57:00 DEBUG : Adding path "vfs/forget" to remote control registry
2020/01/18 23:57:00 DEBUG : Adding path "vfs/refresh" to remote control registry
2020/01/18 23:57:00 DEBUG : Adding path "vfs/poll-interval" to remote control registry
2020/01/18 23:57:00 DEBUG : : Root:
2020/01/18 23:57:00 DEBUG : : >Root: node=/, err=
2020/01/18 23:58:00 DEBUG : Cache remote gcache:: starting cleanup
2020/01/18 23:58:00 DEBUG : Google drive root 'Media': Checking for changes on remote

I really appreciate any and all help, I'll try my best providing whatever information is needed, etc. Thanks in advance!

What is the symptom of this? Is there an error message associated with it?

If any files are played by Plex, I just see something like "403 downloadQuotaExceeded" being spammed in PuTTY and nothing plays. I haven't run any scans, and there was only one stream active at the time the jump in error % occurred, so I'm not sure what's happening.

I've been using that mount command for the past 11 months with no issues, but if trying something different with a bunch of flags might work, I'd be willing to try that. I think the ban ended a couple of hours ago, so I have another shot. All I did to get banned the second time was refresh the metadata on a show with 5 seasons, with no active streams; then I started receiving the messages again.

Just mounted the drive with the following command, which someone recommended to me on Reddit:

rclone mount gcache: ~/mnt/gdrive --cache-db-purge --buffer-size 64M --dir-cache-time 72h --drive-chunk-size 16M --timeout 1h --vfs-cache-mode minimal --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G &
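Broken out flag by flag, that Reddit suggestion amounts to the following (the descriptions are my paraphrase of the rclone docs, not something stated in this thread):

```shell
# Flag-by-flag breakdown of the suggested mount, printed rather than
# executed, since running it needs rclone and a configured remote:
FLAG_NOTES='
--cache-db-purge                purge the cache backend DB on start
--buffer-size 64M               per-file in-memory read buffer
--dir-cache-time 72h            keep directory/attribute listings for 72h
--drive-chunk-size 16M          chunk size for uploads to Drive
--timeout 1h                    IO idle timeout
--vfs-cache-mode minimal        disk-cache only files opened for read and write
--vfs-read-chunk-size 128M      initial size of ranged read requests
--vfs-read-chunk-size-limit 1G  ranged reads may grow up to 1G
'
printf '%s\n' "$FLAG_NOTES"
```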

Files are able to be played so far, but I'm not going to try scanning any libraries for fear of just getting API banned again.

Edit: So after starting the mount and monitoring the Google API dashboard with one stream active, calls have been going as high as 6/s, which I don't recall ever seeing unless I was doing a full library scan. Am I doing something wrong? I also don't know what this new orange & blue "compute" bar is; it's showing 0 for traffic and 100% errors.

You have a pretty strange mount command going on.

Are you using the cache backend?

The buffer size isn't needed with the cache backend as it uses its own memory management, so setting that is wasteful.

Anyone else sharing your mount/API stuff? It's very strange you'd suddenly get a download issue without changing anything. Is there an old version running on a machine somewhere?

Please excuse my ignorance but what do you mean by the cache backend?

No, I've always been the only person using any of my stuff, and Whatbox specifically says not to use "allow_other", so I've never included it in my mount command:

Many guides on the Internet have said to use the "allow_other" parameter, however you should not do this. This is only intended for when Plex runs on its own user account, and on a shared system it would mean other users being able to access your mounted data. We have this module disabled and you will receive errors if trying to use it.

So unless someone somehow got access to my API credentials, I'm at a loss as to why there have been so many calls when there's barely any activity on my server. Would it be beneficial to delete the API project and recreate the gdrive remote with a new set of creds?

Here's an updated screenshot of the API dashboard. There are currently no streams going on; the last stream ended at 11:17, and Plex is showing nothing running under Status > Alerts.

type = drive
client_id = ***
client_secret = ***
scope = drive
token = ***

type = cache
remote = gdrive:/Media
chunk_size = 10M
info_age = 1d
chunk_total_size = 10G

Do you have someone syncing a lot of content? Is deep analysis or something on in Plex? It looks like something got turned on or someone is syncing something perhaps.

Having no streams and your API is getting pounded means someone is doing something.

The cache backend is what you are using:


Based on your config/mount command.

I'd probably nuke my client/secret and start over unless you can figure out what is using it.

As for allow_other, what that does is let a different user access your rclone FUSE mount. In most setups, Plex runs as the plex user and you run the server as a different user, so you need allow_other for the mounted data to be visible to that other user. On a shared seedbox, though, this would be bad and should not be used. It comes down to use case, as most guides are written for standalone systems.
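For completeness, on a standalone box the flag would be attached to the mount roughly like this (a sketch only: the path is illustrative, it also requires user_allow_other to be uncommented in /etc/fuse.conf, and again it should not be used on a shared seedbox like Whatbox):

```shell
# Sketch of a standalone-server mount where Plex runs as its own user.
# Echoed rather than executed, since it requires rclone and FUSE.
MOUNT_CMD='rclone mount gcache: /home/media/mnt/gdrive --allow-other --daemon'
echo "$MOUNT_CMD"
```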

I don't believe anyone is syncing; I just share it with family and a few friends, and I don't have the sync option checked on any of their accounts. Is deep analysis under 'Scheduled Tasks'? Is it possible that Plex was running any of these background tasks when I mounted the drive again? I also had my server offline for the past day and only turned it on just prior to mounting the drive again.

What do you recommend using as my mount command then? Should I just revert to what I've been using, "rclone mount --cache-db-purge gcache: ~/mnt/gdrive &", or use something different?

"Perform extensive media analysis" will completely download every file you have during the maintenance window, so that's definitely something I always keep off.

My settings are documented here as I don't use the cache backend:
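I don't know the linked settings, but a cache-backend-free mount generally points straight at the plain drive remote and lets the VFS layer do ranged reads, along these lines (the flag values here are illustrative, not the documented ones; gdrive:Media is the underlying remote from the config above):

```shell
# Sketch of a mount that skips the cache backend entirely.
# Echoed rather than executed, since it requires rclone and FUSE.
VFS_MOUNT='rclone mount gdrive:Media ~/mnt/gdrive --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --daemon'
echo "$VFS_MOUNT"
```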

Hmm, I just unchecked that. Do you think it's possible those high calls were from Plex running stuff in the background, since I had just booted the server up and mounted the drive, in conjunction with the one active stream? Or do you think it still wouldn't be that high?

For the past ~20 minutes or so, I think the traffic is probably back to how it should be.

If you are sure no one else is sharing anything, yes, that is probably the case.

Now, I'm just worried about scanning my libraries and whether or not I'll be snubbed with another ban

Unfortunately, there is no way to check the download or upload quotas. It really isn't a ban; you have just consumed a quota for a 24-hour period, and it will reset.

It does depend on how much you blow past it, as sometimes it takes longer; but since it's not documented, nor can you check it, there isn't much to do but wait.

Hmm... do you have any idea how Plex handles scanning in/analyzing media from Google Drive? Does it "download" the entire file or something? Would a show that's over 1TB in size make me hit the download limit for the day?

A normal scan only downloads partial chunks of each file, so using rclone is fine. I have 100TB+ on my Google Drive and it works quite well.
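As a rough back-of-envelope for the 1TB question above (every number here is an assumption for illustration; actual per-file reads during a scan vary):

```shell
# Illustrative only: a scan reads small ranges of each file, not whole files.
episodes=100                 # assume the 1 TB show has ~100 episodes
scan_mb_per_episode=10       # assume ~10 MB of ranged reads per episode
scan_mb=$((episodes * scan_mb_per_episode))
full_mb=$((1024 * 1024))     # downloading the whole 1 TB show outright
echo "scan reads roughly ${scan_mb} MB; a full download would be ${full_mb} MB"
```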

Backtracking to when you were talking about the odd mount command, should I just go back to the mount command that I was previously using, or use something different?

I'd use whatever worked for you before. I'm of the mindset not to change things that were working, assuming deep analysis being turned on was the root cause.

Do I need to do anything differently, or delete the cache-backend files, if I go back to the normal mount command? I don't know if buffer size or any of those other flags would mess with things.

I'm hoping that having that box checked is what caused the lockout in the first place. I was in the middle of switching over to my backup when I realized the root path for all the files didn't match my main drive, so I cancelled the scan. My whole M folder for TV shows is going to need to be rescanned with the main drive to fix the path back; I'm just hoping I don't exceed the download quota when it has 125 shows in the folder.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.