403 error messages with cached remote

Tonight I discovered that none of my media would play back, and after browsing the error log I found several 403 quota exceeded messages. I have never had this issue since switching to a cache remote. Did anything change here?

So that means you are requesting too many API hits. What are your settings? Are you using your own key?

This is my configuration:

[gdrive]
type = drive
token = XXX
team_drive =

[gdrive_crypt]
type = crypt
remote = gdrive_cache:Video/
filename_encryption = standard
password = XXX
password2 = XXX

[gdrive_cache]
type = cache
remote = gdrive:/
plex_url = http://127.0.0.1:32400
plex_username = XXX
plex_password = XXX
chunk_size = 10M
info_age = 30m
chunk_total_size = 20G
plex_token = XXX

The setup using a cached remote had been running without bans for several months, so I am curious why this started happening right now.

Are you using your own API key or the default rclone?

403s aren’t bans. It means you are requesting too many API hits so it’s throttling you down a little.

Thanks, I thought it was different.

I am using the default keys.
I tried to create my own client_id and client_secret; however, the process looks very different from the one described in the rclone.org Google Drive documentation, and I am not sure how to avoid screwing it up.
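For reference, once a client ID and secret have been created in the Google developer console, wiring them in is just two extra keys in the drive section of rclone.conf. A sketch with placeholder values (the client_id and client_secret strings here are made up):

```ini
[gdrive]
type = drive
# both values come from the Google API console; placeholders shown
client_id = 123456789-abcdefg.apps.googleusercontent.com
client_secret = XXXXXXXXXXXX
token = XXX
team_drive =
```

After adding the two keys, the remote normally has to be re-authorized so that a new token is issued against your own credentials.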

If someone would like to send me a pull request with the updated instructions I’d be grateful :smile:

Well, it looks like it was actually my fault: my screen reading application didn’t show “Create Client ID” as a clickable link, so I was using the API creation assistant instead, which asked questions not covered in the rclone docs. Sorry for that.

However, after providing my own client_id and client_secret in the rclone config file, I am still unable to read files from my remote, as I am still getting

Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded

Can this be reset manually, or does it just take its time?

Sorry, it’s early and I updated my post; your config looks correct. I thought it was backwards at first read.

You just have to wait it out. It’s usually 24 hours or sometimes less. I don’t understand how you got the ban though as that doesn’t make sense.

@Animosity022 I think you misread the docs. Quoting them:

There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache

Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt
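In rclone.conf terms, cloud remote -> cache -> crypt means the crypt remote wraps the cache remote, which wraps the drive remote. A minimal sketch with placeholder values:

```ini
[gdrive]
type = drive
token = XXX

[gdrive_cache]
type = cache
remote = gdrive:/

[gdrive_crypt]
type = crypt
remote = gdrive_cache:Video/
password = XXX
```

The config posted earlier in this thread already follows this order, so the wrapping itself should be fine.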

@BlindFish222 From what I have read, the quota gets reset at midnight pacific time.

Daily quotas reset at midnight Pacific Time (PT).

Sorry, it’s early; I’ll update my post, as I just typed it in wrong in the forum.

@darthShadow: Thanks. So I should be fine with my wrapping order, as I am doing cloud -> cache -> crypt?

It looks like creating a new pair of credentials takes some time to take effect in my rclone config, because when I mount my drive right after creation it appears to be empty.

There isn’t any delay for the items to appear. Everything is built when accessed, but if you’ve got the ban going on, you have to wait. You should see errors in the logs if that’s the case, though.

It gives me an Unauthorized error, so I thought it would need some time for the credentials to start working:

Failed to get StartPageToken: Get https://www.googleapis.com/drive/v3/changes/startPageToken?alt=json&prettyPrint=false&supportsTeamDrives=false: oauth2: cannot fetch token: 401 Unauthorized

It’s been some time since I made my own API key but I do believe you are correct and it may take a few minutes to sync up.

It’s usually not that long though from my memory.

I refreshed my token, which brought back the contents of my drive, and the error message disappeared.
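For anyone hitting the same 401 after swapping in their own credentials: the token can be refreshed by re-running `rclone config` and re-authorizing the remote, and newer rclone versions also have a shortcut for this (remote name as in the config above):

```shell
# re-runs the OAuth authorization flow and replaces the stored token
rclone config reconnect gdrive:
```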

But I am still unable to read from the remote. The reset should have been about 4 hours ago, so I am not sure whether I’ve done everything correctly. Well, let’s wait a little longer and see whether it starts working properly again.

I really appreciate all the help from you guys!

Well, I need to ask for some thoughts again on what’s going wrong with my setup here.

After not being able to read from my GD remote the whole day, reading became possible again in the morning, although that was only about 10 PM PST; I am not absolutely sure about the time, though.

I restarted my Plex server and saw that most of my TV show library content was marked as missing, but it was still possible to read and copy the affected files from the command line.

I tried to rescan the Plex library section, but it turned into a never-ending process and the missing content didn’t reappear as available.

The rclone log file started to show lots of 403 error messages again. However, I am still able to read folders containing files that are not part of any Plex library, such as photos. The read speed is incredibly low (about 2.5 MB/s), but maybe that is because of the many small files in these folders.

So what can I do to get back access to my files? I’ve never run into 403 issues since I introduced the cache remote when it was announced. The only things I’ve changed within the last few days are adding

--drive-v2-download-min-size 0
--fast-list

to my mount command, and generating my own client_id and client_secret.
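For context, the mount invocation being described would look roughly like this (the mount point is illustrative, the remote name matches the config above, and the two flags are the recent additions):

```shell
rclone mount gdrive_crypt: /mnt/media \
  --drive-v2-download-min-size 0 \
  --fast-list
```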

I ran into the same situation as you. After running rclone for months without problems, I have also been getting 403 errors over the last few days. I also added “--drive-v2-download-min-size 0”, so I think the issue is related to this parameter.

I added this switch to solve another issue discussed here.

The --drive-v2-download-min-size flag should only be added if you need transfer speeds over 25 MB/s. Otherwise it slows things down, as this flag adds one extra API call for every file that is opened via the v2 API.

It is also possible that Google has a more restrictive download quota when using the v2 API.

Adding the --drive-v2-download-min-size switch was recommended after my rates had dropped to somewhere between 2 and 5 MB/s, which made the whole thing useless for streaming.

As you explain it, it may not have been meant to solve the issue as a whole. Right now, I can’t do any further investigation, as the 403s are still there.