403 Error but not rate limited

I’m getting this across my rclone instance, and I’m not seeing any throttling. I’m using my own client ID and client auth. Any thoughts on what is happening, or on what I can say to Google Support when they eventually contact me?

I’ve also checked that the OAuth token is valid and not expired.

Thanks

Steve

2016/12/19 14:04:06 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2016/12/19 14:04:06 HTTP RESPONSE (req 0xc4206d61e0)
2016/12/19 14:04:06 HTTP/1.1 403 Forbidden
Access-Control-Allow-Credentials: false
Access-Control-Allow-Headers: Accept, Accept-Language, Authorization, Cache-Control, Content-Disposition, Content-Encoding, Content-Language, Content-Length, Content-MD5, Content-Range, Content-Type, Date, GData-Version, Host, If-Match, If-Modified-Since, If-None-Match, If-Unmodified-Since, Origin, OriginToken, Pragma, Range, Slug, Transfer-Encoding, Want-Digest, X-ClientDetails, X-GData-Client, X-GData-Key, X-Goog-AuthUser, X-Goog-PageId, X-Goog-Encode-Response-If-Executable, X-Goog-Correlation-Id, X-Goog-Request-Info, X-Goog-Experiments, x-goog-iam-authority-selector, x-goog-iam-authorization-token, X-Goog-Spatula, X-Goog-Upload-Command, X-Goog-Upload-Content-Disposition, X-Goog-Upload-Content-Length, X-Goog-Upload-Content-Type, X-Goog-Upload-File-Name, X-Goog-Upload-Offset, X-Goog-Upload-Protocol, X-Goog-Visitor-Id, X-HTTP-Method-Override, X-JavaScript-User-Agent, X-Pan-Versionid, X-Origin, X-Referer, X-Upload-Content-Length, X-Upload-Content-Type, X-Use-HTTP-Status-Code-Override, X-Ios-Bundle-Identifier, X-Android-Package, X-YouTube-VVT, X-YouTube-Page-CL, X-YouTube-Page-Timestamp
Access-Control-Allow-Methods: GET,OPTIONS
Access-Control-Allow-Origin: *
Alt-Svc: quic=":443"; ma=2592000; v="35,34"
Cache-Control: private, max-age=0
Content-Type: text/html; charset=UTF-8
Date: Mon, 19 Dec 2016 13:04:06 GMT
Expires: Mon, 19 Dec 2016 13:04:06 GMT
Server: UploadServer
X-Guploader-Uploadid: AEnB2UqEjGy12rLAYfFArnoCLId9g07z96rb0xUpx3Aaf_dGWdZ7nX2RW6jwtHU8Q9uf0NvvmDxhK31ItumFkKmRnzjLdQmYXw1BElQccwcNMLu6yhPqNaQ
Content-Length: 0

2016/12/19 14:04:06 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2016/12/19 14:04:08 : Dir.Attr

Is this still a problem? Drive seems to quite often throttle people for a while.

Looking up the project shows that the daily quota is definitely not being reached; however, it looks like the ‘drive.files.list’ method has been reaching its per-user Queries per 100 seconds quota at times. Basically, rclone isn’t respecting a throttle-down request from Google when doing drive.files.list queries.

Google should send a rateLimitExceeded message which rclone will obey in this case. How do you figure that rclone isn’t respecting a throttle message?
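For reference, when Google does throttle, the documented Drive API error is a 403 carrying a JSON body along these lines (whereas the 403 in the log above has Content-Length: 0, i.e. no body to match on):

{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "rateLimitExceeded",
        "message": "Rate Limit Exceeded"
      }
    ],
    "code": 403,
    "message": "Rate Limit Exceeded"
  }
}

The "rateLimitExceeded" reason is what signals a back-off; an empty-bodied 403 gives a client nothing to go on.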

I can tell because I’m looking at the API method from my console, and it’s clearly NOT throttling back till gDrive says “no more”

hmm, do you think there is a message that rclone is ignoring or treating incorrectly?

@ncw Is there any way I can help you test this? I have two spare drive accounts I’d be willing to hammer if you’d like to try and narrow this down.

I also have a few spare accounts that I can hammer and try and help.

I’d love to be able to use rclone as a mount with Drive without the bans.

I can even get Google Support (Sarah) to help us with whatever is needed, as she is willing to work on figuring out what we are doing wrong.

Thank you all for your offers to help :slight_smile:

I’ve had a look through the rclone error handling and there could be cases which I’ve missed.

The thing to try would probably be to run something like this repeatedly

rclone -v --dump-bodies --log-file ls.log ls drive:

until it goes over quota or goes wrong, then look back through the logs to see exactly what error messages rclone received.

In the case of an ls command, I’d expect rclone to stop and give an error, or retry, so it should just be a question of looking quite near the end of the log.
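A minimal loop like this would do it (the per-run log names and 10 second pause are just my choices here; the || break stops at the first failing run so its log is easy to find):

i=0
while true; do
  i=$((i+1))
  rclone -v --dump-bodies --log-file "ls-$i.log" ls drive: || break
  sleep 10
done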

So I’ve tried it via ls, and it’s not killing my connection this time.

What I would suggest going forward is a DB holding the file structure of the drive: just request updates from Google Drive asking what has changed, and update the DB accordingly. Currently it seems to pull the directory listing afresh every 5 minutes even if nothing has changed, meaning a lot of time is spent re-reading a directory that will never change.

Just a thought
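For illustration, the Drive API does expose a changes feed that could drive this. A rough curl sketch against the v2 changes.list endpoint (not something rclone does today; $TOKEN is an OAuth access token, $PAGE_TOKEN the nextPageToken from the previous response):

curl -s -H "Authorization: Bearer $TOKEN" \
  "https://www.googleapis.com/drive/v2/changes?maxResults=100&pageToken=$PAGE_TOKEN"

Each response lists the changed items along with largestChangeId and a nextPageToken to resume from, so a local DB would only need to apply deltas instead of re-listing unchanged directories.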

Hmm!

There are some issues about that on GitHub. Keeping a local copy of the remote file system isn’t something I’ve wanted to do, since rclone works without any local state at all (other than the config file), which makes it much more like rsync. However, it could be made optional.

I really hope this can help.
I really don’t know how to use Google Drive with Plex.
Thanks for your work.

Just reporting back on the command you suggested @ncw

I had it running in a loop for over 24 hours and never got an error.

I can confirm that the ls command doesn’t cause the problem, but scanning with Plex does.

That is useful information.

What is the difference between the two? What does Plex do when scanning? Does it just look through the directory tree, or does it do more?

If you could capture a log of Plex scanning with -v --dump-headers, that would be very useful.

Running egrep '(GET|HEAD|POST|PUT)' logfile on that will show what operations it is doing, which might give a clue.

Directory listings look like this: GET /drive/v2/files?alt=json&maxResults=1000&q=trashed%3Dfalse+and+%271tS-dTCBgT7nfSQr01djYMcFpQnZTBQ%27+in+parents

What might be happening is that the root directory is being listed many times or something like that.
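A quick way to check for that from a --dump-headers log (a one-liner sketch; plex-scan.log stands in for whatever log file was captured):

grep -o 'GET /drive/v2/files[^ ]*' plex-scan.log | sort | uniq -c | sort -rn | head

If one parent ID dominates the counts, the same directory is being listed over and over.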

I’m doing one with the default client ID (i.e. not mine) and will report back.

https://smoke-mirrors.box.com/s/jm0jebg0p3wx7z1ygx3wnuovlzppzlyn

This is a snippet of the log. If you want direct access to the server that’s having problems, I’m happy to take an SSH key to let you log in.

Steve

Thanks.

The account is in a locked-out state at the moment, by the look of it.

I could do with a log of a complete Plex scan while it is in a working state, or better still, one that captures it working and then stopping.

That is likely to be quite big though.

I don’t really want to go poking around on your server but it might be an easier way of transferring big files?

I made a new Plex library, mounted with "rclone mount DK: /home/plex/.DK --max-read-ahead 400M --checkers=40 --allow-other -v --dump-headers --log-file DK.log &". It’s scanning now; I’ll send you the full log once I get the 403 error.