Bandwidth drops after parse errors / limits are reached

That's just a snippet of a log. Can you share the full log?

And as @asdffdsa said, please post the rclone.conf with the keys/secrets redacted. Basically, everything in the template, as it saves so much trouble/confusion.

Sorry guys, I did not expect a 179 MB log to be just a snippet :wink:

I will provide the conf file (with the keys/secrets hidden, of course) plus the full log. But what else do you need from the logs? I ran the copy with -vv and pasted the whole thing; I simply stopped the transfer once I saw the bandwidth was low. So what am I expected to send?

The full log is what I’d expect so if you can include that, it would be great.

Well, as the only thing that could have changed since my last transfer (43 TB transferred at full speed with no issue) is that rclone updated to 1.59, I decided to update to the latest beta (1.60) and give it one more try.

I'm running the exact same transfer, and there are no more drops. Updating to the 1.60 beta fixed the issue. I am surprised to be the only one reporting this, but OK, it seems to be running fine now. No clue what happened with 1.59 on my end.

Without a full log file, it'll be a universal mystery.

Is it related to the following topic: https://forum.rclone.org/t/rclone-mount-random-slow-speeds/31417/213? Many of the Google API endpoints appear to have been rate limited for the past few weeks. Setting my hosts file to point at a tested-good IP for www.googleapis.com fixed it for me.
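
For anyone wanting to try that workaround, it's just a hosts-file entry pinning the hostname to an IP you have already verified to be fast. A minimal sketch, where the IP below is purely a placeholder and not a recommendation; benchmark your own candidates first:

# append an override for www.googleapis.com (use an IP you have tested yourself)
echo "142.250.74.234 www.googleapis.com" | sudo tee -a /etc/hosts
# confirm the override is in effect
ping -c 1 www.googleapis.com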

In that thread, folks weren't seeing rate limits, just slow performance on certain IPs. By routing around the slow IPs/bad peering, they fixed their issues.

The reason I keep asking for a full log file is that it'll help articulate the problem and show whether it's the same issue or not, based on the other thread...

OK, I spoke too soon. The transfer ran for some hours without problems with the 1.60 beta.

Here is the log, @Animosity022:

https://zerobin.net/?90e238d94b8df063#uAfK8oOQs8ulDY/nFtHGjhiZ9eelNI8PuuKdqebrZOU=

This log contains several transfer sessions. The first one I ran with the 1.60 beta starts at line 6351, and the last one starts at line 13781.

What I can see there is that the pacer limit gets reached as soon as the transfer starts. As I emptied the destination directory, I guess this is simply rclone listing the files and folders it has to copy?

Once again, the transfer ran at full speed with no issues for a while, and now runs at a few kbit/s.

I am not sure why my download speed would be capped like this, as I am supposed to be allowed to download my own files from my Google Drive, and I've always heard the daily download limit is 10 TB.

Fun fact: the download speed reported by rclone is still 42 MB/s, but you can clearly see that the transfers are not reaching that speed. Also, I'd have transferred way more data in 17 hours of run time.

One strange thing: I can see some weird config file/token debug messages in there telling me about invalid tokens.

I tried one more thing, for the sake of testing: I deleted my rclone.conf file, uninstalled rclone, and deleted everything related to it. Then I reinstalled the rclone 1.60 beta and re-did a very basic rclone config.

I ran the transfer again, and it drops after a while (this time after having transferred 150 GB).

So I stopped the transfer and started it again, and as soon as I start it:

2022/07/18 11:06:06 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:06 DEBUG : pacer: Rate limited, increasing sleep to 1.469504226s
2022/07/18 11:06:06 DEBUG : pacer: Reducing sleep to 0s
2022/07/18 11:06:06 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:06 DEBUG : pacer: Rate limited, increasing sleep to 1.756987247s
2022/07/18 11:06:06 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:06 DEBUG : pacer: Rate limited, increasing sleep to 2.713859471s
2022/07/18 11:06:06 DEBUG : pacer: Reducing sleep to 0s
2022/07/18 11:06:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:07 DEBUG : pacer: Rate limited, increasing sleep to 1.826227104s
2022/07/18 11:06:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:07 DEBUG : pacer: Rate limited, increasing sleep to 2.531745146s
2022/07/18 11:06:07 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:07 DEBUG : pacer: Rate limited, increasing sleep to 4.98820478s
2022/07/18 11:06:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:07 DEBUG : pacer: Rate limited, increasing sleep to 8.947909724s
2022/07/18 11:06:07 DEBUG : pacer: Reducing sleep to 0s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 1.797902858s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 2.426399856s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 4.115923168s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 8.563212721s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 16.3609193s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 16.666002215s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 16.229239326s
2022/07/18 11:06:09 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:09 DEBUG : pacer: Rate limited, increasing sleep to 16.378599098s
2022/07/18 11:06:09 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:09 DEBUG : pacer: Rate limited, increasing sleep to 16.800749811s
2022/07/18 11:06:09 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:09 DEBUG : pacer: Rate limited, increasing sleep to 16.432552319s

2022/07/18 11:06:10 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:10 DEBUG : pacer: Rate limited, increasing sleep to 16.909369389s
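
Those "increasing sleep" lines are rclone's pacer doing exponential backoff against the 403s: the sleep roughly doubles on each rate-limit error and is reset towards 0 on success. If you ever want to slow the drive backend down pre-emptively instead of letting it back off, it exposes pacer knobs; a sketch only, with illustrative values:

# illustrative values, not a recommendation from this thread
rclone copy gcrypted:music_library/ /mnt/user/MURRAY/Music2/ --drive-pacer-min-sleep 200ms --drive-pacer-burst 10 -vv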

This is insane. Is Google making it impossible for users to download their own files now? Why would they do this?

If you can include that, it would be helpful.

What's "/mnt/user/MURRAY/Music2/"? Local disk?

I'd consider deduping the source, as these duplicates are scattered throughout, so you aren't going to get a consistent copy (see the sketch after the log excerpt):

2022/07/17 12:04:12 NOTICE: 06-2020/E'voke ‎- Arms Of Loren [FESCD 10]: Duplicate directory found in source - ignoring
2022/07/17 12:04:12 NOTICE: 06-2020/Guillaume Tiger - Surface - Binaural Live Session (2020) [WEB 24bit FLAC]: Duplicate directory found in source - ignoring
2022/07/17 12:04:12 NOTICE: 06-2020/Jim O'Rourke - Steamroom 49 (2020) [WEB 24bit FLAC]: Duplicate directory found in source - ignoring
2022/07/17 12:04:13 NOTICE: 05-2021/Various - Ne - 1988 [Cassette RIP] [FLAC 16-44]: Duplicate directory found in source - ignoring
2022/07/17 12:04:13 NOTICE: 05-2021/de Witte, Charlotte - Formula EP (2021 WF): Duplicate directory found in source - ignoring
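
A dedupe pass could look something like this: a sketch, assuming the same gcrypted:music_library source as the copy command, run with --dry-run first so it only reports what it would do:

# --dedupe-mode newest keeps the newest copy of duplicate files;
# duplicate directories are merged regardless of mode
rclone dedupe --dedupe-mode newest gcrypted:music_library --dry-run -vv

Pick whichever --dedupe-mode (newest, oldest, rename, interactive, ...) you're comfortable with before running it for real.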

--fast-list would be helpful.
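
For reference, it's just an extra global flag on the same copy command (sketch below); it trades extra memory for far fewer directory-listing API calls, which is likely where the pacer messages during the initial listing come from:

rclone copy gcrypted:music_library/ /mnt/user/MURRAY/Music2/ --fast-list -vv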

You are back to 16 again.

What does the error rate show in the Google console for that snippet, from when you were logging errors? It should look something like this from the console.

FYI, for the last transfer I removed transfers=16, so I was back to 4 concurrent copies.
The same problem happened.

MURRAY is my local array, yes. The same one I copied 42 TB to a few weeks ago without any problem, and I never had a speed issue.

The config did not change.
Here is its content:

[gdrive]
type = drive
scope = drive
token = {"access_token":"xxxxxxxxxxxx ","token_type":"Bearer","refresh_token":"xxxxxxxxxx","expiry":"2022-07-18T14:50:14.899929608+02:00"}
team_drive =

[gcrypted]
type = crypt
remote = gdrive:gdrive-crypted
password = xxxxxx
password2 = xxxxxxx

Strangely, I have the feeling the bandwidth starts to drop once I reach 150 GB in each transfer. If I wait for a while and restart the transfer, it will start at full speed again, then fall to a few kB/s again.

hi,
about that config file, you should use your own client id/secret
https://rclone.org/drive/#making-your-own-client-id
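
Once created, it ends up as two extra lines in the drive remote, roughly like this (all values are placeholders):

[gdrive]
type = drive
client_id = xxxxxxxx.apps.googleusercontent.com
client_secret = xxxxxxxx
scope = drive
token = {...}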


That's pretty much the fix here, and that's why we ask for that stuff up front, as it saves a ton of your time and mine: the default rclone keys are oversubscribed, and that's why you get those messages.

That's why a --tpslimit of 10 won't help.

Please make your own client ID/secret and retest as that will greatly impact your performance.

Guys, I did create a project back in the day. I removed those details from the pasted conf file, as it would be as simple as copy/pasting them to get access to my Google Drive! :slight_smile:

Once again, this only started dropping about 10 days ago, and I have been using rclone for years.

I will make sure to re-follow your guide, as something may have got screwed up when I moved to my new server. But I usually just transferred the rclone.conf from one server to another and found all my remotes back without any issue.

I will check what happens once I've reconfigured rclone with my client ID from scratch.
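
For what it's worth, when moving configs between servers, rclone config file prints the path of the config rclone is actually reading, which is a quick sanity check that the transferred file is the one in use (sketch; the path will vary per install):

rclone config file
# Configuration file is stored at:
# /root/.config/rclone/rclone.conf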

I did figure this out, though:

It looks like Google has to RE-validate something?

When I try to reconfigure my OAuth consent screen, I am asked to enter more information than what is listed in your tutorial:

It asks me to enter a valid domain, etc.
Any clue?

If you don't share the fact that it exists, how would we possibly know it's there, since it's not included? We ask you to redact/remove only the secret part so we know it exists. It isn't obvious by any means, as folks generally do not make one.

Any chance you can post the actual screenshot I asked about and shared an example of? That's the bit I'm looking for...

But now I've deleted the old one, as it needed verification, and I need to create a new one. It keeps asking me to enter a domain, etc.

OK, so I created a totally new client ID and removed the "sensitive" thingy in order to get faster validation.

I recreated a totally new rclone config using the new client ID and secret.

Running this now:

rclone copy gcrypted:music_library/ /mnt/user/MURRAY/Music/ --exclude "cleanup/" --exclude "07-2022/" --progress --log-file=/mnt/user/MURRAY/rclonelog.txt --user-agent="Mozilla/5.0" --tpslimit 10 --fast-list -vv

Let's see if it keeps crashing! I'll keep you posted.

By the way, thank you so much for your efforts helping me here, guys; it is MUCH appreciated.

Deleted the old what? The Client ID/Secret? The consent screen?

Are you a GSuite / Education / Personal user?

The errors you are seeing with --tpslimit "look" like you aren't using a Client ID/Secret that's actually connected, and that's why --tpslimit does not seem to work.

My app is just called "test", as I don't use Google Drive anymore. You should be able to toggle by credential in the screenshot I'm sharing and set the window to something like 6 hours or 24 hours, depending on when you got the errors. The more exact you can be, the better the graph can help, as it won't average things out as much.

[screenshot: Google API console metrics graph, grouped by credential]

and my test app.

If you can reproduce it in the 1-hour or 6-hour window, that should mirror what you are seeing in the logs if you are being rate limited. If you are not seeing traffic there, the client ID/Secret isn't connected.
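
A quick local cross-check is rclone config show, which just prints the remote's stored settings (sketch):

rclone config show gdrive
# if there is no client_id line (or it is blank), the remote is still using
# rclone's shared default key, and your console graphs won't see its traffic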

Here you can see the peak based on the last two test copies I ran. The second one is still running.

"rclone copy gcrypted:music_library/ /mnt/user/MURRAY/Music/ --exclude "cleanup/" --progress --log-file=/mnt/user/MURRAY/rclonelog.txt --user-agent="Mozilla/5.0" -vv --tpslimit 10 -vv --exclude "07-2022/" --fast-list --progress

Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 8m33.4s

I guess something happens in the background, but I'm not sure what, exactly.

The log shows me a lot of 403 errors, but all of them seem to be retry 1/10, and as you can see, I'm not hitting the limits when checking the API graphs. Right?