Bandwidth drops after parse errors / limits are reached

Here you can see the peak from the last two test copies I ran. The second one is still running.

rclone copy gcrypted:music_library/ /mnt/user/MURRAY/Music/ --exclude "cleanup/" --exclude "07-2022/" --progress --log-file=/mnt/user/MURRAY/rclonelog.txt --user-agent="Mozilla/5.0" -vv --tpslimit 10 --fast-list

Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 8m33.4s

I guess something happens in the background, but I'm not sure what, exactly.

The log shows me a lot of 403 errors, but all of them seem to be retries (1/10), and as you can see, I'm not hitting the limits when checking the API graphs. Right?

If you are seeing 403s in the log file and not seeing them in the Google Admin Console, something is wrong as I don't think you are using the client ID/secret.

If you recreated something, you may want to run:

rclone config reconnect YOURREMOTE:

And let it refresh and see.

Is something else using that oauth ID? I'd make one specific for what you are doing so you are 1000% sure that it is using the right one and you can see traffic in the console:

You can make as many apps as you need, and they all tie back to the user anyway, so there's no harm and it helps to separate traffic.

As I deleted the only one I had before, there is no way anything else could be using this client ID.

I can see errors:

It's just that the error rate is quite low:

Since I reconfigured rclone using the newly created client ID, I guess --tpslimit now works, and that's the reason my rclone client is scanning files and showing me the excluded ones one by one, without having started the actual copy yet?

If your quotas are the defaults:


I still don't understand: if you are seeing the app name properly on that screen and it's incrementing, then with a small TPS limit of 10, there shouldn't be any rate limiting going on.

I'd be interested to see if you start at, say, 1 and work your way up: does it eventually break? If so, what breaks?

If it fails at 1 as an example, either Google has something busted for you or rclone isn't honoring the tps limit.

aaaaand here we are again:

Once I click on Google API, I reach this screen:

I can't see any graphs there, only the following in the table below.

And here are the quotas:

But once more: I re-created the client ID, re-created the remote using rclone config with the client ID and secret, and the behavior is the same again: it starts at full speed, then drops to a few kilobytes per second.

I restarted the transfer using --tpslimit 1 and it immediately started copying at super low rates.

So we can't really mix issues here: if you are not getting rate limited, the other thread is more applicable:

Rclone mount random slow speeds - Help and Support - rclone forum

As Google has been slow for just about everyone at different spots around the world.

If the rate limiting issue isn't happening, I'd read that other thread as that's more relevant.

Running it with only 1 query per second seems to totally avoid the 403s, but the bandwidth still drops. I don't even reach a kbit/s now.

Not sure what the people on the other thread are discussing; I'll have to dig into it.

But in short, it seems like Google is capping us, right? Or is it a peering issue that suddenly appeared? I really never had any issue downloading my files before. The only thing that changed is that I decided to grab my online content back locally. Maybe they cap people who download more than XX terabytes in a short period of time?

No. It just depends on which IP www.googleapis "decides" to answer. Some are really slow.

And this is something new, right? I've never, ever had an issue in the past.
Anyway, let's go to the other thread, I guess.

Still, there is something weird. I don't believe my issue is due to slow Google servers.

Why do I get all these "User rate limit exceeded" events when I can see I'm actually using only 6% of the max:

I just did another test. I used the same user (same google account) and tried to start a copy to a NEWLY created team drive.

I simply copied a 100 MB test file (an rclone log, for instance). As soon as I started the copy, I got the exact same 403 errors again. So my guess is it is user-related. What other type of API limit could I be reaching? And where should I go to display the graphs?

There isn't any place to easily see exactly what is being rate limited and why because Google doesn't share that. You can see the aggregate values over time / per 100 seconds, but that's about it.

Given that no one else is reporting rate limiting, there aren't too many other ideas I can think of:

  1. You aren't using what you think you are; your screenshots seem to make that unlikely, but I can't be sure.
  2. Your API is messed up and you'd have to ask Google Support. Doubtful they will say much.
  3. You hit an odd regression with a version. You can downgrade and test whether earlier versions produce the same issue or not.
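For the downgrade test, one way to try an earlier release alongside the installed one is to fetch the standalone binary. The URL layout here is assumed from rclone's download site, and the version/platform are examples, so adjust for your setup:

```shell
# Fetch a specific older rclone release and run it side by side with the
# installed copy (Linux amd64 shown; URL layout and version are assumptions)
curl -fsSLO https://downloads.rclone.org/v1.58.1/rclone-v1.58.1-linux-amd64.zip
unzip -q rclone-v1.58.1-linux-amd64.zip
./rclone-v1.58.1-linux-amd64/rclone version
```

Running the unpacked binary directly means the system-wide install stays untouched while you compare versions.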

You said 1 didn't give you 403s, but you didn't seem to go to 2, 3, 4, 5, etc. and build a good test base.

The speeds aren't super relevant as there is a whole thread about bad speeds with Google so let's not mix issues.

You are right; I will try --tpslimit 2, 3, 4... and see when 403s appear again.

Question: does --tpslimit cover every single aspect of the API? I mean, file list requests, file names, copies?

Yes, every transaction would be part of the overall limit.
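As a rough illustration of what "every transaction" means (this is a hypothetical sketch, not rclone's actual implementation): one shared pacer gates every operation type, so listings, stats and copies all draw from the same per-second budget.

```shell
# Hypothetical sketch: a single pacer spaces out all operation types,
# so list, stat and copy calls share one per-second budget.
tps=4
interval=$(awk -v t="$tps" 'BEGIN { printf "%.3f", 1 / t }')
issued=""
for op in list stat copy copy copy; do
  sleep "$interval"            # every call waits on the same shared pacer
  issued="$issued $op"
  echo "issued: $op"
done
```

With tps=4, each operation waits 0.25 s regardless of its type, which is why a low --tpslimit slows listing as well as copying.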


This is quite straightforward:

--tpslimit 3 -> absolutely NO 403s.

But we can clearly see that I started the transfer and that it suddenly drops to barely any bandwidth at all.

I will try rolling back to a previous version of rclone and see if that could be the problem; this started after the latest update (1.59).

So 1/2/3 work but when you go to 4, you get 403s? Please just ignore the speed part as that's a separate issue.

--tpslimit 4: also no 403s...

I basically run a transfer, put the -vv output in a log, and run cat log.txt | grep "403" on it.
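For what it's worth, grep can count the 403s and bucket them by time directly, which makes it easier to see when they cluster. The log lines below are synthetic stand-ins in rclone's -vv timestamp style; point the commands at your real log instead:

```shell
# Synthetic sample log in rclone's "YYYY/MM/DD HH:MM:SS" -vv style
cat > log.txt <<'EOF'
2022/07/25 23:23:01 DEBUG : file1.flac: Received error: googleapi: Error 403: User rate limit exceeded
2022/07/25 23:23:02 DEBUG : file2.flac: copied
2022/07/25 23:37:15 DEBUG : file3.flac: Received error: googleapi: Error 403: User rate limit exceeded
EOF

# Count 403s (grep -c replaces the cat | grep pipeline)
count=$(grep -c "403" log.txt)
echo "403 count: $count"

# Bucket 403s by HH:MM to see when they cluster in time
grep "403" log.txt | awk '{ print $1, substr($2, 1, 5) }' | sort | uniq -c
```

The time-bucketed view would show whether the 403s line up with the moment the bandwidth drops.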

The API graphs show no errors happened.

But let's face it: as you can see in the above screenshot, the second test I made (--tpslimit 4) shows no 403s, which is fine, but rclone is totally unusable now. I can't transfer any files for more than XX minutes to my local storage, so bandwidth definitely is an issue.

I think you are misunderstanding me.

We have two problems to solve here, and you can't fix both at the same time. I shared another thread with weeks of information on slowness with Google and ways to get around it.

If you want to solve the 403 issue, we need to step through it methodically to figure it out. If you'd like to continue with the 403 issue, step up the TPS and see where it breaks. Once you have a breaking point, validate that with a few different versions and see if perhaps a regression came up causing an issue.
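Stepping up the TPS can be scripted so each run is comparable. This is a sketch only: the remote name, paths, and the tpslimit ladder are placeholders, and it assumes you repeat the same small copy at each limit and count the 403s each run logged:

```shell
# Sketch: repeat one small test copy at increasing --tpslimit values and
# count the 403s each run logs. Remote name and paths are placeholders.
for tps in 1 2 3 4 5 6 8 10; do
  log="tps${tps}.log"
  rclone copy gcrypted:testdir /tmp/rclone-tps-test \
    --tpslimit "$tps" -vv --log-file "$log"
  echo "tpslimit=$tps -> $(grep -c '403' "$log") 403s"
done
```

Once a breaking point shows up, re-running the same loop on a different rclone version would show whether it is a regression.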

I've seen no other reports of rate limits from anyone, so that feels unlikely, but your specific situation/use might be unique.

We won't know anything for sure unless we take a pragmatic approach to fix it. If you want to fix it fast, I'm not your guy. If you want to get to the root cause, we need to step through it slowly knowing what we change and where it breaks.

You need to make the call here, as it's your time as well: decide how important it is for you to figure out. I can only offer my thought process, since I don't use Google Drive, and I get paid the exact same whether it works or not (that's my attempt at humor).

haha yeah, well, I will have to keep testing tomorrow, it's getting quite late over here :wink:

See you tomorrow @Animosity022, and thanks again for the great support.

I'll figure out the breaking point + will downgrade to see if going back to 1.58.1 fixes something. Stay tuned.

One more test ran overnight: I started a transfer and left it running for approx. 9 hours:

As usual, it started OK.

--tpslimit 4

I finally got two 403s at 8 am, but nothing significant really.

The graphs:

Here we see the copy started at approx. 11:23 pm.

And the drop happened around 11:37 pm.

I did use a manually installed rclone 1.58 to run this test, so it is unfortunately not related to update 1.59.

We can also see that rate limiting finally gets triggered when using --tpslimit 4.

Hmm, at that point I'm not quite sure, as it seems like an account problem. Perhaps a support call to see if they can give some details might help.

@ncw - any other thoughts? Seeing the OP hit an issue with 403s with a tpslimit of 4 with the quotas being so high makes no sense to me. If rclone itself was misbehaving, I think more people would be chiming in.