Bandwidth drops after pacer errors / limits are reached

What is the problem you are having with rclone?

I am in the process of downloading a bunch of stuff I had stored in the cloud onto my new Unraid server: mostly ROMs and small files. The problem is, it represents several hundred thousand files, and it seems the act of verifying the files makes me hit the API limits.

FYI, I've checked and I should be allowed to make 100,000 API hits per 100 seconds.

I start the copy with -vv and can see very quickly that I start getting pacer errors telling me the limit is reached. As soon as this happens, my full-speed downloads (45 MB/s max at home) start to fall, then end up at a terrible speed of approx 600 KB/s while I keep getting the following error, with ever longer sleep values:

2022-07-16 00:00:46 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022-07-16 00:00:46 DEBUG : pacer: Rate limited, increasing sleep to 16.223130348s

I also got another, more detailed version of the error when trying to copy music files that are not located on a shared drive:

"2022-07-15 23:28:15 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: userRateLimitExceeded)"

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.0

  • os/version: slackware 15.0 (64 bit)
  • os/kernel: 5.15.46-Unraid (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.18.3
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive enterprise (where the files are stored)
Unraid (local destination folder)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy team2:Games/No-Intro/ /mnt/user/Emulation/Roms/ --progress --size-only --transfers=16 -vv

A log from the command with the -vv flag

The only errors that need to be noticed are pasted above, really.

So my question is: how can I grab the rest of my files at a decent speed?

Thanks! :slight_smile:

Don't use 16 transfers.

Remove it and use the default.
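
For example, a sketch reusing your original source and destination paths: dropping --transfers entirely falls back to the default of 4 concurrent transfers.

rclone copy team2:Games/No-Intro/ /mnt/user/Emulation/Roms/ --progress --size-only -vv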

I just restarted the transfers (the next day, actually) and they ran at full speed for a while. After approx 200 GB had been transferred, it was back to 200 kbit/s.

Transferred: 267.248 GiB / 1.721 TiB, 15%, 19.684 KiB/s, ETA 2y27w2d15h53m20s
Checks: 84120 / 84120, 100%
Transferred: 11527 / 21551, 53%
Elapsed time: 3h40m51.5s

The strange thing is I cannot see any specific API hit error anymore. It's really as if Google now caps downloads once they hit 200 GB or something.

I did try running it without the 16 concurrent transfers, and the same problem is present.

Without seeing a log file, it's just a guessing game, which is no fun. If you want to include a log file, I'm happy to assist.


The problem is that, due to all these checks, the log file grows above 170MB in a matter of minutes.

The only way I could figure out to share it is this Google Drive link:

What is important to note is that everything goes fine until "something" happens, and I then seem to be capped at super slow speeds. I can't see any errors in the logs anymore. What you can see is that the transfer rate goes lower and lower for as long as the transfer runs.

Now, each time I run the transfer again, the speeds are terrible. If I give it one more try tomorrow, it will process some gigabytes at 45 MB/s before it drops to a few kbit/s again.

Try adding --tpslimit 10 when the rate limiting resets. This should hopefully stop the rate limit kicking in. If not, lower the number.

Google rate limits are mysterious and undocumented so this will require some trial and error.
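
For example, something along these lines (a sketch based on your original copy command) would cap rclone at roughly 10 API transactions per second:

rclone copy team2:Games/No-Intro/ /mnt/user/Emulation/Roms/ --progress --size-only --tpslimit 10 -vv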

Yes, I did try this yesterday while investigating. I also tried the user-agent thing. Same problem.

I really don't get this, as I never had any issue downloading stuff before, and we are supposed to be allowed to download 10 TB daily without issue.

I just tried copying another folder:

rclone copy gcrypted:music_library/ /mnt/user/MURRAY/Music/ --exclude "cleanup/" --progress --user-agent="Mozilla/5.0" -vv --tpslimit 10

Same issue.

It starts at 45 MB/s, then I'm getting the errors listed above -> "2022-07-15 23:28:15 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: userRateLimitExceeded)"

And suddenly the bandwidth drops. I'm never going to be able to download my files locally that way :frowning:

Thinking this might be related to the checking of the already-downloaded files (hash/size), I tried to copy the files to a totally new, empty folder (Music2):

rclone copy gcrypted:music_library/ /mnt/user/MURRAY/Music2/ --exclude "cleanup/" --progress --user-agent="Mozilla/5.0" -vv --tpslimit 10

SAME problem again.

hi,
i did not see the redacted config file?
does it contain a client id/secret for the gdrive remote?

and i would also suggest using --fast-list
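
for reference, a gdrive remote using your own client id/secret would look roughly like this in rclone.conf (placeholder values, not real credentials):

[gdrive]
type = drive
client_id = 123456789.apps.googleusercontent.com
client_secret = your-client-secret-here
scope = drive
token = {"access_token":"xxxx","token_type":"Bearer","refresh_token":"xxxx","expiry":"..."}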

That's just a snippet of a log; can you share the full log?

And as @asdffdsa said, please post the rclone.conf with the keys/secrets redacted. Basically, everything in the template as it saves so much trouble/confusion.

Sorry guys, I did not expect a 179 MB log to be just a snippet :wink:

I will provide you with the conf file (I will hide the keys/secrets of course) + the full log (but what else do you need in the logs? I ran the copy using -vv and pasted the full thing; I simply stopped the transfer once I saw the bandwidth was low. So what am I expected to send?)

The full log is what I'd expect, so if you can include that, it would be great.

Well, as the only thing that could have changed since my last transfer (43 TB transferred at full speed with no issue) is that rclone was updated to 1.59, I decided to update to the latest beta (1.60) and give it one more try.

I'm running the exact same transfer, and no more drops. Updating to the 1.60 beta fixed the issue. I am surprised to be the only one reporting this, but OK, it seems to be running fine now. No clue what happened with 1.59 on my end.

Without a full log file, it'll be a universal mystery.

Is it related to the following topic: https://forum.rclone.org/t/rclone-mount-random-slow-speeds/31417/213? Many of the Google API endpoints appear to have been rate limited for the past few weeks. Pointing my hosts file at a tested, good IP for www.googleapis.com has fixed it for me.
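
For reference, that workaround is just a hosts-file entry pinning the API hostname to an address you have tested yourself; the IP below is only a placeholder, not a recommendation:

# /etc/hosts - placeholder IP, replace with an address that tests well for you
203.0.113.10 www.googleapis.com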

In that thread, folks weren't seeing rate limits, just slow performance on certain IPs. By routing around slow IPs/bad peering, they fixed their issues by taking a better path.

The reason I keep asking for a full log file is that it'll help articulate the problem and show whether it's the same issue as the other thread or not...

OK, I spoke too soon. The transfer ran for some hours without problems with the 1.60 beta.

Here is the log @Animosity022 :

https://zerobin.net/?90e238d94b8df063#uAfK8oOQs8ulDY/nFtHGjhiZ9eelNI8PuuKdqebrZOU=

This log contains several transfer sessions. The first one I ran with the 1.60 beta starts at line 6351, and the last one was run starting at line 13781.

What I can see there is that the pacer limit gets reached as soon as the transfer is started. As I emptied the destination directory, I guess this is simply due to rclone listing the files in the folders it has to start copying?

One more time, the transfer ran at full speed with no issues for a while, and now it runs at a few kbit/s.

I am not sure why my download speed would be capped like this, as I am supposed to be allowed to download my own files from my Google Drive, and I have always heard the download limit is 10 TB daily.

Fun fact: the download speed reported by rclone is still 42 MB/s, but you can clearly see that the transfers are not reaching that speed. Also, I'd have transferred way more data in 17 hours of run time.

One strange thing: I can see some weird config file / token debug messages in there telling me about invalid tokens.

I tried one more thing, for the sake of testing. I deleted my rclone.conf file, uninstalled rclone, and deleted everything related to it. Then I reinstalled the rclone 1.60 beta and re-did a very basic rclone config.

I'm running the transfer again, and it drops after a while (this time after having transferred 150 GB).

So I stop the transfer and start it again, and as soon as I start it:

2022/07/18 11:06:06 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:06 DEBUG : pacer: Rate limited, increasing sleep to 1.469504226s
2022/07/18 11:06:06 DEBUG : pacer: Reducing sleep to 0s
2022/07/18 11:06:06 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:06 DEBUG : pacer: Rate limited, increasing sleep to 1.756987247s
2022/07/18 11:06:06 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:06 DEBUG : pacer: Rate limited, increasing sleep to 2.713859471s
2022/07/18 11:06:06 DEBUG : pacer: Reducing sleep to 0s
2022/07/18 11:06:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:07 DEBUG : pacer: Rate limited, increasing sleep to 1.826227104s
2022/07/18 11:06:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:07 DEBUG : pacer: Rate limited, increasing sleep to 2.531745146s
2022/07/18 11:06:07 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:07 DEBUG : pacer: Rate limited, increasing sleep to 4.98820478s
2022/07/18 11:06:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:07 DEBUG : pacer: Rate limited, increasing sleep to 8.947909724s
2022/07/18 11:06:07 DEBUG : pacer: Reducing sleep to 0s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 1.797902858s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 2.426399856s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 4.115923168s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 8.563212721s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 16.3609193s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 16.666002215s
2022/07/18 11:06:08 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:08 DEBUG : pacer: Rate limited, increasing sleep to 16.229239326s
2022/07/18 11:06:09 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:09 DEBUG : pacer: Rate limited, increasing sleep to 16.378599098s
2022/07/18 11:06:09 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:09 DEBUG : pacer: Rate limited, increasing sleep to 16.800749811s
2022/07/18 11:06:09 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:09 DEBUG : pacer: Rate limited, increasing sleep to 16.432552319s

2022/07/18 11:06:10 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2022/07/18 11:06:10 DEBUG : pacer: Rate limited, increasing sleep to 16.909369389s

This is insane. Google makes it impossible for their users to download their own files now? Why would they do this?

If you can include that, it would be helpful.

What's "/mnt/user/MURRAY/Music2/"? Local disk?

I'd consider deduping the source (with rclone dedupe, for example; see the sketch below the notices), as these duplicates are scattered throughout, so you aren't going to get a consistent copy:

2022/07/17 12:04:12 NOTICE: 06-2020/E'voke - Arms Of Loren [FESCD 10]: Duplicate directory found in source - ignoring
2022/07/17 12:04:12 NOTICE: 06-2020/Guillaume Tiger - Surface - Binaural Live Session (2020) [WEB 24bit FLAC]: Duplicate directory found in source - ignoring
2022/07/17 12:04:12 NOTICE: 06-2020/Jim O'Rourke - Steamroom 49 (2020) [WEB 24bit FLAC]: Duplicate directory found in source - ignoring
2022/07/17 12:04:13 NOTICE: 05-2021/Various - Ne - 1988 [Cassette RIP] [FLAC 16-44]: Duplicate directory found in source - ignoring
2022/07/17 12:04:13 NOTICE: 05-2021/de Witte, Charlotte - Formula EP (2021 WF): Duplicate directory found in source - ignoring
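
Something like this should let you review and clean up the duplicates interactively; a sketch, pointed at the same crypt remote you are copying from:

rclone dedupe --dedupe-mode interactive gcrypted:music_library/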

--fast-list would be helpful.
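
That would just be added to your existing command, e.g. something along these lines:

rclone copy gcrypted:music_library/ /mnt/user/MURRAY/Music/ --exclude "cleanup/" --progress --fast-list --tpslimit 10 -vv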

You are back to 16 again.

What does the error rate show in the Google console for that snippet when you were logging errors? It should look something like this from the console.

FYI, for the last transfer I did remove --transfers=16, so I was back to 4 concurrent copies.
Same problem happened.

MURRAY is my local array, yes. The same one I copied 42 TB to a few weeks ago without any problem, and I never had a speed issue.

The config did not change.
Here is its content:

[gdrive]
type = drive
scope = drive
token = {"access_token":"xxxxxxxxxxxx ","token_type":"Bearer","refresh_token":"xxxxxxxxxx","expiry":"2022-07-18T14:50:14.899929608+02:00"}
team_drive =

[gcrypted]
type = crypt
remote = gdrive:gdrive-crypted
password = xxxxxx
password2 = xxxxxxx

Strangely, I have the feeling the bandwidth starts to drop when I reach 150 GB in each transfer. If I wait for a while and restart the transfer, it will start at full speed again, then fall to a few kB/s again.