Rclone size: hitting Google Drive rate limit 403?

Hello! First post here on the rclone forum (though I’ve lurked it for help many times in recent months – thanks, all!) as I’m a bit stumped with this issue. I know others have hit the infamous 403 rate limit with rclone/Google before, but I didn’t expect it to happen when just running rclone size.

Since August, I’ve synced about 90TB of data across innumerable subdirectories from a mounted fileserver directory into a Google My Drive (G Suite) directory. It took many divide-and-conquer syncs from 7 different accounts configured as remotes, but the last of it is finally synced.

Now, I’d like to weigh the file or object count in Drive against the count on the fileserver. I tried using rclone size remote:dest --tpslimit 8 --fast-list. It works on a small scale, but when I try it on any higher-level folder that contains a little over a million files, it eventually dies:

 error listing: couldn't list directory: googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded
2018/10/22 19:11:07 Failed to size: couldn't list directory: googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded

I’m sure there’s a way to check the file count without hitting the rate limit (I managed to get the data in there to begin with), but what am I missing? For what it’s worth, I used --tpslimit 8 and --bwlimit 8650k when transferring. Should I lower the --tpslimit?

Any guidance is appreciated, happy to give more info and context as needed, too! Cheers.

You can read this thread; it’s very similar.


What happens if you reduce that tpslimit to 5 and try? You can even go lower; usually a single rate limit message just causes rclone to retry after a pause.
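For reference, that retry-after-a-pause behavior is an exponential backoff: on a 403, wait, double the wait, and try again. A minimal illustrative sketch (fake_api_call and the retry limits here are made up for the demo, not rclone's actual pacer code):

```shell
#!/bin/sh
# Sketch of retry-with-exponential-backoff, the strategy rclone's pacer
# uses when Drive returns a 403 rateLimitExceeded.
# fake_api_call stands in for a real Drive listing request:
# here it fails twice, then succeeds on the third attempt.
calls=0
fake_api_call() {
  calls=$((calls + 1))
  [ "$calls" -ge 3 ]
}

delay=1
attempt=1
while [ "$attempt" -le 5 ]; do
  if fake_api_call; then
    echo "succeeded on attempt $attempt"
    break
  fi
  echo "403 rate limit: sleeping ${delay}s before retry"
  sleep "$delay"
  delay=$((delay * 2))   # double the wait after each failure
  attempt=$((attempt + 1))
done
```

The point is that an occasional 403 isn't fatal; rclone paces itself and carries on, so a lower --tpslimit mostly just trades speed for fewer retries.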

I’m nowhere near your number of files or size, but for comparison:

 rclone size --fast-list gcrypt:
Total objects: 24209
Total size: 49.010 TBytes (53887514020357 Bytes)

I see a much reduced execution time with that compared to the default values.


Thank you both for the advice! I ended up using --fast-list after I made the OP. However, it turns out the real trick was including the --bwlimit 8650k option; it finally worked after several hours and gave me usable output.

Take care! For anyone else who may someday run into this issue, try something like:

rclone size remote:dest --tpslimit 5 --bwlimit 8650k --fast-list