Rclone v1.55.1 throttling and worse on OneDrive copy to GD

What is the problem you are having with rclone?

Typically I have no issues moving data between Google Drive and OneDrive in either direction. My standard command looks like this:
rclone --dry-run --ignore-existing --transfers=5 --checkers=16 --drive-chunk-size=16384k --drive-upload-cutoff=16384k --tpslimit 10 copy -P sourceOD:source_dir destGD:destination_dir -v

But today, all of a sudden, I cannot transfer more than about 80GB before being blocked, and the retry backoff is now showing over 33 hours when running with -vv.
My config had no personal client_id and secret, so I decided to create those today; no difference. I've moved hundreds of GB back and forth between these 2 cloud providers and my VPS, but this is weird. Even just running rclone size gives me "Too many requests. Trying again in 120296 seconds"; that's 1.39 days, so the banning seems aggressive right now.
I tried a bwlimit of 8M; that gave me less than 100 KB/s on a single transfer. I have also tried removing all the flags I typically use only for Google Drive; the same issue applies (those flags are --checkers=16 --drive-chunk-size=16384k --drive-upload-cutoff=16384k).

What is your rclone version (output from rclone version)

v1.55.1 (tested on macOS brew package too)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04.2 LTS 64bit

Which cloud storage system are you using? (eg Google Drive)

source: OneDrive Family plan account
destination: Google Drive One plan

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --dry-run --ignore-existing --transfers=5 --checkers=16 --drive-chunk-size=16384k --drive-upload-cutoff=16384k --tpslimit 10 copy -P sourceOD:source_dir destGD:destination_dir -v

The rclone config contents with secrets removed.

type = onedrive
client_id = 2da0f3d1-e9aa-47ae-b5f7-159c5e24cf31
client_secret = 
region = global
token = 
drive_id = f60e8541cb8f5186
drive_type = personal

A log from the command with the -vv flag

This is just for the size command:
2021/06/26 14:02:45 DEBUG : Using config file from "/home/user/.config/rclone/rclone.conf"
2021/06/26 14:02:45 DEBUG : rclone: Version "v1.55.1" starting with parameters ["rclone" "size" "sourceOD:source_dir" "-vv"]
2021/06/26 14:02:45 DEBUG : Creating backend with remote "sourceOD:source_dir"
2021/06/26 14:03:00 DEBUG : Too many requests. Trying again in 120296 seconds.
2021/06/26 14:03:00 DEBUG : pacer: low level retry 1/10 (error accessDenied: throttledRequest: Too Many Requests)
2021/06/26 14:03:00 DEBUG : pacer: Rate limited, increasing sleep to 33h24m56s
2021/06/26 14:03:00 DEBUG : pacer: Reducing sleep to 25h3m42s
2021/06/26 14:03:00 DEBUG : pacer: Reducing sleep to 18h47m46.5s
2021/06/26 14:03:00 DEBUG : pacer: Reducing sleep to 14h5m49.875s
2021/06/26 14:03:00 DEBUG : pacer: Reducing sleep to 10h34m22.40625s
2021/06/26 14:03:00 DEBUG : pacer: Reducing sleep to 7h55m46.8046875s
2021/06/26 14:03:00 DEBUG : pacer: Reducing sleep to 5h56m50.103515625s
2021/06/26 14:03:00 DEBUG : pacer: Reducing sleep to 4h27m37.577636718s
2021/06/26 14:03:00 DEBUG : pacer: Reducing sleep to 3h20m43.183227538s

That looks like you are being throttled by OneDrive.

Have you done:


Hey, thanks for the reply. I mentioned that in the details; I created them successfully, but it's the same situation.

There's no rclone.conf so it's a bit of a guessing game and you edited the first post so I have no idea what you changed.

What does your rclone.conf look like with secrets and keys blocked out? Are you running your own user agent as well? If OneDrive is throttling you, there isn't much to do from what I've seen as they tend to do that.

Apologies, I should have mentioned my edits.
I originally posted the OS and rclone version as macOS because I ran the test there too afterwards, but the main work is on Ubuntu, as in the config above.

Config file added. As you will see, it's the bare minimum; the flags are added on the command line. No issues with the GD destination, it's all about OneDrive.

There is a potential fix as it's a doc fix that needs to get pulled:

#### Excessive throttling or blocked on SharePoint

If you experience excessive throttling or are being blocked on SharePoint, then it may help to set the user agent explicitly with a flag like this: `--user-agent "ISV|rclone.org|rclone/v1.55.1"`  

The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)

@Ole is the official OneDrive throttling guy :slight_smile: and he can perhaps add some detail.
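For anyone wanting to try this, a full invocation with the explicit user agent might look like the sketch below (remote names taken from the first post; the snippet only assembles and prints the command rather than actually invoking rclone):

```shell
# Sketch: the thread's copy command with the suggested user agent added.
# Remote names (sourceOD, destGD) are from the original post; adjust to your own.
UA='ISV|rclone.org|rclone/v1.55.1'
CMD="rclone copy sourceOD:source_dir destGD:destination_dir --user-agent \"$UA\" -P -v"
echo "$CMD"
```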

Thanks. Just tried it, same issue.
I should mention that I added a new config entry when I created the client_id and secret, and both the default client_id/secret and my personal ones are now throttled.
The personal config has about 12 hours left; the default config entry still says 1.26 days!

I just ran my sync from OneDrive to GoogleDrive and it had the usual speed around 12 Mbyte/s. I use my own client ID on both remotes. The only tuning parameter is --checkers=16, the rest are defaults (that is no --tpslimit, --transfers=4 etc.)

I have found that it is best to stay just under the OneDrive throttle limit all the time, and that corresponds to the rclone default settings in most situations. This is especially important with respect to transfers: OneDrive Personal will only do 4 transfers and may slow down if you try using more. You will sometimes get better performance with 16 (or even 32) checkers if you only do it for a limited period each day (I have a 15 minute backup job).

Your job above has --transfers=5 and that may have triggered the OneDrive throttling, it typically cools down in 24 hours with no (or very limited) activity. Additional retries or load testing to troubleshoot the already active throttling will only make the throttling worse and extend the cool down period. The cool down period seems to affect all clients using the same Microsoft account. :cry:

I don’t think the --user-agent tip will help you - it is aimed at OneDrive Business (SharePoint).

Some other tips instead: I use --progress --stats 5s to find the optimal settings for my jobs. You may want to tweak --multi-thread-cutoff and/or --multi-thread-streams if you have a high proportion of files above 250MB.
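As a concrete sketch of that tuning (the local path is a placeholder; note that in rclone of this era, multi-thread transfers only apply when the destination is a local filesystem), the command might look like this. The snippet only assembles and prints the command rather than invoking rclone:

```shell
# Sketch: tuning multi-thread downloads for jobs with many files above 250MB.
# /local/dest is a placeholder path; rclone is not actually invoked here.
CMD='rclone copy sourceOD:source_dir /local/dest --progress --stats 5s --multi-thread-cutoff 250M --multi-thread-streams 2'
echo "$CMD"
```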

In the above I have assumed that v1.55.1 performed well for a while and now all of a sudden it doesn’t. If your issue started immediately after an upgrade to v1.55.1, then it would help to know which version you upgraded from - and whether a downgrade to the previous version helps.


Thanks @Ole that's really helpful and clear advice. Thanks for testing on your side too.

Will leave it well past the 24 hours, use your suggested flags, and ignore the others. It did raise one question, though, regarding flags like --checkers=16, which I always use for Google Drive destinations. Is there a best practice for setting flags so that a flag applies only to the source or the destination, e.g. --checkers only for the Google Drive side? Does that make sense?

Thanks! :blush:

I guess you will be able to use --checkers=16 all the time; I use it almost all the time (using the environment variable RCLONE_CHECKERS=16).
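For reference, rclone flags can be set via RCLONE_-prefixed environment variables, so the checkers setting above can be made persistent for a shell session like this:

```shell
# Export once per session; every subsequent rclone invocation picks it up,
# equivalent to passing --checkers=16 on each command line.
export RCLONE_CHECKERS=16
echo "RCLONE_CHECKERS=$RCLONE_CHECKERS"
```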

The only important exception is for rclone check --download where the downloads are performed by the checkers. In this situation I use --checkers=8 --multi-thread-streams=2, that gives roughly 4 concurrent downloads/transfers per remote (with my data).

--checkers and --transfers modify/tune the subcommand (copy, sync, ...); they cannot be set for each individual remote. This applies to all the non-backend flags. I set my backend flags in the config file with the rclone config command (set and forget).
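For example, the Google Drive backend flags used earlier in the thread (--drive-chunk-size / --drive-upload-cutoff) can be stored per-remote in rclone.conf instead; the remote name below is illustrative:

```ini
# Hypothetical rclone.conf entry persisting the backend flags
# (equivalent to --drive-chunk-size=16M --drive-upload-cutoff=16M).
[destGD]
type = drive
chunk_size = 16M
upload_cutoff = 16M
```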

@ncw noted in Print status of file-hashing · Issue #2749 · rclone/rclone · GitHub that this is a known wart awaiting a fix.

Thanks for your help @Ole!

Decided to leave things alone well past the 1.26 days. Started running the command rclone --ignore-existing --checkers=16 copy -P with no other flags as you suggested, and it's going great now :slight_smile:
Many mixed file sizes but the current rate is showing ~33 Mbyte/s with no warnings as yet.

Will add the cloud-specific backend config entries afterwards; really appreciate the info!


Hey again @Ole

All done. Copied over 1TB to a Google Workspace account. It seemed that no matter what I did, OneDrive was sensitive. What I found was:

  • When I left things overnight and started again using rclone --ignore-existing --transfers=4 --checkers=16 copy... , I would get ~80-90GB before seeing an exceeded threshold on the OneDrive side. Speed was, as you said, around 13-20 Mbyte/s.

  • I could then restart after a few hours using the previous flags, or dropping the transfers to 3; each time I would get speeds on large files of up to 30-50 Mbyte/s before it settled back to the previous rate. This would only last for about 20-30GB before I had to pause again for a few hours. If I didn't, I could see the execution threshold pause time growing by minutes.

  • Didn't really see any difference between the above flags and the other method (--checkers=8 --multi-thread-streams=2). In my past 3 days of migrations, moving data from a Google One based account to Google Workspace allowed a ridiculously high threshold compared to OneDrive.

  • Years ago, I found some university guy who shared his optimal settings for Google-to-Google transfers, and these work flawlessly for me every time. In truth, I don't fully understand why, but it was fast: 40-60 Mbyte/s without errors under -vv. Here are the flags I use for G-to-G data moves:
    rclone --ignore-existing --transfers=5 --checkers=16 --drive-chunk-size=16384k --drive-upload-cutoff=16384k copy -P....

Hope this helps someone but thanks for the amazing community and support guys!


If you are transferring files between two rclone remotes, check out

hey @asdffdsa

Yeah, spent wayyy too long studying those and trying to figure out what was optimal and required... i just run with what works well until something changes. Thanks for sharing!

Great :sweat_smile:

Thanks a lot for sharing your observations and learnings!

Based on your observations I will conclude that the rclone defaults are best for transfers up to 50-100 GB (or a few hours) - after that we see some additional OneDrive throttle limit that will slow down large (or long) transfers.

Challenge to myself: what is the quickest way to transfer 200GB of mixed data to/from OneDrive with the simplest change(s) to the rclone default parameters (independent of connection speed)?


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.