Sync from local to Sharepoint Online - activityLimitReached

What is the problem you are having with rclone?

Hey all!

I've been testing a setup to sync files from a local disk on a Windows machine to a SharePoint Online environment.
I first did this in a test environment, a test Office 365 tenant, to make sure the command would do what we wanted it to do.
Once the command was fine-tuned, I implemented it in the production environment, i.e. an actual SharePoint site on the production tenant. Here the copy/sync works OK, but after some time I get throttled. With the help of the -vv option I noticed the process gets throttled (which never happened in the test environment).

Can anyone give me some more insight into how the throttling is applied, and better yet, how to get past it?

I've been playing with the --tpslimit option; lowering it from 4 to 2 means the sync gets throttled after 150 MB instead of 80 MB 🙂

Thanks in advance for your feedback!

What is your rclone version (output from rclone version)

rclone version output:
rclone v1.56.1

  • os/version: Microsoft Windows Server 2012 R2 Standard (64 bit)
  • os/kernel: 6.3.9600.20069 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.16.8
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Source = folder on a Windows machine (locally available from that machine)
Destination = SharePoint Online (when creating a new remote in rclone, I used "26 Microsoft OneDrive")

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy d:\data\ REMOTE:Active ^
  --backup-dir REMOTE:Backup/%Day%-%Month%-%Year% ^
  --progress --bwlimit 2M --log-file=C:\RClone\logs\manual_sync_log.txt ^
  --transfers 2 --ignore-size --ignore-errors --ignore-checksum ^
  --create-empty-src-dirs --config %configfile% --checksum ^
  --onedrive-chunk-size 320k --fast-list --tpslimit 2 ^
  --user-agent "ISV|rclone.org|rclone/v1.56.1" -vv

The rclone config contents with secrets removed.

[REMOTE]
type = onedrive
client_id = ***
client_secret = ***
token = {"access_token":"***","token_type":"Bearer","refresh_token":"***","expiry":"2021-11-16T10:09:29.7570269+01:00"}
drive_id = ***
drive_type = documentLibrary

A log from the command with the -vv flag

2021/11/16 09:46:45 DEBUG : Data/file1.zip: Uploading segment 33423360/98548327 size 327680
2021/11/16 09:46:45 DEBUG : Data/file2.zip: Uploading segment 36700160/503654298 size 327680
2021/11/16 09:46:45 DEBUG : Data/file1.zip: Uploading segment 33751040/98548327 size 327680
2021/11/16 09:46:45 DEBUG : Too many requests. Trying again in 32 seconds.
2021/11/16 09:46:45 DEBUG : pacer: low level retry 1/10 (error activityLimitReached: throttledRequest: The request has been throttled)
2021/11/16 09:46:45 DEBUG : pacer: Rate limited, increasing sleep to 32s
2021/11/16 09:46:46 DEBUG : pacer: Reducing sleep to 24s
2021/11/16 09:46:46 DEBUG : Data/file1.zip: Uploading segment 34078720/98548327 size 327680
2021/11/16 09:46:47 DEBUG : pacer: Reducing sleep to 18s
2021/11/16 09:46:47 DEBUG : Data/file2.zip: Uploading segment 37027840/503654298 size 327680
2021/11/16 09:47:18 DEBUG : pacer: Reducing sleep to 13.5s
2021/11/16 09:47:18 DEBUG : Data/file1.zip: Uploading segment 34406400/98548327 size 327680

I think your options are to ask for more capacity from your administrator (I don't know whether this is possible or not!) or reduce --tpslimit some more.
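For example, a stripped-down sketch (1 transaction per second is just an illustrative value; in practice you would keep the rest of your flags):

rclone copy d:\data\ REMOTE:Active --tpslimit 1 --config %configfile% --progress -vv

This trades speed for fewer requests per second, so it mainly shows whether the throttling is rate-based.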

Search the forums for OneDrive throttling.

There are many topics and lots of feedback.

Check the whole OneDrive documentation. There's one part dedicated to this:

https://rclone.org/onedrive/#excessive-throttling-or-blocked-on-sharepoint

Also, Microsoft has a lot of reasons to throttle an account, including changes in usage patterns. So if you are just starting to use the account, the throttling might be normal, and it might take some time for Microsoft to raise its limits. Try the --user-agent flag to see if it helps.
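For what it's worth, the format suggested in that documentation is the decorated user agent you already have in your command (with the version string matching your rclone build):

--user-agent "ISV|rclone.org|rclone/v1.56.1"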

Hi FritVetBE,

The above are all good ideas, each based on slightly different interpretations of your current situation.

The point is that it is almost impossible to give good specific advice without knowing the characteristics of your data at both source and destination, other usage of your OneDrive, and prior activity over the past 24 hours.

Here are some additional things to be aware of:

  • Throttling happens at both instance and account level, so it may be triggered by other activities on your production system (e.g. heavy user activity)
  • Throttling seems to have a 24-hour memory, so it may also be caused by prior activity (e.g. prior load testing)
  • Creation/modification of many small files and folders will trigger throttling even at low transfer rates (MiB/s). This is typical of folders used for software development, e.g. my local Go and GitHub folders. This could be the situation you are seeing. Keep an eye on the number of transferred files and folders – not just the transfer speed in bytes.

I have tested OneDrive Personal throttling and found that throttling happens relatively quickly if too many requests get queued up at OneDrive. My best advice is therefore to reduce both --checkers and --transfers and leave the other tuning parameters at their defaults.

In your case I would try keeping --transfers=2, adding --checkers=4, and removing --bwlimit, --onedrive-chunk-size, --fast-list, --tpslimit and --user-agent, along the lines of the sketch below.
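A sketch of what that trimmed command could look like (same source, remote and backup dir as your original; only the tuning flags change):

rclone copy d:\data\ REMOTE:Active ^
  --backup-dir REMOTE:Backup/%Day%-%Month%-%Year% ^
  --progress --log-file=C:\RClone\logs\manual_sync_log.txt ^
  --transfers 2 --checkers 4 --ignore-size --ignore-errors --ignore-checksum ^
  --create-empty-src-dirs --config %configfile% --checksum -vv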

If this still throttles, then reduce to --checkers=2 --transfers=1

If this still throttles, then stop all activity for 24 hours and try again.

Hey all!

Thanks everyone for posting suggestions and more info to read up on.

At this moment I'm going with Ole's suggestion to modify the parameters of the command.
I'll keep you guys updated on this issue 🙂


hello and welcome to the forum.

  • one option to reduce the number of transactions is to use a flag such as --max-age, though there are some caveats (see the sketch after this list).

  • not sure of the logic of using both --checksum and --ignore-checksum in the same command
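to illustrate the --max-age idea, a sketch (24h is just an example window; keep your other flags as they are):

rclone copy d:\data\ REMOTE:Active --max-age 24h --config %configfile% -vv

the main caveat: --max-age filters on file modification time, so anything changed before the window is skipped entirely. it only makes sense for incremental runs after a full initial copy.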

Thanks for assisting me with this issue. You are right, it doesn't make any sense; not too sure how it slipped in. Anyway, I removed the --checksum parameter. Now rclone is able to sync around 200 files, up to 460 MB, before getting throttled, so this does seem relevant. Thank you!
