What is the problem you are having with rclone?
When performing a sizeable incremental sync, the synchronization slows over time. If I stop the sync and start it again, it immediately runs fast again.
It appears to be due to automatic throttling.
What I can't explain is why it is immediately fast again for a while after I stop the first execution and start a new one. Is the destination cloud service more permissive at first and then throttles more aggressively over time?
Run the command 'rclone version' and share the full output of the command.
rclone v1.63.1
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-78-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.6
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
I have seen this error on different combinations of source and destination cloud storage, the current issue is being encountered with sync from Microsoft OneDrive -> IDrive e2
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone sync --cutoff-mode soft --log-level INFO --log-file ${logfile} OneDrive:/ IDrive:/
The rclone config contents with secrets removed.
[OneDrive]
type = onedrive
token = {"access_token":"-redacted-","token_type":"Bearer","refresh_token":"-redacted-","expiry":"2023-08-09T12:17:53.993506934-05:00"}
drive_id = -redacted-
drive_type = business
[IDrive]
type = s3
provider = IDrive
access_key_id = -redacted-
secret_access_key = -redacted-
acl = bucket-owner-full-control
endpoint = v3l8.da.idrivee2-34.com
A log from the command with the -vv flag
The log is extremely long; after only one minute there are over 8,000 lines.
The lines that seem most relevant are the following.
2023/08/09 11:49:00 DEBUG : Too many requests. Trying again in 322 seconds.
2023/08/09 11:49:00 DEBUG : pacer: low level retry 2/10 (error activityLimitReached: throttledRequest: The request has been throttled)
2023/08/09 11:49:00 DEBUG : pacer: Rate limited, increasing sleep to 5m22s
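The "increasing sleep to 5m22s" line above comes from rclone's pacer, which backs off when the remote returns throttle errors. As far as I can tell, that backoff state lives only in the running process, which would explain why a freshly started sync runs fast again for a while. A toy Python model of that style of backoff (my own sketch, not rclone's actual code; names and numbers are made up):

```python
class Pacer:
    """Toy model of a rate-limit pacer (not rclone's implementation).

    On a throttle error it raises its sleep (honouring a server-supplied
    retry-after hint if given, otherwise doubling); on success it decays
    back toward the minimum. The key point: the state is in-memory only,
    so a restarted process begins at full speed.
    """

    def __init__(self, min_sleep=0.01, max_sleep=600.0):
        self.min_sleep = min_sleep
        self.max_sleep = max_sleep
        self.sleep = min_sleep  # resets to the minimum on every restart

    def on_throttle(self, retry_after=None):
        # Back off: use the server hint if given, otherwise double.
        target = retry_after if retry_after is not None else self.sleep * 2
        self.sleep = min(max(target, self.sleep), self.max_sleep)

    def on_success(self):
        # Decay back toward full speed.
        self.sleep = max(self.sleep / 2, self.min_sleep)


pacer = Pacer()
pacer.on_throttle(retry_after=322.0)  # like the log line: sleep jumps to 322s (5m22s)
print(pacer.sleep)                    # 322.0

fresh = Pacer()                       # a restarted rclone process
print(fresh.sleep)                    # 0.01 -> starts at full speed again
```

If this model is roughly right, the slowdown isn't the remote becoming stricter over time so much as the client remembering how hard it was throttled.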
What seems odd is that the -vv logging doesn't indicate whether the 'too many requests' is coming from the source or the destination. I'm assuming the destination, because there is a backlog of files to be copied and it is the copying that has halted.
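To check whether the throttling actually ramps up over the run (rather than being constant from the start), I could bucket the pacer retry lines by minute. A quick sketch (my own helper, assuming the DEBUG log format shown above):

```python
import re
from collections import Counter

# Matches lines like:
#   2023/08/09 11:49:00 DEBUG : pacer: low level retry 2/10 (...)
# capturing the timestamp truncated to the minute.
PACER_RETRY = re.compile(
    r"^(\d{4}/\d{2}/\d{2} \d{2}:\d{2}):\d{2} DEBUG : pacer: low level retry"
)


def retries_per_minute(lines):
    """Count pacer low-level retries per minute of the log."""
    counts = Counter()
    for line in lines:
        m = PACER_RETRY.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts


sample = [
    "2023/08/09 11:49:00 DEBUG : pacer: low level retry 2/10 (error activityLimitReached: throttledRequest: The request has been throttled)",
    "2023/08/09 11:49:00 DEBUG : pacer: Rate limited, increasing sleep to 5m22s",
    "2023/08/09 11:50:12 DEBUG : pacer: low level retry 3/10 (error activityLimitReached: throttledRequest: The request has been throttled)",
]
print(retries_per_minute(sample))
```

A steadily rising count per minute would support the "throttling increases over time" theory; a flat count would point elsewhere.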