What should I do when rate limited?

I'm trying to sync two separate Team Drives owned by different organizations and different accounts. I got it to do server-side copies, but the source is 3.3TB, so I can't do it all in one day due to the daily upload limit of 750GB.

So the question is, when it hits 750GB, should I Ctrl + C it and run the same command again 24 hours later, or should I let the command run and let rclone keep retrying until 24 hours later?

If I stop it and restart it tomorrow, will rclone resync the 750GB that was already transferred today (and thus never complete the whole copy, because it keeps copying the same 750GB over and over again)?

The two configs are both crypted, but I'm rclone sync-ing the underlying remotes (source: gdrive -> gcache -> gcrypt; destination: mdrive -> mcache -> mcrypt; command: rclone sync gdrive: mdrive:).

The sync command, each time it is run, will NOT keep copying the same 750GB over and over again.

"Sync the source to the destination, changing the destination only. Doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary."
"Since this can cause data loss, test first with the --dry-run"

Make sure you are using the latest stable version of rclone.

Check these:

  1. https://rclone.org/drive/#drive-server-side-across-configs
  2. https://rclone.org/drive/#drive-stop-on-upload-limit
  3. https://rclone.org/docs/#n-dry-run

Can you share the command you are using?

My command: rclone sync gdrive: mdrive: -P -vv --drive-server-side-across-configs=true --tpslimit=10 --bwlimit=8.5M

I suggest that you read the links I shared with you and tweak your command accordingly.

Write a simple batch file:

  1. rclone sync gdrive: mdrive: -P -vv --drive-server-side-across-configs=true --tpslimit=10 --drive-stop-on-upload-limit=true
  2. sleep for a period of time
  3. goto step 1.
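The steps above can be sketched as a shell script. Remote names and flags are taken from the command in this thread; the function name and the wait time are placeholders you would adapt:

```shell
#!/bin/bash
# Hypothetical retry loop: keep running the sync until it finishes cleanly.
# --drive-stop-on-upload-limit makes rclone exit with a non-zero status when
# it hits the 750GB/day quota, so the exit code can drive the loop.
sync_until_done() {
    local wait_secs="${1:-86400}"   # default: wait 24 hours between attempts
    until rclone sync gdrive: mdrive: -P -vv \
            --drive-server-side-across-configs=true \
            --tpslimit=10 \
            --drive-stop-on-upload-limit; do
        echo "sync stopped (quota or error); retrying in ${wait_secs}s"
        sleep "${wait_secs}"
    done
    echo "sync completed"
}
```

Call `sync_until_done` from the script (or schedule the whole thing with cron / Task Scheduler) and it will loop until a run ends without errors.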

@thestigma, ...

Use --drive-stop-on-upload-limit to make the script exit once it has hit the limit.
Then just run that script a few times a day (can easily be scheduled to be automatic).
It would work to let rclone just run (if you ran it again at the end to clean up any errored files), but it would unnecessarily hammer the server for most of the day after reaching the limit.

This is the best way to do it. It means you don't have to decrypt and re-encrypt, and it also means you can server-side transfer the data if you enable --drive-server-side-across-configs (or put server_side_across_configs = true in the Gdrive config in rclone.conf).
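For reference, the config-file equivalent would look something like this (only the relevant line is shown; the rest of the [gdrive] section stays as it already is in your rclone.conf):

```ini
[gdrive]
type = drive
# ...existing client_id, token, etc. unchanged...
server_side_across_configs = true
```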

No. rclone checks everything and only transfers a file if it is missing on the destination or the destination has an older version of it (unless you force it to do something else with flags).

Yup... exactly this. It will be super fast and efficient, but also be nice to the server.
