Transfer optimization tips (Google Drive -> S3)

What is the problem you are having with rclone?

My transfers are working, but the dataset is huge: currently about 500TB. I'm looking for tips and suggestions on ways to speed them up.

Currently using:
rclone move GoogleDrive:/ Wasabi:/ --progress

I'm setting up direct peering with both providers for better network throughput. Would more memory help? More cores? Removing --progress?

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.0

  • os/version: ubuntu 22.04 (64 bit)

  • os/kernel: 5.15.0-84-generic (x86_64)

  • os/type: linux

  • os/arch: amd64

  • go/version: go1.21.1

  • go/linking: static

  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

  rclone move GoogleDrive:/ Wasabi:/ --progress

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

type = drive
client_id = XXX
client_secret = XXX
scope = drive
token = XXX
team_drive = 

type = s3
provider = Wasabi
access_key_id = XXX
secret_access_key = XXX
endpoint =
acl = authenticated-read
### Double check the config for sensitive info before posting publicly

welcome to the forum,

Odds are the limit is network speed, and I believe there is an undocumented Google Drive hard limit of about 10TiB/day.
What is the output of a speedtest, and what speeds are you getting with rclone?
And what is the mix of file sizes: mostly large, mostly small, ...?

You can tweak --transfers, --checkers, and the chunk size; all of these use more machine resources.
Also check out --no-check-dest, --no-traverse, and --checksum.
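As a hedged starting point combining those flags, something like the sketch below; the numeric values are illustrative assumptions to benchmark on your own machine and link, not recommendations from this thread:

```shell
# Sketch only: the numbers below are assumptions to tune, not measured values.
#   --transfers / --checkers : more parallelism, at the cost of more RAM and CPU
#   --s3-chunk-size / --s3-upload-concurrency : faster multipart uploads of large files
#   --no-traverse : avoid listing the whole destination up front
# (--no-check-dest would skip destination checks entirely, but re-uploads
#  everything on a retry, so test it carefully before using it on a move.)
rclone move GoogleDrive:/ Wasabi:/ \
  --transfers 16 --checkers 32 \
  --s3-chunk-size 64M --s3-upload-concurrency 8 \
  --no-traverse \
  --progress
```

Benchmark each change against a small subset of the data first; raising --transfers past the point where the network is saturated only burns memory.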

Do what asdffdsa suggests.

But a question: would it not be cheaper to move these 500TB to Backblaze B2 or iDrive E2?

Thank you for the guidance. Yes, other providers would be cheaper, but we are using the Media Asset Management features of Wasabi for our video archive.
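For planning purposes, the 10TiB/day figure mentioned earlier puts a floor on the total transfer time no matter how the flags are tuned. A quick back-of-envelope check (assuming 500TB means decimal terabytes; the daily cap is the unverified figure from the reply above):

```python
# Back-of-envelope: minimum days to move ~500 TB if Google Drive
# egress is capped at ~10 TiB/day (unverified figure from this thread).
total_bytes = 500 * 10**12   # 500 TB, decimal terabytes
daily_cap = 10 * 2**40       # 10 TiB/day in bytes
days = total_bytes / daily_cap
print(round(days, 1))        # prints 45.5
```

So even with a saturated link, the move would take about a month and a half at best.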

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.