The sync task takes 24 hours to run from an AWS S3 bucket to Backblaze B2 Cloud Storage. The source S3 bucket is 4TB in size and contains a very large number of small files (approx 16 million). Are there any additional flags or strategies to speed up the sync so it does not take 24 hours?
Running on a dedicated EC2 m5.large instance (2 vCPUs and 8GB of memory), so there are plenty of resources to throw at it.
What is your rclone version (output from rclone version)
rclone v1.56.1
os/version: ubuntu 18.04 (64 bit)
os/kernel: 5.4.0-1056-aws (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.16.8
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
AWS S3 and Backblaze B2 Cloud Storage
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Are you using the native b2 protocol or using their s3 gateway?
For this case I'd use the s3 gateway because then they have compatible checksums (both MD5).
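If you go the s3 gateway route, the destination remote needs to be configured as an S3 remote pointing at Backblaze's S3-compatible endpoint rather than as a native b2 remote. A sketch of what that might look like in rclone.conf (the remote name `b2s3`, the endpoint region, and the placeholder keys are all assumptions — use the endpoint shown for your bucket in the Backblaze console):

```ini
# Hypothetical remote "b2s3" talking to Backblaze B2 via its S3-compatible API.
# Replace the endpoint with the one listed for your bucket, and fill in
# your application key ID/secret.
[b2s3]
type = s3
provider = Other
access_key_id = YOUR_KEY_ID
secret_access_key = YOUR_APPLICATION_KEY
endpoint = s3.us-west-002.backblazeb2.com
```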
Assuming s3 -> s3, then use the --checksum flag - this will speed things up greatly. If you are using s3 -> b2 then use --size-only, which isn't a perfect check but will speed things up by a similar amount.
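As a concrete sketch of the two cases (the remote names `aws:`, `b2s3:`, `b2:` and the bucket names are placeholders, not your actual config):

```shell
# S3 -> S3 (Backblaze's s3 gateway): both sides expose MD5, so compare
# checksums instead of modtime + size, which avoids a metadata read
# per object.
rclone sync aws:source-bucket b2s3:dest-bucket --checksum

# S3 -> native b2: the hashes aren't directly comparable, so fall back
# to comparing sizes only. A file that changed without changing size
# would be missed, hence "not a perfect check".
rclone sync aws:source-bucket b2:dest-bucket --size-only
```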
Assuming you've got enough memory, then also use --fast-list. This buffers the entire object listing in memory first, which for 16 million objects will take quite a few GB of memory, but it cuts the number of listing API calls considerably. That will speed things up too.
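Putting it together, with many small files it can also help to raise the concurrency flags. A possible invocation (remote and bucket names are placeholders, and the --transfers/--checkers values are a starting point to experiment with, not tuned numbers):

```shell
# --fast-list: fewer listing calls at the cost of RAM.
# --transfers / --checkers: more parallel small-file copies and checks
# than the defaults (4 and 8), which usually matters more than bandwidth
# for millions of small objects.
rclone sync aws:source-bucket b2s3:dest-bucket \
  --checksum --fast-list \
  --transfers 32 --checkers 32 \
  --progress
```

Watch memory usage on the 8GB instance the first time — if --fast-list pushes it too high, drop it and keep the higher concurrency.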