Preparing the transfer is slow

My directory contains tens of millions of small files. It took 20 minutes before the synchronization even started, and I did not use the --check-first option. It seems the traversal itself took a very long time.

rclone config:
[oss]
type = s3
provider = Alibaba
access_key_id = xxx
secret_access_key = xxx
endpoint = oss-ap-southeast-1.aliyuncs.com
#acl = private
storage_class = STANDARD
#bucket_acl = public-read
#upload_cutoff = 1Ki

[s3]
type = s3
provider = AWS
access_key_id = xx
secret_access_key = xxx
region = ap-southeast-1
endpoint = s3.ap-southeast-1.amazonaws.com
#acl = private
storage_class = STANDARD

rclone command:

rclone copy oss:xxx s3:xxx --transfers 128 --checkers 256 --fast-list --size-only --use-server-modtime --buffer-size 1M --s3-upload-concurrency 32 --s3-chunk-size 1M --log-level INFO --progress
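To double-check that the traversal itself is the slow part, I assume timing the listing step on its own would show it (same bucket placeholder as in the command above):

time rclone lsf --files-only -R oss:xxx | wc -l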

Make sure you are using the latest version of rclone.
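You can check which version you are running with:

rclone version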


Have you read this? How to sync S3 with millions of files at root

The strange thing is that in another scenario, when my bucket held about 500M objects, the job started very quickly. I didn't add any extra parameters, and there was no stage where it waited to scan object metadata.

Do you mean that I should first traverse the directory in this way, and then proceed with the transfer?
rclone lsf --files-only -R source:bucket | sort > source-sorted
rclone lsf --files-only -R dest:bucket | sort > dest-sorted
comm -23 source-sorted dest-sorted > to-transfer
comm -13 source-sorted dest-sorted > to-delete
rclone copy --files-from to-transfer --no-traverse source:bucket dest:bucket
rclone delete --files-from to-delete --no-traverse dest:bucket
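Since the two listings are independent of each other, I assume they could also be run concurrently to cut the listing wall-clock time roughly in half:

# list source and dest in parallel, then wait for both to finish
rclone lsf --files-only -R source:bucket | sort > source-sorted &
rclone lsf --files-only -R dest:bucket | sort > dest-sorted &
wait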

Just follow the instructions in the link I shared.

rclone v1.70+ has essentially this logic built in.