Does rclone get slower as the destination directory gets larger?

I’m using a combination of rclone and incrontab to sync files that get uploaded to an FTP server to an S3 bucket.

I’ve got two S3 buckets that I’m syncing to, one of which is significantly larger than the other.

A move command targeting the larger bucket takes 13-15 seconds while the same command targeting the smaller bucket takes less than a second. What is rclone doing that’s making this happen? Is it inspecting all the files that are in the destination directory prior to the move? Is there any way to skip this?

Large bucket (25,555 files, ~480 MB):

ubuntu@ip-172-31-92-82:~$ rclone move -vv /home/c/ftp/files c:bh-images-c
2018/08/01 01:28:43 DEBUG : rclone: Version "v1.42" starting with parameters ["rclone" "move" "-vv" "/home/c/ftp/files" "c:bh-images-c"]
2018/08/01 01:28:43 DEBUG : Using config file from "/home/ubuntu/.config/rclone/rclone.conf"
2018/08/01 01:28:56 INFO  : S3 bucket bh-images-c: Waiting for checks to finish
2018/08/01 01:28:56 INFO  : S3 bucket bh-images-c: Waiting for transfers to finish
2018/08/01 01:28:56 INFO  : 
Transferred:      0 Bytes (0 Bytes/s)
Errors:                 0
Checks:                 0
Transferred:            0
Elapsed time:       13.5s

2018/08/01 01:28:56 DEBUG : 6 go routines active
2018/08/01 01:28:56 DEBUG : rclone: Version "v1.42" finishing with parameters ["rclone" "move" "-vv" "/home/c/ftp/files" "c:bh-images-c"]

Small bucket (339 files, ~90 MB):

ubuntu@ip-172-31-92-82:~$ rclone move -vv /home/k/ftp/files k:bh-images-k
2018/08/01 01:29:10 DEBUG : rclone: Version "v1.42" starting with parameters ["rclone" "move" "-vv" "/home/k/ftp/files" "k:bh-images-k"]
2018/08/01 01:29:10 DEBUG : Using config file from "/home/ubuntu/.config/rclone/rclone.conf"
2018/08/01 01:29:10 INFO  : S3 bucket bh-images-k: Waiting for checks to finish
2018/08/01 01:29:10 INFO  : S3 bucket bh-images-k: Waiting for transfers to finish
2018/08/01 01:29:10 INFO  : 
Transferred:      0 Bytes (0 Bytes/s)
Errors:                 0
Checks:                 0
Transferred:            0
Elapsed time:       200ms

2018/08/01 01:29:10 DEBUG : 6 go routines active
2018/08/01 01:29:10 DEBUG : rclone: Version "v1.42" finishing with parameters ["rclone" "move" "-vv" "/home/k/ftp/files" "k:bh-images-k"]

rclone will be listing every directory that exists in the source on the destination as well, so it can check whether the source files already exist there. Depending on your directory structure and what you are transferring, that can add up to a lot of listing calls.

You could try --fast-list. It uses more memory, but it does the listing in a single recursive S3 listing pass at the start instead of listing each directory individually.
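For example, applied to your own command (same paths and remote as in your logs; this is just the flag added, not a different workflow):

```shell
# List the whole bucket in one recursive pass up front (--fast-list),
# trading extra memory for far fewer listing requests on a large bucket.
rclone move --fast-list -vv /home/c/ftp/files c:bh-images-c
```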

If you are updating existing files, it might also be worth using --size-only or --checksum, since on S3 reading the modification time takes an extra HTTP transaction per object.
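A sketch of both variants, again reusing the command from your logs (pick one flag; they change how rclone decides whether a destination file is already up to date):

```shell
# Compare files by size only, skipping the per-object request
# that fetching S3 modification times would otherwise need.
rclone move --size-only -vv /home/c/ftp/files c:bh-images-c

# Or compare by checksum (MD5 on S3) instead of modtime — stricter
# than --size-only, still avoids reading modification times.
rclone move --checksum -vv /home/c/ftp/files c:bh-images-c
```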