Rclone sync stuck or too slow on 5 million files

What is the problem you are having with rclone?

I am trying to sync a local directory with about 5 million files to remote S3 storage.

When I run rclone sync it starts something under the hood and the elapsed time increases, but there is no ETA and no transfers; it may run for 8 hours with nothing happening. I tried increasing --checkers and decreasing --max-backlog, but it doesn't help.

Run the command 'rclone version' and share the full output of the command.

rclone v1.71.1

  • os/version: rocky 8.9 (64 bit)
  • os/kernel: 4.18.0-513.18.1.el8_9.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.25.1
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Minio S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync -vv --no-check-certificate --config /home/user/.rclone.conf --log-file /home/user/migrations/rclone.log --retries 1 --s3-upload-cutoff 500Mi --s3-upload-concurrency 6 --s3-chunk-size 16Mi --s3-max-upload-parts 40000 --progress --transfers 20 --checkers 20 --max-backlog 10000 --ignore-errors --fast-list --copy-links . s3-prod:nextcloud-bucket-0/

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[s3-prod]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
endpoint = https://s3-prod.domain.ltd:9000

### Double check the config for sensitive info before posting publicly



A log from the command that you were trying to run with the -vv flag

2025/12/05 09:27:25 DEBUG : rclone: Version "v1.71.1" starting with parameters ["/home/user/rclone" "sync" "-vv" "--no-check-certificate" "--config" "/home/user/.rclone.conf" "--log-file" "/home/user/migrations/rclone.log" "--retries" "1" "--s3-upload-cutoff" "500Mi" "--s3-upload-concurrency" "6" "--s3-chunk-size" "16Mi" "--s3-max-upload-parts" "40000" "--progress" "--transfers" "20" "--checkers" "20" "--max-backlog" "10000" "--ignore-errors" "--fast-list" "--copy-links" "." "s3-prod:nextclouddisk-bucket-0/"]
2025/12/05 09:27:25 DEBUG : Creating backend with remote "."
2025/12/05 09:27:25 DEBUG : Using config file from "/home/user/.rclone.conf"
2025/12/05 09:27:25 DEBUG : local: detected overridden config - adding "{12rtk}" suffix to name
2025/12/05 09:27:25 DEBUG : fs cache: renaming cache item "." to be canonical "local{12rtk}:/gl-bricks/nextclouddisk-vol/brick0/nextcloud-data/migrations/04.12"
2025/12/05 09:27:25 DEBUG : Creating backend with remote "s3-prod:nextclouddisk-bucket-0/"
2025/12/05 09:27:25 DEBUG : s3-prod: detected overridden config - adding "{30OaO}" suffix to name
2025/12/05 09:27:25 DEBUG : fs cache: renaming cache item "s3-prod:nextclouddisk-bucket-0/" to be canonical "s3-prod{30OaO}:nextclouddisk-bucket-0"

Try updating to the latest version. I doubt it will make a difference, but who knows.
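For a plain binary install, rclone can usually update itself (this assumes it was not installed through a package manager, in which case use that instead):

rclone selfupdate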

Based on your description, rclone is reading the list of files from your S3 remote, and that takes time for 5 million objects. Listing is relatively fast with providers like AWS and a fast Internet connection, but in your case who knows what the limits of this Minio instance are. rclone itself should not have any issues even with much larger sets; 5M objects is big, but nothing unusual.
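If you want to measure how long the listing alone takes, you can time a plain scan of the bucket (a minimal sketch; the bucket and config paths are taken from your command and may need adjusting):

# Counts objects and total size by listing the whole bucket - no transfers happen.
time rclone size s3-prod:nextcloud-bucket-0 --config /home/user/.rclone.conf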

To see in more detail what is going on (and how fast), add the --dump headers,responses flag. You should see that rclone is not “frozen” but is doing its job as fast as your system allows. Or it will give us some clues why it is so slow.
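For example (a sketch based on your command; the dump output is very verbose, so writing it to a separate log file is sensible):

rclone sync -vv --dump headers,responses --config /home/user/.rclone.conf --log-file /home/user/migrations/rclone-dump.log . s3-prod:nextcloud-bucket-0/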

Welcome to the forum.

This has been discussed a number of times here before:

https://forum.rclone.org/t/recommendations-for-using-rclone-with-a-minio-10m-files/14472/3

https://forum.rclone.org/t/how-to-sync-s3-with-millions-of-files-at-root/36703/4
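One approach that comes up for trees this size (a hypothetical sketch, assuming the data splits cleanly at the top level) is to sync each top-level directory as its own job, so neither side has to list all 5 million objects in one pass:

# Sync each top-level directory separately; paths and flags reuse the
# ones from the command above and may need adjusting for your layout.
for dir in */; do
  rclone sync "./$dir" "s3-prod:nextcloud-bucket-0/$dir" \
    --config /home/user/.rclone.conf --transfers 20 --checkers 20
done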
