My S3 bucket is pretty big, around 1.2 TB. I'm trying to move it to another S3 provider, and around 966 GB has already been transferred, but the process got interrupted. Now when I re-run it, rclone keeps re-checking the already-transferred objects, and there are more than 4 million objects in the bucket.
How can I make this checking process much faster? I tried increasing --checkers to 200 and also tried matching on size only, but with no success.
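For context, this is the kind of resume command I have been experimenting with; a sketch only, with the bucket names as placeholders, assuming that listing API calls are the bottleneck. `--fast-list` does the bucket listing with far fewer API calls (at the cost of holding the listing in memory), and `--size-only` treats any object with a matching size as already transferred:

```shell
# Sketch: resume the sync, comparing sizes only and using recursive listing.
# --size-only: skip modtime/checksum comparison for existing objects.
# --fast-list: fewer ListObjects calls, but uses noticeably more memory
#              with 4M+ objects.
rclone sync "Wasabi:bucket name" "Scaleway:bucket name" \
  --progress \
  --transfers 200 \
  --checkers 200 \
  --size-only \
  --fast-list
```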
What is your rclone version (output from rclone version)
rclone v1.52.1 - os/arch: linux/amd64
go version: go1.14.4
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Debian
Which cloud storage system are you using? (eg Google Drive)
Wasabi to Scaleway
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone sync "Wasabi:bucket name" "Scaleway:bucket name" --progress --transfers 200 --checkers 200
I also tried --checksum, but it is slower than the default check and doesn't seem to use parallel processing: in 12 hours it checked only around 700k objects, whereas the default checkers can easily pass 1 million.
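One idea I am considering is separating verification from the sync itself; a sketch only, with the same placeholder bucket names, using `rclone check --one-way` so that only objects present on the source but missing on the destination are reported, without transferring anything:

```shell
# Sketch: report objects on the source that are absent on the destination.
# --one-way:   only flag files missing on the destination.
# --size-only: compare by size instead of reading checksums.
rclone check "Wasabi:bucket name" "Scaleway:bucket name" \
  --one-way --size-only --fast-list --checkers 200
```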