Migrating a big S3 bucket using rclone

What is the problem you are having with rclone?

My S3 bucket is pretty big, around 1.2 TB, and I'm trying to move it to another S3 provider. Around 966 GB has been transferred, but the process got interrupted. Now when I run it again, it keeps checking the objects that are already there, and there are more than 4 million objects in the bucket.

So how can I make this checking process much faster? I tried increasing checkers to 200 and also tried matching on size only, but with no success.

What is your rclone version (output from rclone version)

rclone v1.52.1

  • os/arch: linux/amd64
  • go version: go1.14.4

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Debian

Which cloud storage system are you using? (eg Google Drive)

Wasabi to Scaleway

The command you were trying to run (eg rclone copy /tmp remote:tmp)

 rclone sync Wasabi:bucket name Scaleway:bucket name --progress --transfers 200 --checkers 200

Use --checksum - that will be much quicker as it doesn't have to read the modtime for each individual object.

If you've got enough memory then --fast-list will speed things up too.
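Putting those two suggestions together, the command might look something like the following (a sketch only; the bucket names are the placeholders from the original command, and --fast-list assumes there is enough free memory to hold the whole listing):

rclone sync Wasabi:bucket name Scaleway:bucket name --progress --transfers 200 --checkers 200 --checksum --fast-list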


So should I only use the --checksum flag and remove the others? And can I increase the parallel processing as I'm doing with checkers?

And how much memory are you talking about? I have around 2 GB of memory available out of 6 GB.

Just add --checksum to the command line you have.

You need about 1 KB per object, so 4 million objects will be ~4 GB, so you don't have enough memory I'd say.
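As a rough sketch of that estimate (assuming roughly 1 KB of listing metadata held in memory per object when --fast-list is used):

4,000,000 objects × ~1 KB/object ≈ 4 GB of RAM needed, versus the ~2 GB currently free.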


I used --checksum but it was slower than just raising checkers, and it didn't seem to run checks in parallel; in 12 hours it only checked around 700k objects, whereas with more checkers it easily crosses 1 million.

You need --checksum and --checkers for best speed

rclone sync Wasabi:bucket name Scaleway:bucket name --progress --transfers 200 --checkers 200 --checksum

I'm using this command, but it seems like it's still using the default number of parallel processes.

That might be too many checkers and transfers. Try 32 for each. Also try --fast-list - you might have enough memory...
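Following that suggestion, the command would be something like this (again a sketch, keeping the poster's placeholder bucket names, with --fast-list only if the memory estimate above works out):

rclone sync Wasabi:bucket name Scaleway:bucket name --progress --transfers 32 --checkers 32 --checksum --fast-list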


Will try that if the process gets interrupted. I've crossed 1.5 million checks; I hope it reaches near 3 million by tomorrow.

