Migration from Google Cloud Storage to S3 bucket

Hey Team,

Happy Friday!

We want to migrate a bucket from gcs to s3 and this is the output from rclone copy.

Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 1896000
Elapsed time:       1m0.0s

2025/07/04 08:46:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 3840000
Elapsed time:       2m0.0s

2025/07/04 08:47:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 5749000
Elapsed time:       3m0.0s

2025/07/04 08:48:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 7587000
Elapsed time:       4m0.0s

2025/07/04 08:49:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 9487000
Elapsed time:       5m0.0s

2025/07/04 08:50:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 11380000
Elapsed time:       6m0.0s

2025/07/04 08:51:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 13236000
Elapsed time:       7m0.0s

Do you know why it lists everything first instead of directly transferring / checking the files?

There are around 250 TB of data to migrate.

Thanks,
Mihnea

You do not provide many details, so it is hard to give you specific advice, but if you have many objects it can take some time before transfers start. For millions of objects it can easily take hours. Object size is irrelevant for checking.

If all objects are in one directory, they all have to be listed first. Your transfer lists about 200k objects per minute, so you can easily work out the worst case.

Make sure you are using the latest rclone version, and give it time.
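
If you want to estimate the listing time up front, one option is to count the objects first. A minimal sketch, assuming a GCS remote named gcs: is already configured; the bucket name here is only illustrative:

rclone size gcs:source-bucket --fast-list   # prints total object count and total size

Dividing the object count by the listing rate you observe in the logs gives a rough lower bound on how long the listing phase will take.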

Thank you @kapitainsky.

There are millions of objects (not directories, which I think is good).

This is the command that I'm using:

rclone -vvv --transfers 192 --log-file=/data/rclone-output.txt --fast-list copy gcs:gcp-bucket s3:aws-bucket

Try increasing --checkers as well. The default is 8.
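
For example, a sketch reusing your command; the --checkers value is only an illustrative starting point to experiment with, not a recommendation tuned to your setup:

rclone -vvv --transfers 192 --checkers 64 --fast-list --log-file=/data/rclone-output.txt copy gcs:gcp-bucket s3:aws-bucket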

What value do you recommend? I'm using this instance type: c6i.8xlarge.

Also, that is the thing: from what I see in the logs, rclone is not checking the files, only listing them.

2025/07/04 09:16:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 56755000
Elapsed time:      32m0.0s

2025/07/04 09:17:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 58474000
Elapsed time:      33m0.0s

2025/07/04 09:18:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 60242000
Elapsed time:      34m0.0s

2025/07/04 09:19:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 61942000
Elapsed time:      35m0.0s

2025/07/04 09:20:29 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 63636000
Elapsed time:      36m0.0s

Actually it is about 2 million per minute.

Re checkers, I would try increasing them as long as you see gains. Too many can be counterproductive. The optimal value depends on many factors, like your machine and network.

If they are all in one directory, they all have to be listed before any transfer starts.
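
As a rough worked example (the object count is an assumed figure, just for illustration): at about 2 million objects listed per minute, a bucket with 120 million objects needs 120,000,000 / 2,000,000 = 60 minutes of listing before the first byte is transferred.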

Got it, thank you @kapitainsky.