I have been trying for a few days now to copy millions of files from Rackspace CloudFiles storage into AWS S3. One of the containers I am currently trying to copy from has around 10 million files in it.
I started running a sync 2 hours ago and it is still at 0 bytes transferred. Is there a way to speed this process up, or is something wrong? I have checked that both the source and destination are reachable by copying a small directory from the source to the destination, which worked fine. Is this a limitation related to the number of files?
This is being run on an AWS EC2 Linux instance, with rclone version 1.55.1.
Trying to migrate from Rackspace CloudFiles to Amazon S3.
The command run is: rclone sync -i -P -vv --fast-list rackspace:src-dir remote:dest-dir
Try adding: --dump=bodies --retries=1 --low-level-retries=1 --log-level=DEBUG --log-file rclone.log
Run the command for a couple of minutes.
Kill the command.
Post the rclone.log file.
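Putting the steps above together, the full debug invocation might look like this (the remote names `rackspace:` and `remote:` and the directory names are taken from the original command, not verified here):

```shell
# Debug run: dump HTTP request/response bodies, disable retries so
# failures surface immediately, and write verbose output to rclone.log.
rclone sync -i -P -vv --fast-list \
  --dump=bodies \
  --retries=1 \
  --low-level-retries=1 \
  --log-level=DEBUG \
  --log-file rclone.log \
  rackspace:src-dir remote:dest-dir
```

Let it run for a couple of minutes, interrupt it with Ctrl-C, and attach the resulting rclone.log.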
Quick question before I post the log (it's quite large): does rclone first need to read through all of the files to compile a list / get information on progress?
I attempted this on a smaller dataset and it seemed to work perfectly fine; could it be that rclone is still listing the 10,000,000 files?
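One thing worth noting about that theory: with --fast-list, rclone buffers the whole listing in memory before transfers start, and the rclone docs estimate very roughly 1 KiB of memory per object. A back-of-envelope check (the 1 KiB/object figure is the docs' approximation, not measured here):

```shell
# Rough memory estimate for --fast-list with 10 million objects,
# assuming ~1 KiB per object as per the rclone docs.
objects=10000000
echo "approx $((objects / 1024 / 1024)) GiB of listing memory"
```

So a listing pass over a 10-million-object container is both slow and memory-hungry, which would be consistent with seeing 0 bytes transferred for a long time.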