Is that the whole logfile? It's only a few seconds.
Nothing else is logged. That is the full log.
Is it using any CPU / doing anything on the system? I'd kill it and restart, as it seems like something is not running.
The command should start quickly: normally with S3, rclone has to calculate the hash before the upload starts, but you have disabled that.
perhaps update to latest stable
I did that about 5 times on 3 different systems.
Thanks, I will try again after the upgrade. It worked very well until last week. We have copied millions of files.
What is the size of the files in the bucket? Is it one big thing?
there are around 35 million files in source of various sizes, ranging from 15KB to 50MB.
So one directory with 35 million files?
if the source and dest are both s3, then, imho, you should use
--checksum, as per the docs:
"If the source and destination are both S3 this is the recommended flag to use for maximum efficiency."
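For reference, an S3-to-S3 sync with that flag might look like this (the remote and bucket names below are placeholders, not from the original post):

```shell
# --checksum compares the stored MD5 checksums on both sides instead of
# size + modtime, which is what the rclone docs recommend when source
# and destination are both S3.
# src-s3: and dst-s3: are hypothetical configured remotes.
rclone sync src-s3:source-bucket dst-s3:dest-bucket --checksum
```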
The OP is probably not even that far yet.
With AWS S3, you get 1000 objects per list chunk, per @ncw's other post.
Assuming 1 second per 1000 objects (I'm not sure how valid that is), it would take about 9.7 hours to get a listing of 35 million files.
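That back-of-envelope estimate can be checked directly (assuming 1000 objects per LIST page and roughly one page per second, as above):

```shell
# rough estimate: 35M objects, 1000 per LIST page, ~1 page per second
awk 'BEGIN {
  pages = 35000000 / 1000          # number of LIST requests needed
  printf "%d pages, ~%.1f hours\n", pages, pages / 3600
}'
```

which prints `35000 pages, ~9.7 hours`.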
not an exact comparison, but for a set of 1,000,000 files,
rclone sync /local wasabi: --dry-run took just 26 seconds
It's not even in the same ballpark.
The OP has 35 times the files in the same directory, and I don't think listing scales linearly; it gets slower as the listing builds.
Well, you asked, "So one directory with 35 million files?" and the OP has not answered yet.
just pointing out that scanning 1,000,000 files, on s3, can take just 26 seconds.
maybe aws s3 is super slow compared to wasabi.
Correct, one directory with 35 million files.
The command ran for more than 7 hours, staying at 0 / 0 Bytes transferred. Should I wait longer than that for the transfer to start?
To get a deeper look into what is going on, add:
--dump=headers --retries=1 --low-level-retries=1 --log-level=DEBUG --log-file=rclone.log
And did you see my suggestion, as per the rclone docs, to use --checksum when the source and dest are both s3?
yes, thank you!
I will re-try with the suggested flags, including checksum and post the results here.
And if you have the time, out of curiosity, let it run for 10-12 hours.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.