I had the need to rclone some very large files over the last few days (250 GB or so) to both B2 and S3. It worked fine, I just had a probably obvious question:
I assume rclone is doing some kind of chunking/checksum calculation? On a rather beefy server it takes 45-60 minutes before the transfers begin.
Not sure if any of the chunk/buffer settings would be helpful in speeding up a transfer of this type?
For S3 it is calculating the MD5 sum of the file before upload.
You can disable this with the S3 flag:
--s3-disable-checksum   Don't store MD5 checksum with object metadata
If you do this there will be no record of the original hash stored on S3, so rclone won't be able to check the integrity of the file if you download it.
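To give a feel for why this takes so long: computing the MD5 means reading the entire file once before the upload even starts, so for a 250 GB file you are bound by sequential disk read speed for that whole pass. A rough illustrative sketch in Python (rclone itself is written in Go; the chunk size here is an arbitrary choice, not rclone's):

```python
import hashlib

def md5_of_file(path, chunk_size=8 * 1024 * 1024):
    """Stream a file through MD5 in fixed-size chunks.

    Illustrative only: this mimics the single full read pass a tool
    must make to know the hash before uploading.
    """
    h = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

At, say, 100 MB/s sequential read, a 250 GB pre-upload hash pass works out to around 40+ minutes on its own, which matches the delay you're seeing.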
Hi, just to revisit this: testing with a 140 GB file from Windows with 1.46 to B2. Not only is it definitely doing "something" for a long time, it isn't writing anything to the specified log file while it's doing it (10 minutes and counting so far).