Question about VERY large files (250 gb)


I needed to rclone some very large files (250 GB or so) over the last few days to both b2 and s3 - it worked fine, I just had a probably obvious question:

I assume that rclone is doing some kind of chunking/checksum calculation? On a rather beefy server it takes 45-60 minutes before the transfers begin.

Not sure if any of the chunk/buffer settings would be helpful in speeding up a transfer of this type?



For s3, rclone is calculating the md5sum of the whole file before upload, which is what causes the delay.

You can disable this with the s3 flag:

  --s3-disable-checksum                Don't store MD5 checksum with object metadata

If you do this it will mean there is no record of the original hash stored on s3 so rclone won’t be able to check the integrity of the file if you download it.
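For example, something like this should skip the pre-upload MD5 pass (the remote name and paths here are placeholders, not from your setup):

```shell
# Copy a large file to S3 without computing/storing the MD5 checksum.
# "s3remote" and the bucket/path are illustrative names.
rclone copy /data/bigfile.img s3remote:mybucket/backups --s3-disable-checksum
```

Note the trade-off above: with the checksum disabled you lose the end-to-end integrity check, so it's really only worth it when the hashing time dominates the transfer.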

There shouldn't be a pause like that for b2 though.