Question about VERY large files (250 gb)

#1

I had the need to rclone some very large files over the last few days (250 GB or so) to both B2 and S3 - it worked fine, I just had a probably obvious question:

I assume that rclone is doing some kind of chunking/checksum calculation? On a rather beefy server it takes 45-60 minutes before the transfers begin.

not sure if any of the chunk/buffer settings would be helpful in speeding up a transfer of this type?

thanks


#2

For S3 it is calculating the MD5 sum of the file before upload.

You can disable this with the S3 flag:

  --s3-disable-checksum                Don't store MD5 checksum with object metadata

If you do this it will mean there is no record of the original hash stored on S3, so rclone won't be able to check the integrity of the file if you download it.
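For reference, the flag just goes on the normal copy command line. A minimal sketch (the remote name, bucket, and file path here are placeholders, not from the thread):

```shell
# Hypothetical example: skip storing the MD5 checksum in object metadata,
# which avoids the full read-through hashing pass before the upload starts.
# "s3remote", "mybucket" and the file path are placeholders.
rclone copy /data/bigfile.dat s3remote:mybucket/path --s3-disable-checksum -v
```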

There shouldn’t be a pause for b2 though.


#3

hi, just to revisit this: testing with a 140 GB file from Windows with 1.46 to B2, not only is it definitely doing "something" for a long time, it isn't writing anything to the specified log file while it's doing it (10 minutes and counting so far)

C:\rclone>rclone copy M:\file.dat b2:/b2bucket/ --transfers 30 --size-only --ignore-checksum --checkers 30 -vvv --log-file b3.txt


#4

It will be calculating the SHA-1 of the file before uploading.

You can disable this with --b2-disable-checksum if you don't mind big files not having a SHA-1 checksum stored.
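The delay itself is easy to account for: computing a hash means reading the entire file once before any bytes are uploaded, and for a 140-250 GB file that single pass can take tens of minutes at typical disk speeds. A rough sketch of that step (illustrative only - rclone's internal implementation differs):

```python
import hashlib

def sha1_of_file(path, chunk_size=1024 * 1024):
    """Read the file in 1 MiB chunks and return its SHA-1 hex digest.

    The whole file must be streamed through the hash before the upload
    can begin, which is where the long pre-transfer pause comes from.
    """
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

At, say, 100 MB/s sequential read, a 250 GB file takes over 40 minutes just to hash, which lines up with the 45-60 minute pause reported above.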

I was wrong when I said there shouldn't be a pause for B2!
