Wasabi: multipart uploads

Hi,

I am new to using rclone and am testing uploads to Wasabi with the intention of copying a few very large files: 8 files in total, the smallest 10GB and the largest 1.7TB. I have a 100Mb symmetric internet circuit in the UK.

My test file was 18GB, which took just under 18hr at an average speed of 333KB/s, nowhere near maxing out my bandwidth.

The command I used was

rclone --verbose --transfers 1 "source" "destination"

I appreciate --transfers 1 will not be as efficient as --transfers 4 (or more), but if I want files to upload sequentially, how can I speed this up? I understand S3-compatible storage can use --s3-upload-concurrency, as mentioned here: https://rclone.org/s3/

Would I therefore use it like this?

rclone --verbose --transfers 1 --s3-upload-concurrency 4 "source" "destination"

Would I also benefit from increasing the chunk size? What would you recommend? rclone is on a system running Windows 7 with 8GB of RAM; current idle usage is 2.5GB, so I have around 5.5GB free. I can add more memory if it will make a significant difference.

Thanks in advance.

OK, I have answered my own question with a bit more (unscientific and uncontrolled) testing.

My sample was 5 test files of exactly 1,000,000 KB each, or 4.78GB total (according to rclone).

rclone --transfers 1 = 4hr 35m, ~333 KB/s
rclone --transfers 1 --s3-upload-concurrency 4 = 2hr 16m, ~700 KB/s
rclone --transfers 1 --s3-upload-concurrency 4 --s3-chunk-size 16M = 2hr 21m, ~680 KB/s
rclone --transfers 1 --s3-upload-concurrency 16 --s3-chunk-size 16M = 0hr 37m, ~2450 KB/s
rclone --transfers 4 --s3-upload-concurrency 16 --s3-chunk-size 16M = 0hr 15m, ~6000 KB/s

So multiple transfers combined with multiple concurrent uploads work better for large files to Wasabi. Chunk size didn't make much of a difference.
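For reference, the fastest combination above written out in full would be something like the following (copy and the quoted paths are just placeholders for my actual source and destination):

rclone copy --verbose --transfers 4 --s3-upload-concurrency 16 --s3-chunk-size 16M "source" "destination"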

Interestingly, my last test completed the first 4 of 5 files at 100% in 5 minutes, but rclone then took a further 2-3 minutes confirming the upload before moving on to file 5, which, as the last file, took another 4 minutes to reach 100% and another 3 minutes to confirm before rclone returned to the command prompt.

Yes that is correct.

I think increasing the chunk size will help a lot. The chunks are buffered in memory, so the total memory used will be something like (--transfers) x (--s3-upload-concurrency) x (--s3-chunk-size). I'd probably try --s3-chunk-size 128M, which will use about 0.5GB of memory with --transfers 1 and --s3-upload-concurrency 4.
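For example, with those settings the command would look something like this (copy and the quoted paths are placeholders; 1 x 4 x 128M is roughly 512M of buffer memory):

rclone copy --verbose --transfers 1 --s3-upload-concurrency 4 --s3-chunk-size 128M "source" "destination"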

You could also try not using chunked uploads, which will just transfer each file as a single upload as fast as possible; to do that, set --s3-chunk-size 1000T. This doesn't buffer anything in memory. If you set --transfers higher this might give better throughput.
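For instance, combining that suggestion with more transfers might look something like this (again, copy and the quoted paths are placeholders):

rclone copy --verbose --transfers 4 --s3-chunk-size 1000T "source" "destination"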