I am new to using rclone and am testing uploads to Wasabi with the intention of copying a few very large files: 8 files total, the smallest 10GB and the largest 1.7TB. I have a 100Mb symmetric internet circuit in the UK.
My test file was 18GB, which took just under 18 hours at an average speed of 333KB/s, nowhere near maxing out my bandwidth.
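As a rough sanity check on those numbers (assuming decimal units, 18GB = 18,000,000,000 bytes, and a flat 18 hours):

```shell
bytes=18000000000
secs=$((18 * 3600))
# achieved throughput in bytes/s
echo "achieved: $((bytes / secs)) B/s"        # ~277777 B/s, i.e. roughly 278 KB/s
# a 100Mb/s line is 12.5 million bytes/s
echo "capacity: $((100000000 / 8)) B/s"
```

So the transfer was running at only around 2% of what the line could carry.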
I appreciate --transfers 1 will not be as efficient as --transfers 4 (or more), but if I want files to upload sequentially, how can I speed this up? I understand S3-compatible storage can use --s3-upload-concurrency, as mentioned here: https://rclone.org/s3/
Would I also benefit from increasing the chunk size? What would you recommend? rclone is on a system running Windows 7 with 8GB RAM; current idle usage is 2.5GB, so I have around 5.5GB free. I can add more memory if it is going to make a significant difference.
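For context, the sort of invocation being discussed looks roughly like this (the remote name, bucket, and source path are placeholders, and the flag values are only illustrative):

```shell
rclone copy D:\bigfiles wasabi:my-bucket --transfers 1 --s3-upload-concurrency 4 --progress
```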
So, multiple transfers with multiple uploads works better for large files to Wasabi. Chunk size didn’t make much of a difference.
Interestingly, my last test completed the first 4 of 5 files at 100% in 5 minutes, but rclone then took a further 2-3 minutes confirming the uploads before moving on to file 5, which, as the last file, took another 4 minutes to reach 100% and another 3 minutes to confirm before rclone returned to the command prompt.
I think increasing the chunk size will help a lot. The chunks are buffered in memory, so the total memory used will be something like (--transfers)x(--s3-upload-concurrency)x(--s3-chunk-size). I’d probably try --s3-chunk-size 128M which will use about 0.5G of memory with --transfers 1 and --s3-upload-concurrency 4.
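Plugging numbers into that formula (values as suggested above; the second line is just an illustration of scaling up):

```shell
# memory used ~= transfers * upload-concurrency * chunk-size
echo "$((1 * 4 * 128)) MB"   # --transfers 1, concurrency 4, 128M chunks -> 512 MB, about 0.5G
echo "$((4 * 4 * 128)) MB"   # same settings with --transfers 4 -> 2048 MB
```

Either figure fits comfortably within the ~5.5GB free mentioned earlier.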
You could also try not using chunked upload, which will just transfer each file as a single upload as fast as possible: set --s3-chunk-size 1000T. This doesn't buffer anything in memory. If you set --transfers higher, this might run at better throughput.
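That variant would look something like this (remote name, bucket, and source path are placeholders):

```shell
rclone copy D:\bigfiles wasabi:my-bucket --transfers 4 --s3-chunk-size 1000T --progress
```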