How to maximize single file multipart upload speed to Cloudflare R2 (S3 compatible)

I did some brief testing to find the settings that give optimal upload speeds with rclone, and the following worked quite well, getting me over 1600 MB/s (megabytes, not megabits) with peaks over 2 GB/s.

rclone copy -P --transfers 300 --s3-upload-concurrency 300 --s3-chunk-size 50M --no-check-dest --ignore-checksum --s3-disable-checksum ./testfile r2:mybucket
Transferred: 30 GiB / 30 GiB, 100%, 1.648 GiB/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 18.3s
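To get a feel for what those numbers imply: with a 50M chunk size, a 30 GiB file splits into roughly 615 parts, and a concurrency of 300 means up to 300 × 50 MiB of chunk buffers in flight at once (assuming, as the S3 backend docs suggest, that rclone buffers each in-flight chunk in memory). A quick back-of-envelope check, with the sizes taken from the command above:

```shell
# multipart math for the command above: 30 GiB file, 50M chunks, concurrency 300
FILE_BYTES=$((30 * 1024 * 1024 * 1024))
CHUNK_BYTES=$((50 * 1024 * 1024))            # --s3-chunk-size 50M
PARTS=$(( (FILE_BYTES + CHUNK_BYTES - 1) / CHUNK_BYTES ))   # round up
echo "parts: $PARTS"
IN_FLIGHT_MIB=$((300 * 50))                  # --s3-upload-concurrency x chunk size
echo "max in-flight buffers: ${IN_FLIGHT_MIB} MiB"
```

So expect on the order of 15 GiB of RAM headroom for the buffers alone; on a smaller machine you would want to trade down the concurrency or the chunk size.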

Aside from increasing --s3-upload-concurrency, --transfers, and the chunk size, the critical setting is --s3-disable-checksum: the checksum calculation seems to be single-threaded, which significantly limits throughput.

I didn't do enough testing to verify whether these parameter values are truly optimal, but I hope this helps regardless. If you find better settings, feel free to share them! Also, the --ignore-checksum parameter might not affect performance that much. (edit: it seems the --ignore-checksum flag does nothing for multipart uploads anyway)

Testing was done on a GCP n2d-standard-48 with tier 1 networking enabled, which gives you 25Gbit egress capabilities.
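As a sanity check on that headroom: the reported 1.648 GiB/s converts to roughly 14 Gbit/s, so the 25 Gbit NIC still had room and the bottleneck was likely elsewhere. A rough unit conversion, nothing more:

```shell
# convert the reported 1.648 GiB/s into Gbit/s to compare against the 25 Gbit cap
awk 'BEGIN { gibps = 1.648; gbitps = gibps * 1024 * 1024 * 1024 * 8 / 1e9; printf "%.1f Gbit/s\n", gbitps }'
```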

hello and welcome to the forum,

thanks for the guide,

without that, rclone cannot know if the upload was corrupted in transit.

--s3-disable-checksum
ditto

Correct me if I'm wrong, but the documentation seems to imply that --s3-disable-checksum doesn't actually affect any checks on in-transit data: s3-disable-checksum docs

If I'm correctly interpreting the documentation, all that does is calculate the checksum of the file locally, and store it in the metadata.

In other words, I think that the following scenario is possible regardless of setting the --s3-disable-checksum option:

rclone calculates the checksum -> uploads the file -> corruption happens in upload -> corrupt upload completes successfully with the checksum stored in the custom metadata. So there's no actual integrity check being disabled.

That option does matter, though, if you're planning to do an integrity check the next time you download the file, at which point you can indeed compare against the checksum stored in the metadata. So if that's important to you, you can leave the --s3-disable-checksum flag out. Without that flag I still got about 700 MB/s with these settings, which is still respectable.

As for your comment about the --ignore-checksum flag potentially affecting in-transit corruption checks: I looked into it, and it seems that flag has no effect for multipart uploads, because rclone already ignores the etag whenever it doesn't look like an md5sum, which is the case for multipart uploads to most S3-compatible providers. I think that means the post-upload checksum verification is skipped regardless.

Source for the etag being ignored when it doesn't look like an md5 hash: How to set checksum cutoff for custom s3-compatible storage? - #4 by ncw
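To illustrate the "doesn't look like an md5sum" part: a single-part ETag is a plain 32-hex-digit md5, while a multipart ETag on most S3-compatible stores gets a "-&lt;part count&gt;" suffix. A sketch of that kind of pattern check (the example ETag value is made up, and rclone's actual matching logic may differ):

```shell
# multipart ETags carry a "-<part count>" suffix, so they fail a plain-md5 pattern test
ETAG="d41d8cd98f00b204e9800998ecf8427e-615"   # hypothetical multipart etag
if printf '%s' "$ETAG" | grep -qE '^[0-9a-f]{32}$'; then
  echo "plain md5: can be verified against the local hash"
else
  echo "not a plain md5: etag verification is skipped"
fi
```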

tl;dr, in the end it is a choice based on use-case, what level of paranoia you want.
if i disable a safety feature, and the blank hits the fan, i get fired.

i see that you clearly understand the issue and potential outcomes.

yes, i agree with that.
