Slow upload speed for 10 Gbps connection

What is the problem you are having with rclone?
I'm trying to upload 2 GB files to my Cloudflare R2 bucket, but the upload speed won't go above 20 MB/s. I'm using a server with a dedicated (non-shared) 10 Gbps network connection, yet I can't upload faster than 20 MB/s.

Rclone version:
rclone v1.62.0-beta.6672.98fa93f6d

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.4.0-90-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.19.4
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using?
Cloudflare R2

Command being used:
rclone copy database.sql r2:mvs/ -P

hello and welcome to the forum,

with S3 providers, before rclone uploads a file, rclone first calculates the MD5 of the source file.
rclone saves that MD5 as metadata.

only after that MD5 is calculated does rclone start to upload the file,
and that extra time affects how rclone calculates the overall speed.
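one way to see that stored hash after an upload (just a sketch, assuming the hash metadata is kept on the object) is:

rclone md5sum r2:mvs/database.sql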

for testing, might try --ignore-checksum
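for reference, that is just your original command with the flag added, something like:

rclone copy database.sql r2:mvs/ -P --ignore-checksum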

thanks for the welcome,
well, testing with --ignore-checksum the speed increased from 21 MB/s to 30 MB/s, but I think it's still not as fast as I expected.

try tweaking --transfers, --checkers
and settings for multipart uploads

might add --fast-list
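for example, something along these lines, where the numbers are only illustrative starting points to tune from:

rclone copy database.sql r2:mvs/ -P --ignore-checksum --fast-list --transfers 8 --checkers 16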

Try setting --s3-upload-cutoff to bigger than the size of the file to disable multipart upload.

If that doesn't work, experiment with the chunk size and concurrency for the multipart upload.
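For a 2 GB file that could look something like this (3G is just an example cutoff larger than the file):

rclone copy database.sql r2:mvs/ -P --s3-upload-cutoff 3G

Or, keeping multipart uploads but tuning them (16M and 8 are only illustrative values):

rclone copy database.sql r2:mvs/ -P --s3-chunk-size 16M --s3-upload-concurrency 8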

I tried using --fast-list but it didn't improve things at all, and after researching a bit about S3 I decided to use --s3-upload-concurrency 32,
which increased my upload from 30 MB/s up to 70 MB/s.

but after reading the other answer from @ncw I decided to look into the S3-specific flags, and I found the following:

(Increasing --s3-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --s3-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.)

then my command became this:
--ignore-checksum --fast-list --s3-upload-concurrency 32 --s3-chunk-size 128000
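(presumably combined with the original copy command, i.e. something like:

rclone copy database.sql r2:mvs/ -P --ignore-checksum --fast-list --s3-upload-concurrency 32 --s3-chunk-size 128000

with the chunk size given in KiB, as noted in the reply below)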

and now I've increased my speed from 70 MB/s up to 110 MB/s (the maximum I could see for a 2 GB file before the upload completed),
and for a 10 GB file, for example, the speed reached 250 MB/s.

I'm satisfied with these speeds now :slight_smile:
thank you so much @ncw and @asdffdsa

Great - glad that helped.

You can use size suffixes for these (if you don't, then they are in k), so --s3-chunk-size 128M is slightly easier to read (if not precisely the same value!).
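So the command above could be written, for example, as:

rclone copy database.sql r2:mvs/ -P --ignore-checksum --fast-list --s3-upload-concurrency 32 --s3-chunk-size 128M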

Note that there was a problem with concurrent uploads and R2 (it didn't allow very many of them), but it appears that is fixed now if you are getting good performance with concurrency 32.
