Difference between rclone sync for s3 and aws s3 sync

S3 upload performance is always a trade-off between speed and resource usage (mainly RAM). I suspect aws s3 sync simply has more aggressive defaults than rclone.

You can control upload performance using the --s3-upload-concurrency and --s3-chunk-size flags.
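The same settings can also be supplied as environment variables, using rclone's standard convention of prepending RCLONE_ and upper-casing the flag name; a minimal sketch (remote: and 10gb.zip as in the examples below):

export RCLONE_S3_UPLOAD_CONCURRENCY=8
export RCLONE_S3_CHUNK_SIZE=16M
rclone copy --progress 10gb.zip remote:bucket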

As per the docs:

Increasing --s3-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --s3-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
Multipart uploads will use --transfers * --s3-upload-concurrency * --s3-chunk-size extra memory.
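For example, applying that formula to the first command below (a single file, so effectively one transfer):

# RAM ≈ transfers * upload-concurrency * chunk-size
# 1 * 40 * 64M = 2560M ≈ 2.5GB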

For a single 10 GB file you can try the "high performance" commands below and you will see throughput increase with each one:

2.5GB of RAM used to upload:
rclone copy --progress --s3-upload-concurrency 40 --s3-chunk-size 64M 10gb.zip remote:bucket

5GB of RAM used to upload:
rclone copy --progress --s3-upload-concurrency 80 --s3-chunk-size 64M 10gb.zip remote:bucket

10GB of RAM used to upload:
rclone copy --progress --s3-upload-concurrency 160 --s3-chunk-size 64M 10gb.zip remote:bucket
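Note that the last command effectively buffers the whole file in memory:

# total chunks = 10GB / 64M = 160
# so --s3-upload-concurrency 160 lets every chunk be in flight (and in RAM) at once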

If you used the above settings for a sync with --transfers 10, it would use 10x more RAM.
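As a hypothetical illustration (src_dir is a placeholder directory), the first command above as a directory sync would look like this:

rclone sync --progress --transfers 10 --s3-upload-concurrency 40 --s3-chunk-size 64M src_dir remote:bucket
# RAM ≈ 10 * 40 * 64M = 25600M ≈ 25GB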

Rclone's default chunk-size is 5MB and upload-concurrency is 4, so it uses 20MB of RAM per transfer. The defaults have to take into account that people sometimes run it on very limited systems like a Raspberry Pi, but you can easily change these settings using the flags mentioned above, e.g. for your test:

rclone sync --transfers=10 src s3:dst_bucket --s3-upload-concurrency 8 --s3-chunk-size 16M

it will use about 1.25GB of RAM during the transfer (10 * 8 * 16M = 1280M).
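If you want to verify the actual memory use, one option is a sketch like this, assuming you are happy to enable rclone's remote control API with --rc:

rclone sync --transfers=10 --s3-upload-concurrency 8 --s3-chunk-size 16M --rc src s3:dst_bucket
# while it runs, from another shell:
rclone rc core/memstats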
