S3 concurrency --multi-thread-streams vs --s3-upload-concurrency

What is the problem you are having with rclone?

I'm confused about the s3 concurrency switches. With B2 I had to use --multi-thread-streams for fast uploading of big files. With s3 however I'm reading that I should be using --s3-upload-concurrency. What is right?

Also, what does it mean that

Multipart uploads will use --transfers * --s3-upload-concurrency * --s3-chunk-size extra memory. Single part uploads do not use extra memory.

If I want to upload/download only big files, should I set transfers=1?

  --transfers=2 \
  --s3-upload-concurrency=16 \
  --s3-chunk-size=32M \
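To put a number on the memory formula quoted above, a quick sanity check in shell with those exact flag values:

```shell
# Extra memory for multipart uploads, per the docs:
# --transfers * --s3-upload-concurrency * --s3-chunk-size
transfers=2
concurrency=16
chunk_mib=32
echo "$(( transfers * concurrency * chunk_mib )) MiB"  # prints: 1024 MiB
```

So this combination can use up to about 1 GiB of extra memory for in-flight chunks.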

Also, does --s3-upload-concurrency work for download as well? Or is that the case when --multi-thread-streams is used? It looks like that from testing but I couldn't find anything in the docs.

Also, does --s3-chunk-size have any effect when downloading, or are the chunk sizes always exactly the same as they were when uploading?

Which cloud storage system are you using? (eg Google Drive)

s3 / Cloudflare

hi,

tl;dr, best to start off using rclone defaults and no extra flags, to establish baseline performance.
if that does not saturate your internet connection, then you can start to tweak flags and values.

no, --s3-upload-concurrency is only for uploads; the docs describe it as "Concurrency for multipart uploads".

i use the default, no extra flags; with s3 remotes, aws and wasabi.
can easily saturate my 1Gbps internet connection, uploads and downloads.

for downloads to local, --multi-thread-streams is what applies; see the docs sections "When using multi thread downloads" and "When downloading files".

I've experimented a lot and chose this in the end.

Upload (here a bigger chunk size can potentially save money, since fewer requests means lower cost).

--transfers=2 \
--s3-upload-concurrency=16 \
--s3-chunk-size=32M 
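A full upload command with those flags might look like this; the remote name and paths here are placeholders, so adjust them to your own setup:

```shell
# hypothetical remote "r2" (an S3-compatible remote) and bucket/path
rclone copy /data/bigfiles r2:my-bucket/backup \
  --transfers=2 \
  --s3-upload-concurrency=16 \
  --s3-chunk-size=32M \
  --progress
```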

Download

--transfers=2 \
--multi-thread-streams=16
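And the matching download command (again, remote name and paths are placeholders):

```shell
# hypothetical remote "r2"; multi-thread streams only help for
# downloads to local disk
rclone copy r2:my-bucket/backup /data/restore \
  --transfers=2 \
  --multi-thread-streams=16 \
  --progress
```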

What's confusing for me is that for B2 I had to use totally different settings for uploading:

--transfers=8
--multi-thread-streams=8