Simplify concurrency settings

Is there ever a good reason to consider chunk concurrency separately from file concurrency?

I would propose abandoning the separation between --transfers and the --multi-thread-streams / --[backend]-upload-concurrency settings.

Instead, this could be simplified into 3 concurrency settings:

  • --upload-concurrency 4 - max concurrent streams for upload
  • --download-concurrency 10 - max concurrent streams for download
  • --mixed-concurrency 2,8 - when uploads and downloads are happening at the same time: 2 streams for upload, 8 streams for download

Whenever rclone has a free upload or download stream, it fills it with the next available chunk, whether that chunk belongs to the current file or the next file in the queue.
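A minimal Go sketch of what that shared pool could look like, assuming a simple semaphore built from a buffered channel (the names streamPool and uploadChunk are made up for illustration, not rclone internals):

```go
package main

import (
	"fmt"
	"sync"
)

// streamPool is a hypothetical shared pool: one token per allowed
// concurrent stream (the proposed --upload-concurrency value).
type streamPool chan struct{}

func newStreamPool(n int) streamPool { return make(chan struct{}, n) }

func (p streamPool) acquire() { p <- struct{}{} } // blocks when all streams are busy
func (p streamPool) release() { <-p }

// uploadChunk stands in for sending one chunk of one file.
func uploadChunk(file string, chunk int) {
	fmt.Printf("uploading %s chunk %d\n", file, chunk)
}

func main() {
	pool := newStreamPool(4) // --upload-concurrency 4
	var wg sync.WaitGroup

	// Chunks from different files all draw from the same pool, so a
	// free stream takes whichever chunk is next, regardless of file.
	files := map[string]int{"a.bin": 3, "b.bin": 2}
	for name, chunks := range files {
		for c := 0; c < chunks; c++ {
			wg.Add(1)
			go func(name string, c int) {
				defer wg.Done()
				pool.acquire()
				defer pool.release()
				uploadChunk(name, c)
			}(name, c)
		}
	}
	wg.Wait()
}
```

The point of the design is that there is only one gate (the pool), so there is no separate decision about "more files vs. more chunks" to make.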

Chunk settings will still govern how to split large files, so that wouldn't change.

What do you think? Would this simplify code maintenance, user understanding, and RAM usage math?
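On the RAM usage math: if each in-flight chunk needs at most one buffer, the worst-case buffer memory under this proposal reduces to a single multiplication per direction. A quick sketch with made-up example values (the flag values are just the examples from this post, not defaults):

```go
package main

import "fmt"

func main() {
	// Hypothetical settings, matching the examples above.
	const chunkSizeMiB = 8     // e.g. a backend chunk size of 8M
	const uploadStreams = 4    // --upload-concurrency 4
	const downloadStreams = 10 // --download-concurrency 10

	// Worst case: every stream holds one full chunk buffer at once,
	// so the bound is simply streams × chunk size.
	fmt.Printf("upload RAM bound:   %d MiB\n", uploadStreams*chunkSizeMiB)   // 32 MiB
	fmt.Printf("download RAM bound: %d MiB\n", downloadStreams*chunkSizeMiB) // 80 MiB
}
```

Compare that with today, where the bound involves the product of --transfers and the per-file stream settings.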

The b2 backend (which was the first backend to have chunked uploading) worked exactly like this.

It used --transfers to control the number of uploading streams, and each chunk being uploaded took one of those streams.

We could do something like this, I think.

  • No
  • Maybe
  • Yes


Regarding code maintenance, I was hoping a few conditions would go away: no more checking whether we can upload more files vs. more chunks in parallel, replaced by one streamlined process. Maybe there's complexity elsewhere, but I was hoping this would get simpler.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.