[Question] Limit of concurrent jobs in rcd mode

What is the problem you are having with rclone?

Is there an internal queue or a maximum-parallel-jobs limit in rclone when it is running in rcd mode?

I've found the --transfers option, but it doesn't seem to limit jobs; rather, it limits the number of files transferred in parallel when copying or syncing directories.

By queue I mean that, for example, I start 5 jobs in async mode and 4 of them start immediately, while the 5th is paused/stalled until one of the others finishes.

Or will rclone generally run all submitted jobs as goroutines, as fast as possible?
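As a client-side workaround (until rclone gains such an option), the queueing behaviour described above can be imposed by the caller. Below is a minimal Python sketch, assuming rclone itself starts every `_async` job immediately: a pool of 4 workers submits jobs, so a 5th submission waits for a free slot. `fake_copyfile` is a hypothetical stand-in for posting `operations/copyfile` to the rcd endpoint.

```python
# Client-side sketch: cap concurrent RC jobs at 4 so a 5th submission
# queues until a slot frees up. fake_copyfile is a placeholder for an
# HTTP POST of operations/copyfile to a running rclone rcd.
from concurrent.futures import ThreadPoolExecutor
import threading
import time

MAX_JOBS = 4
running = 0   # jobs currently "in flight"
peak = 0      # highest concurrency observed
lock = threading.Lock()

def fake_copyfile(job_id):
    global running, peak
    with lock:
        running += 1
        peak = max(peak, running)
    time.sleep(0.1)  # simulate the transfer
    with lock:
        running -= 1
    return job_id

with ThreadPoolExecutor(max_workers=MAX_JOBS) as pool:
    futures = [pool.submit(fake_copyfile, i) for i in range(5)]
    results = [f.result() for f in futures]

print(sorted(results))        # all 5 jobs complete
print(peak <= MAX_JOBS)       # never more than 4 ran at once
```

The pool's bounded worker count is what produces the "4 start, 1 waits" behaviour; the real HTTP call would replace the `time.sleep`.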

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.1

  • os/version: ubuntu 18.04 (64 bit)
  • os/kernel: 5.4.0-62-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.18.5
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Amazon S3 for now, but GCP and Azure storages for later

The command you were trying to run (eg rclone copy /tmp remote:tmp)

RC command operations/copyfile

{
    "srcFs": "test_s3_src:",
    "srcRemote": "bucket/a.bin",
    "dstFs": "test_s3_dst:",
    "dstRemote": "bucket/a.bin",
    "_async": true
}

The rclone config contents with secrets removed.

A log from the command with the -vv flag

You are right: --transfers applies per transfer, not as a global limit across jobs.

This came up recently in Restrict Transfers per Remote - #5 by bend

@nikitav this wouldn't be too tricky to implement and I'm happy to talk people through it. First thing is it needs an issue, second someone to do it - might you want to do it?
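One common way to implement such a limit is a counting semaphore that each async job must acquire before it starts and release when it finishes. This is only an illustrative sketch in Python (rclone itself is Go, and the option name and helper below are invented for illustration, not part of rclone):

```python
# Sketch of a hypothetical max-concurrent-jobs option: each job
# acquires a slot before running, blocking while all slots are busy.
import threading
import time

MAX_CONCURRENT_JOBS = 2          # hypothetical option value
slots = threading.BoundedSemaphore(MAX_CONCURRENT_JOBS)
active = 0
peak = 0
state_lock = threading.Lock()

def run_job(job_id, done):
    """Invented helper standing in for starting one async RC job."""
    global active, peak
    with slots:                  # blocks when MAX_CONCURRENT_JOBS are running
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)         # the job body (e.g. a copyfile)
        with state_lock:
            active -= 1
    done.append(job_id)

done = []
threads = [threading.Thread(target=run_job, args=(i, done)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(done), peak)           # all jobs finish; peak never exceeds the limit
```

In Go the same gate would typically be a buffered channel or `semaphore.Weighted` acquired at job start.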

Hey @ncw, yea, I'll open it.

Done: Add an option for limiting concurrent jobs in rcd mode · Issue #6379 · rclone/rclone · GitHub

