Rclone file evaluation order


What is the problem you are having with rclone?

Parallelism efficiency

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.2

  • os/version: debian rodete (64 bit)

  • os/kernel: 6.1.20-2rodete1-amd64 (x86_64)

  • os/type: linux

  • os/arch: amd64

  • go/version: go1.18.6

  • go/linking: static

  • go/tags: none

Are you on the latest version of rclone? You can validate by checking the version listed here: Rclone downloads

Which cloud storage system are you using? (eg Google Drive)

S3, GCS, File

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy s3://bucket1 gcs://bucket1


My main concern is efficiency if I use multiple rclone instances. I know that within an instance parallelism can be configured with the --checkers and --transfers flags. But if I run the command on two different machines with the same source and destination, would the evaluation order cause the work to be doubled, or would it halve the time? I understand that before each file transfer there is a check on whether the file needs to be copied, but would the evaluation order be approximately the same if I set the transfer thread count to, say, 200?
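For reference, the per-instance parallelism flags mentioned above would be used roughly like this (a sketch assuming the remotes are configured in rclone as `s3:` and `gcs:`; the numbers are illustrative, not recommendations):

```shell
# --checkers controls how many file comparisons run concurrently,
# --transfers how many files are copied in parallel.
rclone copy s3:bucket1 gcs:bucket1 --checkers 16 --transfers 8
```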

I would say that even with optimal configuration you are looking at doubling the time needed.

There are always some bottlenecks, e.g. ISP upload speed. If that is, say, 100 MB/s, one rclone instance can use it all, but when you run two they will get roughly 50 MB/s each.

Also, I am not sure that setting the transfers count to 200 is a good idea - it looks way too high to me. What do you want to achieve by this?

A large transfers count for a large number of small files. Say 1 million 1 kB files.

It might help, but on many remotes you can hit rate limits - e.g. for Google Drive it is only ~10 API calls per second. If you exceed that massively, never-ending retries will slow everything down to a halt.

So the best way is to test with the -vv option first.
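A test run along those lines might look like this (assuming the same `s3:` and `gcs:` remote names as above; --dry-run avoids touching the destination):

```shell
# Verbose dry run: logs what rclone would check and copy
# without actually transferring anything.
rclone copy s3:bucket1 gcs:bucket1 -vv --dry-run
```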

With so many small files you will most likely see very slow performance. If your goal is backup, using a backup tool like restic might be a better choice, as files will be packed into much bigger chunks.
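A minimal restic sketch of that approach, assuming an S3 repository (the bucket name is illustrative and credentials are expected in the environment):

```shell
# Initialize a restic repository in an S3 bucket, then back up
# a directory; small files are packed into larger pack files,
# so there is no per-file API call on the remote.
restic -r s3:s3.amazonaws.com/bucket1 init
restic -r s3:s3.amazonaws.com/bucket1 backup /data
```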

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.