Optimal --transfers parameter for BackBlaze

What is the problem you are having with rclone?

I want to optimize (maximize) my upload and download speed with Backblaze B2 cloud storage.

I read in the Backblaze guide that the default --transfers value of 4 is most likely not enough for most systems, and that it should be at least 32 or even higher.

So I would like to find a method for optimizing this value.

I have 1 Gbps fiber internet, and latency to my ISP is really low (20 ms).
I also have 32 GB of RAM, which I guess also plays a role.

I tried with --transfers 1000 and my computer slowed to a halt, so I guess that is too much. I lean towards the cause being not enough RAM, forcing swapping to my SSD (which is significantly slower than RAM), rather than clogging the 1 Gbps network card.
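Running the numbers supports that guess. A rough sketch, assuming each concurrent large-file transfer buffers about one chunk at the default --b2-chunk-size of 96 MiB (the real footprint also depends on --b2-upload-concurrency):

```python
# Back-of-the-envelope RAM estimate for concurrent B2 uploads.
# Assumption: each transfer of a large file buffers roughly one chunk.
chunk_size_mib = 96      # rclone default for --b2-chunk-size
transfers = 1000

buffered_gib = transfers * chunk_size_mib / 1024
print(f"~{buffered_gib:.0f} GiB buffered")  # prints "~94 GiB buffered"
```

With only 32 GB of RAM, roughly 94 GiB of buffering would indeed push the machine hard into swap.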

Run the command 'rclone version' and share the full output of the command.

rclone v1.60.0
- os/version: Microsoft Windows 11 Pro 22H2 (64 bit)
- os/kernel: 10.0.22621.674 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.19.2
- go/linking: static
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

BackBlaze B2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Very basic command for copying around 150 GB of data to the cloud:

.\rclone.exe copy D:\Data BackBlazeB2:DATA --transfers 1000 -vv

The rclone config contents with secrets removed.

type = b2
account = edited
key = edited

hello and welcome to the forum,

find a set of flag values that saturates the 1 Gbps connection.
--- run a bunch of rclone copy tests, tweaking --transfers, --checkers, and --b2-chunk-size

can check windows task manager to confirm.
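one way to script that sweep, as a rough sketch (the destination folder, source path, and parameter grid are just placeholders to adapt; it needs rclone on PATH and a configured remote):

```python
import subprocess
import time

def build_cmd(transfers, checkers, chunk_mib,
              src=r"D:\Data", dst="BackBlazeB2:DATA-test"):
    # Assemble one rclone copy invocation for a parameter combination.
    return ["rclone", "copy", src, dst,
            f"--transfers={transfers}",
            f"--checkers={checkers}",
            f"--b2-chunk-size={chunk_mib}M"]

def sweep():
    # Time each combination against the same sample data set; the
    # fastest wall-clock run wins. Point dst at a throwaway folder so
    # every run really uploads instead of skipping unchanged files.
    for transfers in (8, 16, 32, 64):
        cmd = build_cmd(transfers, checkers=8, chunk_mib=96)
        start = time.monotonic()
        subprocess.run(cmd, check=True)   # requires rclone installed
        print(transfers, f"{time.monotonic() - start:.1f}s")

# Example of the command generated for --transfers 32:
print(" ".join(build_cmd(32, 8, 96)))
```

run it on a fixed sample (say 5-10 GB, not the full 150 GB) and watch task manager during each run.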

Hello and thank you for replying!

That sounds like a good method, but tuning a three-variable equation by hand means gathering several minutes of data on every iteration just to compare runs.

Is it possible to somehow automate the whole process?

I would even be okay with a general rule of thumb that gets me close to the optimal parameters, even if it leaves some performance on the table.

ok, just focus on --transfers

The --checkers parameter defaults to 8.
Am I right in thinking that it should be equal to or slightly less than the number of logical cores/threads of my CPU?

Regarding --b2-chunk-size, if I have a 1 Gbps connection, would it be OK to increase it from 96 MiB to 1192 MiB (1 Gbps -> 119.2 MiB/s * 10 seconds)?

that does not seem correct,
as --b2-chunk-size is about memory use, not about the speed of your internet connection.

as per the rclone docs
"The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc"

fwiw, you will have to do some basic testing; try --transfers=32 and establish a baseline.

I just tried --transfers 3 and my poor HDD is already at 100%, so I doubt increasing it will help, at least for the copy command. I guess more transfers means more I/Os per second, and the HDD will spend more and more time seeking.

Maybe for a mount I will do some tests and report back.

The Backblaze rclone guide says that for 100 Mbps the --transfers parameter should be set between 50 and 100.

Any info about the --checkers flag being equal to or less than my logical CPU cores/threads?

i thought you were using an SSD, not an HDD

with some storage providers,
for each file to be uploaded,
first, rclone calculates the checksum of the local source file
and after that, rclone uploads that file

imho, i would be most surprised if rclone dedicated a cpu core to each --checker.
i can set --checkers=100 but i do not have 100 cores or 100 threads

a check on a file is simply reading metadata such as modtime and/or file size.
often rclone will do that for an entire dir at a time on local.
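roughly the equivalent in python, just to illustrate how cheap a check is (not rclone's actual code):

```python
import os

def quick_check(path):
    # A "check" reads metadata only (size and modtime),
    # never the file contents, so it is cheap and I/O-light.
    st = os.stat(path)
    return st.st_size, st.st_mtime
```

comparing size and modtime against the remote listing is enough to decide whether a file needs copying at all.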

The 150 GB are stored on my HDD, but then once uploaded I will access them by mounting the B2 on an SSD.

Shouldn't the OS spread the work across all the cores, instead of rclone having to assign a task to a specific core?

yes; as for the exact details, i cannot tell you,
but your question was about --checkers and cpu cores/threads.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.