How to reach optimal speeds with rclone?

Hi Folks,

Hope everyone is doing well!

Thanks to everyone behind rclone for their great work, and thanks to everyone willing to help. I appreciate it; it means a lot to me!

What is the problem you are having with rclone?

I am using rclone to sync around 2 TB of data to a 5400 RPM mechanical HDD.
It's taking way too long.

I have exhausted all I can figure out on my own with the flags.

Please help me improve the sync and check speeds to get them as close as possible to the drive's maximum.

Run the command 'rclone version' and share the full output of the command.

$ rclone --version
rclone v1.67.0
- os/version: ubuntu 24.04 (64 bit)
- os/kernel: 6.8.0-40-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

  • SMB to locally mounted SSD.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Here is the rclone sync command I am running with the flags:

    rclone sync \
      "$src" \
      "$dest" \
      --dry-run \
      --transfers=8 \
      --checkers=8 \
      --log-level="DEBUG" \
      --log-file="$LOG_FILE" \
      --progress \
      --copy-links \
      --checksum \
      --bwlimit=192M \
      --retries=1 \
      --retries-sleep=1s \
      --fast-list \
      --use-mmap \
      --delete-during \
      --buffer-size="16M" \
      --cache-chunk-total-size="8G" \
      --cache-tmp-upload-path="/home/usertemp/rclone_cache_temp/tmp_upload" \
      --cache-chunk-path="/home/usertemp/rclone_cache_temp/chunks" \
      --cache-info-age="1h"

And here is the rclone check command I am running with the flags:

    rclone check \
      "$src" \
      "$dest" \
      --dry-run \
      --checkers=8 \
      --log-level="DEBUG" \
      --log-file="$LOG_FILE" \
      --progress \
      --checksum \
      --bwlimit=192M \
      --retries=1 \
      --retries-sleep=1s \
      --fast-list \
      --buffer-size="16M" \
      --one-way \
      --multi-thread-streams=4 \
      --multi-thread-chunk-size=64Mi \
      --multi-thread-write-buffer-size=64Mi \
      --download

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[localdest1]
type = local
encoding = Asterisk,BackQuote,BackSlash,Colon,CrLf,Ctl,Del,Dollar,Dot,DoubleQuote,Hash,InvalidUtf8,LeftCrLfHtVt,LeftPeriod,LeftSpace,LeftTilde,LtGt,Percent,Pipe,Question,RightCrLfHtVt,RightPeriod,RightSpace,Semicolon,SingleQuote,Slash,SquareBracket

[localdest2]
type = local
encoding = Asterisk,BackQuote,BackSlash,Colon,CrLf,Ctl,Del,Dollar,Dot,DoubleQuote,Hash,InvalidUtf8,LeftCrLfHtVt,LeftPeriod,LeftSpace,LeftTilde,LtGt,Percent,Pipe,Question,RightCrLfHtVt,RightPeriod,RightSpace,Semicolon,SingleQuote,Slash,SquareBracket

[localdest3]
type = local
encoding = Asterisk,BackQuote,BackSlash,Colon,CrLf,Ctl,Del,Dollar,Dot,DoubleQuote,Hash,InvalidUtf8,LeftCrLfHtVt,LeftPeriod,LeftSpace,LeftTilde,LtGt,Percent,Pipe,Question,RightCrLfHtVt,RightPeriod,RightSpace,Semicolon,SingleQuote,Slash,SquareBracket

[localdest4]
type = local
encoding = Asterisk,BackQuote,BackSlash,Colon,CrLf,Ctl,Del,Dollar,Dot,DoubleQuote,Hash,InvalidUtf8,LeftCrLfHtVt,LeftPeriod,LeftSpace,LeftTilde,LtGt,Percent,Pipe,Question,RightCrLfHtVt,RightPeriod,RightSpace,Semicolon,SingleQuote,Slash,SquareBracket

[localsrc1]
type = local

[localsrc2]
type = local

[localsrc3]
type = local

[localsrc4]
type = local

[tns-google-drive-smb-share]
type = smb
host = XXX
user = XXX
pass = XXX
### Double check the config for sensitive info before posting publicly

A log from the command that you were trying to run with the -vv flag

The log is too big to fit in pastebin, I will provide one as soon as I can.

Lower the number of transfers and checkers. 1 and 1 is probably fine for large files; you can try to increase it a bit. It is a mechanical drive; it won't magically read multiple files in parallel :) If you try, it will most likely be slower. As for many small files... it will be painfully slow. Not much you can do about it.
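For a single spinning disk, that advice boils down to something like the sketch below (not a tested recipe; `$src` and `$dest` stand in for the poster's actual remotes):

```shell
# Sketch only: serialize I/O for a single 5400 RPM HDD by using
# one transfer and one checker instead of eight of each.
rclone sync "$src" "$dest" \
  --transfers=1 \
  --checkers=1 \
  --progress
```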


as compared to what?
rsync --checksum --whole-file

have you tested a native mount, not using rclone smb remote?


kapitainsky
Lower the number of transfers and checkers. 1 and 1 is probably fine for large files; you can try to increase it a bit. It is a mechanical drive; it won't magically read multiple files in parallel :) If you try, it will most likely be slower. As for many small files... it will be painfully slow. Not much you can do about it.

Thank you very much for the advice! It makes sense. ( :

I will give it a try with 1 and 1, and maaaybe try it with 2 and 2.

It is mechanical drive. It won’t magically read multiple files in parallel:)

Makes total sense. :smiley:

As for many small files…. It will be painfully slow. Not much you can do about it.

Yeah, a lower number of transfers and checkers should give good performance for large files, which make up about half of the data set; the rest is unfortunately smaller files...

Will give those a try as soon as I get a chance and will report back. :slight_smile:

Thank you for your help, I appreciate it. ( :

Not compared to anything really; I just want to make it as quick as possible without sacrificing data integrity.

have you tested a native mount, not using rclone smb remote?

Yup, with a native mount it seems not to respect the --bwlimit flag, so I can't throttle it if I need to, whereas with the SMB remote it works fine.
Speed-wise I did not notice any difference.

using --checksum with sync is most likely the cause of the slowdown.
to decide which files need to be copied, most users use the default: modtime and size.

and you really should double-check the flags: figure out what they do and whether you actually need them.

--fast-list does nothing on local
--buffer-size="16M" - why use that?


the flags listed below are for the deprecated cache remote, which you are not using, and they have been removed from the website documentation.

--cache-chunk-total-size="8G" \
--cache-tmp-upload-path="/home/usertemp/rclone_cache_temp/tmp_upload" \
--cache-chunk-path="/home/usertemp/rclone_cache_temp/chunks" \
--cache-info-age="1h"
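Putting that advice together, the original sync could be trimmed to something like this sketch: the deprecated cache flags, --fast-list, and --checksum are dropped, so comparison falls back to the default modtime + size (`$src`, `$dest`, and `$LOG_FILE` are the poster's own variables):

```shell
# Sketch: the same sync with deprecated and no-op flags removed,
# relying on the default modtime + size comparison.
rclone sync "$src" "$dest" \
  --transfers=1 \
  --checkers=1 \
  --bwlimit=192M \
  --copy-links \
  --log-level=DEBUG \
  --log-file="$LOG_FILE" \
  --progress
```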

Thanks I appreciate your reply! :slight_smile:

That makes sense, I will use --checksum only for the rclone check, to ensure data integrity.


I am currently using rclone with an SMB source and a local destination; doesn't --fast-list take effect in this case?

--buffer-size="16M" - This is the default value as far as I know, I tried to play around with as much as "32M", but did not see a notable difference.

Do you have any advice as to what value I could try in my case?


I asked ChatGPT for a tip, and this is one of the things it came up with...
Actual Intelligence is still better than Artificial Intelligence! :smiley:

Are there any modern alternatives to these flags which could help in my case?

the hard limit is the mechanical 5400 RPM HDD.
fwiw, just use the defaults and remove most of the flags.

no. and even if it were supported, it would not make a practical difference with your setup.

rclone backend features local: | grep "ListR"
                "ListR": false,

rclone backend features smb: | grep "ListR"
                "ListR": false

rclone backend features aws: | grep "ListR"
                "ListR": true,

--checksum will not work: smb remotes do not support checksums.
the workaround is --download.
a debug log would show that.
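As a sketch, the check with that workaround could look like this (`$src` and `$dest` are the poster's remotes; --checksum is dropped since smb exposes no hashes):

```shell
# Sketch: since the smb backend has no hash support, --download makes
# rclone check read both sides and compare the actual file contents.
rclone check "$src" "$dest" \
  --download \
  --one-way \
  --progress
```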


@asdffdsa, thank you once again for your useful reply!

How does the --download flag work? Does it download the files to the OS disk and then run a checksum, do I need to add the --checksum flag after it, or is rclone check --download sufficient?

Would --sparse help in my case?

it reads the entire file in chunks into memory; it does not actually save the file.

yes.

i do not think so.


Awesome, thank you!

I will review the whole topic and all the replies, give the suggestions a try as soon as I get some time, and report back. ( :

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.