SFTP - Max transfers value vs machine parameters

What is the problem you are having with rclone?

I don't know why my transfers are so slow (~10 MB/s) with the default --transfers setting. When I change to --transfers=16 the speed goes up to 20-30 MB/s, and with --transfers=32 up to 50 MB/s. Can you advise what the best --transfers value would be, based on my source machine, to balance transfer speed against system load? What other parameters can I set to speed up file transfers?

I am copying about 100 files of 1-80 GB each.

Source machine:
CPU: 8 X AMD EPYC 9474F
RAM: 32GB
DISK: 3TB NVME
Network: Download 8Gb/s, Upload 7Gb/s

Destination machine:
CPU: 8 X Intel core i7-14700T
RAM: 8GB
DISK: 2TB SSD
Network: Download 750Mb/s, Upload 600Mb/s

Run the command 'rclone version' and share the full output of the command.

rclone v1.71.0

  • os/version: almalinux 9.6 (64 bit)
  • os/kernel: 5.14.0-570.46.1.el9_6.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.24.6 (Red Hat 1.24.6-1.el9_6)
  • go/linking: dynamic
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

SFTP

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy /srv/backups/ SFTP:backups/`date +"%Y-%m-%d"`/"$HOSTIP" --transfers=16 --log-file="$LOGFILE" --log-level=INFO
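As a side note, the same command written with `$(...)` command substitution survives copy-paste into forum posts better than backticks, and nests more safely (remote name and variables are the ones from this thread):

```shell
# Equivalent command using $(...) instead of backticks for the date path
rclone copy /srv/backups/ SFTP:backups/$(date +"%Y-%m-%d")/"$HOSTIP" \
  --transfers=16 --log-file="$LOGFILE" --log-level=INFO
```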

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[SFTP]
type = sftp
host = XXX
user = XXX
port = XXX
pass = XXX
key_file_pass = XXX
shell_type = unix
md5sum_command = none
sha1sum_command = none

welcome to the forum,

what other sftp copy tools have you tested?


only you can answer that by testing different values for --transfers


tweak --checkers

Some days rclone with the default --transfers got about 40-50 MB/s and the transfer was fine. One more thing: rclone uses 11.8% CPU and 1.2% memory with --transfers=16.

sorry, not sure what you need help with?

I need advice on what --transfers value I should set to avoid overloading the CPU. Is there any formula relating --transfers to the machine's specifications?

What else can I configure to optimize transfer speeds?

no.


what do you mean by optimize?


and asking again?
what other sftp copy tools have you tested, what are the results as compared to rclone??

@asdffdsa I used rsync and syncthing and achieved transfer speeds ~ 40-50 MB/s.

Given my server's specs, what do you think the --transfers parameter should be set to? Would --transfers=32 be right?

rclone is about the same as rsync and syncthing.


you have to do basic testing, then you will know....
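A minimal way to do that testing (a sketch only - the remote name, paths, and test set below are placeholders; use a fixed test directory so the runs are comparable):

```shell
#!/bin/sh
# Time the same copy with different --transfers values; purge the
# destination between runs so each test starts from scratch.
for t in 4 8 16 32; do
  rclone purge "SFTP:bench" 2>/dev/null
  echo "--transfers=$t"
  time rclone copy /srv/backups/testset "SFTP:bench" --transfers="$t"
done
```

Watch CPU and load average alongside the timings, then pick the largest value that doesn't push the machine past what you're comfortable with.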

When the number of files left to copy drops below --transfers=16, the transfer speed decreases. Why is the per-file transfer speed so small?

Transferred:      531.789 GiB / 749.524 GiB, 71%, 11.824 MiB/s, ETA 5h14m16s
Checks:                 0 / 0, -, Listed 389
Transferred:          327 / 336, 97%
Elapsed time:    7h33m0.0s
Transferring:
 *          data/user.test.bron.tar.zst: 73% /47.918Gi, 1.305Mi/s, 2h46m36s
 *          data/user.test.zajc.tar.zst: 65% /52.112Gi, 1.307Mi/s, 3h52m15s
 *          data/user.test.pier.tar.zst: 44% /58.963Gi, 1.302Mi/s, 7h10m44s
 *          data/user.test.mar.tar.zst: 63% /38.104Gi, 1.311Mi/s, 2h59m16s
 *          data/user.test.uls.tar.zst: 43% /45.982Gi, 1.287Mi/s, 5h41m24s
 *          data/user.test.wal.tar.zst: 85% /22.023Gi, 1.318Mi/s, 41m48s
 *          data/user.test.gru.tar.zst: 56% /29.700Gi, 1.330Mi/s, 2h45m49s
 *          data/user.test.opt.tar.zst: 23% /62.168Gi, 1.359Mi/s, 9h57m9s
 *          data/user.test.turo.tar.zst: 20% /63.948Gi, 1.330Mi/s, 10h56m1s

Because most likely your internet connection has high latency (lag). Looking at your specs it is probably a so-called Long Fat Network :) And by its nature the SFTP protocol slows down very quickly as latency increases. Google the details if you are interested, e.g. some random link - Network Latency and SFTP – Jadaptive Limited

A 10 Gbit internet link with high latency can be much slower than a 1 Gbit link with low latency - for a single SFTP session transfer.

You either improve your network latency (often impossible) or use multiple parallel connections.

Also try increasing the chunk size - check the rclone docs and your SFTP server docs for details.
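If you want to stay on SFTP, the backend has its own tuning knobs. A hedged sketch - the values below are starting points to experiment with, not recommendations, and not every server accepts large chunk sizes (see the rclone SFTP backend docs):

```shell
# Larger SFTP chunks and more outstanding requests per connection can
# help on high-latency links; test what your server actually supports.
rclone copy /srv/backups/ SFTP:backups/ \
  --transfers=16 \
  --sftp-chunk-size 255k \
  --sftp-concurrency 64
```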

If your priority is speed then choose a different protocol. For example, S3 performs much better on high-latency links - it can massively parallelise transfers even for a single file (multipart uploads).

I changed from SFTP to S3 and now the transfer speed is correct. What do you think about the parameters I set? Are they correct?

test@cloudtest:~$ sudo rclone copy /srv/backups/ S3:backups/`date +"%Y-%m-%d"`/"XXX.XXX.XXX.XXX" --progress  --transfers=8 --checkers=16 --s3-chunk-size=128M --s3-upload-concurrency=8 --fast-list
Transferred:        5.052 GiB / 755.211 GiB, 1%, 50.631 MiB/s, ETA 4h12m51s
Checks:               211 / 211, 100%, Listed 600
Transferred:            0 / 125, 0%
Elapsed time:      2m25.0s
Transferring:
 *                    data/admin.test.test.tar.zst:  2% /16.962Gi, 6.339Mi/s, 44m34s
 *                 data/user.test.agam.tar.zst:  7% /8.361Gi, 6.416Mi/s, 20m38s
 *                 data/user.test.agn.tar.zst:  2% /16.656Gi, 6.363Mi/s, 43m35s
 *                 data/user.test.tru.tar.zst:  4% /11.085Gi, 6.530Mi/s, 27m32s
 *                 data/user.test.shpa.tar.zst: 63% /1.221Gi, 6.457Mi/s, 1m11s
 *                 data/user.test.biln.tar.zst: 67% /1.171Gi, 6.390Mi/s, 1m1s
 *                data/user.test.serw.tar.zst: 73% /1.065Gi, 6.384Mi/s, 44s
 *                 data/user.test.kubi.tar.zst: 79% /957.188Mi, 5.655Mi/s, 34s

it looks good, as 50.631 MiB/s is close to the maximum upload of 600Mb/s
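For context, a quick unit conversion (a sanity check using the 600 Mbit/s figure from this thread):

```shell
# 600 Mbit/s expressed in MiB/s:
# 600 * 1000 * 1000 bits/s, / 8 bits per byte, / 1048576 bytes per MiB
echo $((600 * 1000 * 1000 / 8 / 1048576))   # → 71
```

So 50.6 MiB/s is roughly 70% of the theoretical line rate, which is plausible once protocol and disk overhead are accounted for.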


I don't know why the transfer speed isn't stable. At peak I get 60 MB/s, but then the speed suddenly drops to 3 MB/s.

use a rclone debug log

# Rclone copy debug – low transfer speed to MinIO

2025/10/29 13:30:05 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:               336 / 336, 100%, Listed 726
Transferring:
 *          data/user.test.test12.tar.zst: transferring

# Multipart upload started, using multi-threaded upload
2025/10/29 13:30:49 DEBUG : data/user.test.test12.tar.zst: open chunk writer: started multipart upload
2025/10/29 13:30:49 DEBUG : multi-thread copy: using backend concurrency of 8
2025/10/29 13:30:49 DEBUG : multi-thread copy: starting 8 parallel streams with 238 chunks of 128Mi

# Example chunks and Seek – shows repeated reads
2025/10/29 13:30:50 DEBUG : Seek from 134217728 to 0
2025/10/29 13:30:50 DEBUG : Seek from 134217728 to 0
2025/10/29 13:31:04 DEBUG : multipart upload wrote chunk 4 with 134217728 bytes
2025/10/29 13:31:04 DEBUG : multi-thread copy: chunk 4/238 finished
2025/10/29 13:31:04 DEBUG : multi-thread copy: chunk 9/238 starting

# Transfer speed drop
2025/10/29 13:31:05 INFO  :
Transferred:        1.006 GiB / 29.729 GiB, 3%, 40.011 MiB/s, ETA 12m15s
Transferring:
 *          data/user.test.test12.tar.zst:  3% /29.729Gi, 68.266Mi/s, 7m10s

2025/10/29 13:32:05 INFO  :
Transferred:        1.130 GiB / 29.729 GiB, 4%, 1.966 MiB/s, ETA 4h8m12s
Transferring:
 *          data/user.test.test12.tar.zst:  3% /29.729Gi, 1.966Mi/s, 4h8m12s

# Further DEBUG entries – continuing multipart upload and repeated Seek
2025/10/29 13:32:06 DEBUG : multipart upload wrote chunk 6 with 134217728 bytes
2025/10/29 13:32:06 DEBUG : multi-thread copy: chunk 6/238 finished
2025/10/29 13:32:06 DEBUG : multipart upload wrote chunk 7 with 134217728 bytes
2025/10/29 13:32:06 DEBUG : multi-thread copy: chunk 7/238 finished
2025/10/29 13:32:07 DEBUG : Seek from 134217728 to 0
2025/10/29 13:32:07 DEBUG : Seek from 134217728 to 0
2025/10/29 13:32:08 DEBUG : multipart upload wrote chunk 9 with 134217728 bytes
2025/10/29 13:32:08 DEBUG : multi-thread copy: chunk 9/238 finished

post the summary text after the command completes.

I adjusted the rclone parameters and now the transfer speed is stable.

--transfers=8 --checkers=16 --s3-chunk-size=256M --s3-upload-concurrency=8 --fast-list
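One thing to keep in mind with these values: the rclone S3 docs warn that each in-flight multipart chunk is buffered in memory, so a rough upper bound on buffer usage is transfers × upload concurrency × chunk size:

```shell
# 8 transfers * 8 concurrent chunks * 256 MiB per chunk
echo $((8 * 8 * 256))   # → 16384 (MiB, i.e. up to ~16 GiB of buffers)
```

That fits comfortably in the 32 GB source machine here, but on a smaller host it would be worth lowering --s3-chunk-size or --s3-upload-concurrency.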