Help with best method to move files from server to local?

What is the problem you are having with rclone?

I am trying to move large files (several hundred megabytes to several gigabytes) from a remote server to my local machine. The local machine is on a gigabit network; the remote server is on a 100 gigabit network. I would like to use rclone to do this, but I need assistance with the right method/configuration. I am able to saturate my network connection with rsync if I run 4-5 separate instances of it, but I am looking for something easier to use.
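For reference, the multi-instance rsync approach can be scripted roughly like this (a sketch, assuming one rsync process per file and that xx is also an ssh host alias - not my exact commands):

$ ssh xx ls /home/xx/files | xargs -P4 -I{} rsync -a xx:/home/xx/files/{} ~/test/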

Run the command 'rclone version' and share the full output of the command.

$ rclone version
rclone v1.62.2
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 6.2.11-2-pve (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.2
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Remote server.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I have tried a few different connections; these are the results. I should be able to hit >100 MB/s reliably but can't get the configuration quite right:

$ rclone move -Pv --transfers 8 --sftp-chunk-size 252K --sftp-concurrency=64 xx:/home/xx/files/ ~/test/
Transferred:   	  505.723 MiB / 9.779 GiB, 5%, 35.421 MiB/s, ETA 4m28s
Checks:                 8 / 16, 50%
Transferred:            0 / 17, 0%
Elapsed time:        16.2s
Checking:

Transferring:
 * xx:  6% /667.289Mi, 2.918Mi/s, 3m34s
 * xx:  7% /715.390Mi, 3.692Mi/s, 2m59s
 * xx:  9% /544.178Mi, 3.568Mi/s, 2m18s
 * xx:  5% /481.266Mi, 1.934Mi/s, 3m54s
 * xx: 12% /543.059Mi, 4.623Mi/s, 1m43s
 * xx: 15% /510.189Mi, 5.590Mi/s, 1m16s
 * xx: 18% /500.551Mi, 6.539Mi/s, 1m2s
 * xx: 14% /656.244Mi, 6.557Mi/s, 1m25s
$ rclone copy -Pv --transfers 8 xx-new:/files ~/test/
Transferred:   	   54.870 MiB / 9.779 GiB, 1%, 12.602 MiB/s, ETA 13m10s
Transferred:            0 / 17, 0%
Elapsed time:         5.4s
Transferring:
 * xx:  8% /644.473Mi, 12.602Mi/s, 46s
 * xx:  0% /484.078Mi, 0/s, -
 * xx:  0% /498.227Mi, 0/s, -
 * xx:  0% /737.565Mi, 0/s, -
 * xx:  0% /822.344Mi, 0/s, -
 * xx:  0% /512.079Mi, 0/s, -
 * xx:  0% /621.024Mi, 0/s, -
 * xx:  0% /505.861Mi, 0/s, -

The rclone config contents with secrets removed.

$ rclone config
Current remotes:

Name                 Type
====                 ====
xx              sftp
xx-new          ftp

A log from the command with the -vv flag

The log has a lot of PII to strip out, let me know if you need this.

Hello and welcome to the forum,

I found that tweaking these flags makes a difference with sftp (see the combined example after the list):

  • decrease --multi-thread-cutoff
  • increase --multi-thread-streams
  • increase --sftp-concurrency
  • check out --sftp-disable-hashcheck
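For example, combined into one command (a sketch - the flag values are starting points to tune, not measured optima; the remote and paths are from the original post):

$ rclone move -Pv --transfers 8 --multi-thread-cutoff 64M --multi-thread-streams 8 --sftp-concurrency 128 --sftp-disable-hashcheck xx:/home/xx/files/ ~/test/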

SFTP operates on chunks (32 KiB by default) and is a request/response protocol.
You can try increasing the chunk size (plus all the tweaks mentioned earlier) - subject to server support - but if your connection does not have near-perfect latency, then rsync is indeed a much better option.
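As a rough illustration of why latency dominates (the 50 ms round-trip time below is an assumed figure for the example, not something measured in this thread) - each connection can have at most chunk size x concurrency bytes in flight, so:

throughput per connection ~ chunk size x concurrency / round-trip time
                          = 32 KiB x 64 / 0.05 s
                          = 2 MiB / 0.05 s
                          = 40 MiB/s

Bigger chunks, more in-flight requests, or lower latency all raise that ceiling.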

The fact that, for really big files (relative to your gigabit network), you need multiple rsync instances to saturate your connection points to a high-latency link.

SFTP is a poor protocol for high latency links unfortunately.

What is the ping time from you to the server?

It looks like you've spent some time adjusting your command line for high latency links which is good.

rclone move -Pv --transfers 8 --sftp-chunk-size 252K --sftp-concurrency=64 xx:/home/xx/files/ ~/test/

You can increase the concurrency more - that should help. Try 256.

I'd also suggest increasing --transfers, assuming there are enough files to transfer. This should give you a linear speedup until you run out of CPU on the server or the client.
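Putting both suggestions together, something along these lines (a sketch - 16 and 256 are starting points to experiment with, not measured optima):

$ rclone move -Pv --transfers 16 --sftp-chunk-size 252K --sftp-concurrency 256 xx:/home/xx/files/ ~/test/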

If running something on the remote server is an option, you could run rclone serve webdav and use the webdav backend, which doesn't have the latency problem.
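For example (a sketch - the port, credentials, and the xx-webdav remote name are placeholders to adapt; the served path is from the original post). On the server:

$ rclone serve webdav /home/xx/files --addr :8080 --user xx --pass secret

Then on the local machine, configure a webdav remote (here called xx-webdav) pointing at http://<server>:8080 with those credentials, and:

$ rclone copy -Pv --transfers 8 xx-webdav: ~/test/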

It might also be worth giving the latest beta a try.
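If you want to try it, the install script on rclone.org can fetch the beta:

$ sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta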
