What is the problem you are having with rclone?
I am trying to use rclone's sftp backend on a local network to transfer files. Rclone's sftp implementation is substantially slower than a plain sshfs mount or the file manager's built-in sftp.
Run the command 'rclone version' and share the full output of the command.
rclone v1.59.1
Which cloud storage system are you using? (eg Google Drive)
sftp
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone mount remote: /local/mount
The rclone config contents with secrets removed.
[remote]
user = username
type = sftp
host = localhost
shell_type = unix
md5sum_command = md5sum
sha1sum_command = sha1sum
A log from the command with the -vv flag
2023/05/08 16:11:56 DEBUG : testfile.bin: Attr:
2023/05/08 16:11:56 DEBUG : testfile.bin: >Attr: a=valid=1s ino=0 size=965738496 mode=-rw-rw-r--, err=<nil>
2023/05/08 16:11:56 DEBUG : testfile.bin: Open: flags=OpenReadOnly
2023/05/08 16:11:56 DEBUG : testfile.bin: Open: flags=O_RDONLY
2023/05/08 16:11:56 DEBUG : testfile.bin: >Open: fd=testfile.bin (r), err=<nil>
2023/05/08 16:11:56 DEBUG : testfile.bin: >Open: fh=&{testfile.bin (r)}, err=<nil>
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: Read: len=131072, offset=0
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: Read: len=131072, offset=131072
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.openRange at 0 length 134217728
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.Read at 0 length 4096 chunkOffset 0 chunkSize 134217728
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.Read at 4096 length 8192 chunkOffset 0 chunkSize 134217728
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.Read at 12288 length 16384 chunkOffset 0 chunkSize 134217728
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.Read at 28672 length 32768 chunkOffset 0 chunkSize 134217728
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.Read at 61440 length 65536 chunkOffset 0 chunkSize 134217728
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.Read at 126976 length 131072 chunkOffset 0 chunkSize 134217728
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.Read at 258048 length 262144 chunkOffset 0 chunkSize 134217728
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: >Read: read=131072, err=<nil>
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.Read at 520192 length 524288 chunkOffset 0 chunkSize 134217728
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: >Read: read=131072, err=<nil>
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: Read: len=131072, offset=262144
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: Read: len=131072, offset=393216
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: >Read: read=131072, err=<nil>
2023/05/08 16:11:56 DEBUG : testfile.bin: ChunkedReader.Read at 1044480 length 1048576 chunkOffset 0 chunkSize 134217728
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: >Read: read=131072, err=<nil>
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: Read: len=131072, offset=655360
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: Read: len=131072, offset=524288
2023/05/08 16:11:56 DEBUG : testfile.bin: waiting for in-sequence read to 655360 for 20ms
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: >Read: read=131072, err=<nil>
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: >Read: read=131072, err=<nil>
2023/05/08 16:11:56 DEBUG : &{testfile.bin (r)}: Read: len=131072, offset=917504
2023/05/08 16:11:56 DEBUG : testfile.bin: waiting for in-sequence read to 917504 for 20ms
...
2023/05/08 16:12:41 DEBUG : testfile.bin: waiting for in-sequence read to 965607424 for 20ms
2023/05/08 16:12:41 DEBUG : &{testfile.bin (r)}: >Read: read=131072, err=<nil>
2023/05/08 16:12:41 DEBUG : &{testfile.bin (r)}: >Read: read=131072, err=<nil>
2023/05/08 16:12:41 DEBUG : &{testfile.bin (r)}: Flush:
2023/05/08 16:12:41 DEBUG : &{testfile.bin (r)}: >Flush: err=<nil>
2023/05/08 16:12:41 DEBUG : &{testfile.bin (r)}: Release:
2023/05/08 16:12:41 DEBUG : testfile.bin: ReadFileHandle.Release closing
2023/05/08 16:12:41 DEBUG : &{testfile.bin (r)}: >Release: err=<nil>
2023/05/08 16:12:41 DEBUG : /: Lookup: name="testfile.bin"
2023/05/08 16:12:41 DEBUG : /: >Lookup: node=testfile.bin, err=<nil>
2023/05/08 16:12:41 DEBUG : testfile.bin: Attr:
2023/05/08 16:12:41 DEBUG : testfile.bin: >Attr: a=valid=1s ino=0 size=965738496 mode=-rw-rw-r--, err=<nil>
2023/05/08 16:13:38 DEBUG : sftp://user@x.x.x.x:port/: Closing 2 unused connections
Test setup
I mounted the same remote directory locally with both rclone sftp and sshfs:
- Mount the rclone sftp directory with the above settings.
- Mount an sshfs directory with the following settings:
sshfs -o rw,follow_symlinks,reconnect,ServerAliveInterval=5,compression=no user@remote: /mount/sshfs
- Use Dolphin to copy a 1 GB file from the rclone sftp mount and record the time.
- Repeat the same copy from the sshfs mount and record the time.
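The copies were timed with Dolphin; an equivalent command-line measurement, with the page cache dropped first so cached data doesn't skew either run, would look roughly like this (the mount-point paths are placeholders):

```shell
# Drop the page cache so neither run benefits from previously cached data
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
# Time a sequential read of the same file from each mount
time dd if=/mount/rclone/testfile.bin of=/dev/null bs=1M
time dd if=/mount/sshfs/testfile.bin of=/dev/null bs=1M
```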
Test Results
- rclone sftp copies the file in 44 seconds (around 175 Mbit/s)
- sshfs copies the file in 30 seconds (around 257 Mbit/s)
sshfs is roughly 1.5× as fast as rclone and I can't figure out why. I have tried adjusting many rclone settings, adding flags with varying values, and this has only made a marginal difference.
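For reference, the throughput figures follow directly from the file size shown in the log (965738496 bytes) and the wall-clock times:

```shell
# Throughput in Mbit/s = bytes * 8 / seconds / 1e6 (integer arithmetic)
echo "rclone: $(( 965738496 * 8 / 44 / 1000000 )) Mbit/s"   # prints "rclone: 175 Mbit/s"
echo "sshfs:  $(( 965738496 * 8 / 30 / 1000000 )) Mbit/s"   # prints "sshfs:  257 Mbit/s"
```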
Different trial settings:
Here is a list of the settings I have tried to adjust:
- Raise transfers on the mount (--transfers=50)
- Raise transfers further (--transfers=200)
- Raise transfers to an extreme value (--transfers=2000)
- Disable checksums (--no-checksum)
- Remove the md5sum_command = md5sum and sha1sum_command = sha1sum lines from the remote config
- Add the chunk size flag and try various values from large to small (--sftp-chunk-size)
- Add the per-file bandwidth limit flag and try various values from large to small (--bwlimit-file)
- Try all four values of the VFS cache (--vfs-cache-mode off/minimal/writes/full)
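For concreteness, here is a sketch of how these flags combine on a single mount command line (the values shown are illustrative examples, not settings I settled on; remote name and mount path are placeholders):

```shell
# Example combination of the trial flags on one mount invocation
rclone mount remote: /local/mount \
  --sftp-chunk-size 255k \
  --transfers 50 \
  --no-checksum \
  --bwlimit-file 100M \
  --vfs-cache-mode off \
  -vv
```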
I based these trials on the following thread discussions:
- SFTP 80% slower (on 16ms ping) - #4 by ncw
- Slow SFTP transfers compared to alternatives - #12 by justusiv
- SFTP and ability to open multiple SSH connections vs multi transfer with 1 SSH connection · Issue #1561 · rclone/rclone · GitHub
- Using rclone to cache NAS files to work from local SSD instead of network: SMB or SFTP or something else? - #4 by asdffdsa
- SFTP/S3 Performance Tuning - #3 by alan113696
The results of adjusting all of these settings are disappointing. I see only a minor variation, from 43 to 49 seconds (170–190 Mbit/s on average), to transfer the 1 GB file, nothing approaching the plain sshfs transfer of 30 seconds (around 257 Mbit/s).
This doesn't really make sense to me conceptually. Isn't sshfs the more complex, heavier layer? Nothing I have tried gets rclone's implementation of sftp to approach the speed of the simple sshfs mount.
Final test
As a final test I set up a remote using Dolphin's sftp remote template. This also uses sftp, and it eliminates the FUSE mount component from the test. The result was a transfer of 32 seconds, similar to sshfs. This suggests the issue is not sftp versus sshfs, but either rclone's implementation of sftp or a configuration problem with my remote or mount setup.
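For completeness, the FUSE layer could also be taken out of the rclone path by copying through the sftp backend directly, no mount involved (sketch; destination path is a placeholder):

```shell
# Read through the sftp backend with no FUSE mount in the path
rclone copy remote:testfile.bin /tmp/ --progress -vv
```

If this direct copy is as fast as sshfs, the mount/VFS layer is the bottleneck; if it is equally slow, the backend itself is.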
Versions of other software
Here is the sshfs version I am using:
Local:
SSHFS version 3.7.1
FUSE library version 3.10.5
using FUSE kernel interface version 7.31
fusermount3 version: 3.10.5
Local/Remote:
OpenSSH_8.9p1 Ubuntu-3, OpenSSL 3.0.2 15 Mar 2022
Analysis
I tried asking ChatGPT as well, but it did not produce anything of value. Any suggestions on what I can try or what I have overlooked? I would have expected the two solutions to be in the same ballpark. I really want to use rclone's sftp implementation, but I can't justify it being so much slower.