I occasionally get corrupt reads when using rclone mount with SFTP. A second issue is that, by default, rclone really struggles to saturate even 1 Gbps.
Run the command 'rclone version' and share the full output of the command
rclone v1.71.1
os/version: Microsoft Windows 11 Pro 24H2 (64 bit)
os/kernel: 10.0.26100.6725 (x86_64)
os/type: windows
os/arch: amd64
go/version: go1.25.1
go/linking: static
go/tags: cmount
Which cloud storage system are you using? (eg Google Drive)
sftp
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone mount --vfs-cache-mode=writes --vfs-cache-min-free-space=1G --vfs-cache-max-size=40G remote:'/share' '\\remote\share'
The issue also occasionally occurs (or persists?) with --vfs-cache-mode=off.
The rclone config contents with secrets removed
[remote]
type = sftp
host = host.example.com
user = example
key_use_agent = true
shell_type = unix
md5sum_command = md5sum
sha1sum_command = sha1sum
A log from the command with the -vv flag
Unfortunately it occurs so infrequently that I haven't yet been able to capture a -vv log while the error happens. It's even difficult to notice the corruption in the first place.
Additional information
After messing around with deleting the cache from local disk, killing anything that might have been using the mount, and passing --vfs-cache-max-age=1s --vfs-cache-poll-interval=1s --vfs-refresh --cache-db-purge to rclone, I at least managed to read the previously corrupt file properly. But I am unsure which flag finally did the trick, and it doesn't inspire confidence that there are no other undetected corruptions. Is there a sure way to disable all read caching?
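For reference, the flush attempt looked roughly like this (reconstructed from memory, so treat it as approximate; as far as I can tell --cache-db-purge belongs to the deprecated cache backend, so it may have been a no-op here):

rclone mount remote:'/share' '\\remote\share' --vfs-cache-mode=writes --vfs-cache-max-age=1s --vfs-cache-poll-interval=1s --vfs-refresh --cache-db-purge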
I also noticed that --vfs-cache-min-free-space=1G --vfs-cache-max-size=30G does not actually constrain the write cache size when the limits are reached (which would effectively slow writes down to remote speed); the local disk can still fill up completely. Is there a way to ensure that caching writes cannot fill up the local disk, throttling writes instead? (One partial workaround is sketched below.)
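One workaround I might try (untested; the path below is just an example): point the VFS cache at a dedicated volume with --cache-dir, so that even if eviction lags behind, only that volume can fill up rather than the system disk:

# D:\rclone-cache is a hypothetical location on a separate volume
rclone mount remote:'/share' '\\remote\share' --vfs-cache-mode=writes --cache-dir=D:\rclone-cache --vfs-cache-max-size=30G --vfs-cache-min-free-space=1G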
Lastly, to try to increase read/write speeds to the remote, I used --vfs-read-chunk-streams=32 --checkers=16 --transfers=16 --use-server-modtime --sftp-chunk-size=254k. With those, transfer speeds peak at ~6 Gbps and sustained reads run at ~2 Gbps, which is still significantly below the remote's disk speed, and both ends are well under CPU and link saturation. Is there a recommended set of flags for achieving better transfer speeds over SFTP?
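For reference, the full mount invocation for those tests was roughly:

rclone mount remote:'/share' '\\remote\share' --vfs-cache-mode=writes --vfs-read-chunk-streams=32 --checkers=16 --transfers=16 --use-server-modtime --sftp-chunk-size=254k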
I'm keeping an eye on it, but I usually notice the errors much later. It doesn't happen every time or with every file.
Done, removed all of those.
Hmm. That's a bit unfortunate, as that means I probably can't tell rclone to throttle writes into the cache to slightly below the speed of the connection to the remote?
I haven't tried any other mount tools on Windows, nor do I think they should set the bar. Are there even any that let you mount SFTP?
repeat, use a debug log, look for issues, such as pacer, retry, retries, ERROR, timeout
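for example, something like this, then search the log afterwards (log path is just an example):

# C:\rclone\mount.log is just an example path
rclone mount remote:'/share' '\\remote\share' -vv --log-file=C:\rclone\mount.log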
sftp suffers from very small packet sizes and from latency, as well as from file transfer verification using hashes.
so test filezilla, curl, etc, and see what speeds you get.
rsync runs great on windows and is optimized for sftp.
well, the vfs file cache is not required for reading, streaming.
and in some cases, the cache is not needed for writing, uploading.
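for example, to stream reads with no file cache at all (the default mode, untested on your setup):

rclone mount remote:'/share' '\\remote\share' --vfs-cache-mode=off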
Can I increase the packet size beyond the SFTP 254k chunk somehow?
Latency should be in the microseconds though, as it’s a multi-gigabit link.
Will do, if they support key-based authentication.
Yes, that’s the plan.
If I could get the cache not to fill up the entire disk, I could write the -vv log to disk and inspect it later should I find a corruption. Until then I'll just have to keep an eye on the terminal output and hope I spot a corruption while the relevant lines are still in the scrollback buffer.