My config is crypt -> sshfs and it seemingly works, but performance degrades over time and I get lots of intermittent high CPU usage with rclone.
I ran rclone in non-daemon mode and I am seeing lots of the following error:
vfs cache: failed to upload try #8, will retry in 5m0s: vfs cache: failed to transfer file from cache to remote: Update Create failed: sftp: "Bad message" (SSH_FX_BAD_MESSAGE)
According to ChatGPT, lots of things can cause this error, such as the characters in the filename as well as the length of the filename.
Whilst this (hopefully) could be addressed, it isn't my main concern. If I have 10,000 files which I am trying to rename that do not fit into a supported naming convention, in theory could this not back up 10,000 retries, which could easily spike the CPU and memory every so often (5m?) as the cache flush is attempted? In a slightly worse situation, it will never resolve itself either. I also wonder if the flushing could be offset with subsequent moves (before a daemon restart) at a different time.
It appears to me that if SSHFS fails, then the file will forever be in limbo in the VFS cache, will never succeed, and could result in lost data.
What is the problem you are having with rclone?
Run the command 'rclone version' and share the full output of the command.
Which cloud storage system are you using? (eg Google Drive)
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Paste command here
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
[dellboy_alias_ssh]
type = alias
remote = hashing_ssh:/mnt/Four_TB_Array/encrypted/
[dellboy_encrypted_folder_ssh]
type = crypt
remote = dellboy_alias_ssh:
password = XXX
password2 = XXX
[dellboy_sshfs]
type = sftp
host = XXX
pass = XXX
shell_type = unix
md5sum_command = md5sum
sha1sum_command = sha1sum
[hashing_ssh]
type = hasher
remote = dellboy_sshfs:
hashes = sha1
A log from the command that you were trying to run with the -vv flag
Hasher has nothing to do with the issue as far as I am aware. It seems to me as if it's the limitations of SSHFS not being enforced by rclone, in combination with the VFS allowing the files to be cached instead of reporting that the move is not possible.
I also tried renaming the file in the VFS directory, as I assumed it would move the file if the length was shorter, but it appeared the VFS component deleted the file instead of transferring it.
To be fair, I don't entirely know what the full function of the VFS cache folder is. Does it store files that are to be moved, or is it purely a local copy of something that has already happened? If the VFS cache mode is set to full, does the VFS cache store files that are going to be moved/modified?
What if I have --vfs-cache-mode=full and I create a file on a remote, backed by an sshfs server, with a filename that is 500 characters long, which the server will never accept but my local machine will? What happens to the file?
I have watched it retrying in a loop. But in my opinion it shouldn't. It should never accept the file, as it will never get written. Ever. Which will result in the person thinking that the file has been written, but it hasn't, until such time as the VFS cache is deleted or forgotten about, along with the missing/lost data.
There are a million SFTP servers out there, and what each one accepts or rejects would be quite tough to maintain. Rclone wouldn't know until it tries to write the file to that particular SFTP backend.
That's why, for any critical stuff, you'd really want to watch logs regardless. No data would be lost unless someone randomly blew away the cache without checking. That's the case for all files in there that have not been written yet.
There's no log file, so it's tough to guess what to advise, as it's unknown what the specific failure is or why.
I have file encryption turned on too, which may affect the filename length.
I was very surprised that when I made the filename shorter in the VFS cache, I hoped it would transfer successfully, but it just deleted it. Does the VFS cache engine track filenames and delete things that it has no record of successfully transferring?
You are using the default base32 (5 bits per character) name encoding for your crypt remote. It is the least efficient encoding when it comes to file name length: a 100-character file name requires ≈160 characters when encrypted. If your remote allows case-sensitive file names then use base64 (6 bits per character).
And the real game changer is base32768 (15 bits per character), but you should test first whether your remote can use it:
rclone test info --check-length --check-base32768 dellboy_sshfs:test_dir
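The encoding overhead above can be sketched with some quick arithmetic. This is a rough estimate only: it just divides the name's bits by the bits each encoded character carries, and ignores crypt's per-name encryption padding, so real encrypted names can come out a bit longer.

```python
import math

def encoded_len(n_bytes: int, bits_per_char: int) -> int:
    """Characters needed to encode n_bytes of file name at
    bits_per_char bits per output character (rounded up)."""
    return math.ceil(n_bytes * 8 / bits_per_char)

print(encoded_len(100, 5))   # base32    -> 160
print(encoded_len(100, 6))   # base64    -> 134
print(encoded_len(100, 15))  # base32768 -> 54
```

This matches the ≈160 figure above for base32 and shows why the denser encodings help so much.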
Thanks for taking the time to respond. I did see somewhere that you can modify the name encoding in situ. Do you know where it is detailed how to do this please ?
You simply create another crypt remote config (with the new names encoding) with the same password(s) but a different base directory, and move/copy files using the --server-side-across-configs flag.
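A minimal sketch of what that second remote could look like, assuming the crypt backend's `filename_encoding` option and reusing the config shown earlier in the thread; the remote name and the `encrypted_b64` base directory here are made up for illustration, and the passwords must match the original crypt remote:

```ini
[dellboy_encrypted_folder_ssh_b64]
type = crypt
remote = hashing_ssh:/mnt/Four_TB_Array/encrypted_b64/
password = XXX
password2 = XXX
filename_encoding = base64
```

Then something along the lines of `rclone move --server-side-across-configs dellboy_encrypted_folder_ssh: dellboy_encrypted_folder_ssh_b64:` would re-encode the names, server-side where the backend supports it.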
Not ideal for base32768, but still worth considering: 127 * 15 bits / 8 bits = 238 one-byte characters. Better than base64 (255 * 6 / 8 = 191).
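The arithmetic above can be checked directly, under the post's stated assumptions: the remote allows 255 single-byte characters per name (hence 127 base32768 characters, since each one counts as two units there), and each encoded character carries the stated number of bits.

```python
def max_plain_bytes(max_chars: int, bits_per_char: int) -> int:
    """Plaintext name bytes that fit, given how many encoded characters
    the remote allows and how many bits each character carries."""
    return max_chars * bits_per_char // 8

print(max_plain_bytes(255, 6))   # base64    -> 191
print(max_plain_bytes(127, 15))  # base32768 -> 238
```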
The key is that all required characters are supported.