Rclone crypt silently corrupts files occasionally

What is the problem you are having with rclone?

It looks like there is rare corruption with the rclone crypt module. It happens occasionally, roughly once per 100 1GB files. The files are age-encrypted, zstd-compressed tar archives split into 1GB chunks. I mention this because, while it's unlikely to matter, it might give us some clues. To make sure it's the crypt module at fault, I ran 6 tests (3 per machine) and it happens on 2 different machines, both using SFTP. When I copy the files without the crypt module (to the same SFTP servers) and check the md5sums as shown below, the sums are correct every time. The problem occurs for different files each time I copy with the crypt module - so if I upload the same files twice, different files end up corrupted.

It would be nice if the files are not really corrupted, because before I realized there was a problem I was using the move command. If there turns out to be an easy way to fix the damaged files, it would be nice to know. If not, oh well, I will need to redo my work from the last couple of days - annoying, but not a huge issue.

When I run the copy command, it looks like it copies the files just fine.

[nikow@thinira delivery]$ rclone -v -P --order-by name --checksum copy /storage/offload/delivery/2024-09-20/ aleksandria-001-vault:delivery/2024-09-20/ 
2024/09/20 12:01:09 INFO : 2024-09-20.data.00001: Copied (new) 
2024/09/20 12:01:10 INFO : 2024-09-20.data.00003: Copied (new) 
2024/09/20 12:01:12 INFO : 2024-09-20.data.00002: Copied (new) 
2024/09/20 12:01:13 INFO : 2024-09-20.data.00000: Copied (new) 
2024/09/20 12:04:35 INFO : 2024-09-20.data.00005: Copied (new) 
2024/09/20 12:04:35 INFO : 2024-09-20.data.00004: Copied (new) 
2024/09/20 12:04:36 INFO : 2024-09-20.data.00007: Copied (new) 
2024/09/20 12:04:36 INFO : 2024-09-20.data.00006: Copied (new) 
2024/09/20 12:08:16 INFO : 2024-09-20.data.00009: Copied (new) 
2024/09/20 12:08:22 INFO : 2024-09-20.data.00011: Copied (new) 
2024/09/20 12:08:23 INFO : 2024-09-20.data.00008: Copied (new) 
2024/09/20 12:08:28 INFO : 2024-09-20.data.00010: Copied (new) 
2024/09/20 12:12:47 INFO : 2024-09-20.data.00012: Copied (new) 
2024/09/20 12:12:54 INFO : 2024-09-20.data.00014: Copied (new) 
2024/09/20 12:13:04 INFO : 2024-09-20.data.00015: Copied (new) 
2024/09/20 12:13:11 INFO : 2024-09-20.data.00013: Copied (new) 
2024/09/20 12:16:44 INFO : 2024-09-20.data.00016: Copied (new) 
2024/09/20 12:16:46 INFO : 2024-09-20.data.00017: Copied (new) 
2024/09/20 12:17:01 INFO : 2024-09-20.data.00019: Copied (new) 
2024/09/20 12:17:03 INFO : 2024-09-20.data.00018: Copied (new) 
2024/09/20 12:20:23 INFO : 2024-09-20.data.00021: Copied (new) 
2024/09/20 12:20:23 INFO : 2024-09-20.data.00020: Copied (new) 
2024/09/20 12:20:33 INFO : 2024-09-20.data.00022: Copied (new) 
2024/09/20 12:20:33 INFO : 2024-09-20.data.00023: Copied (new) 
2024/09/20 12:23:57 INFO : 2024-09-20.data.00024: Copied (new) 
2024/09/20 12:23:57 INFO : 2024-09-20.data.00025: Copied (new) 
2024/09/20 12:24:00 INFO : 2024-09-20.data.00027: Copied (new) 
2024/09/20 12:24:03 INFO : 2024-09-20.data.00026: Copied (new)
2024/09/20 12:27:25 INFO : 2024-09-20.data.00028: Copied (new) 
2024/09/20 12:27:27 INFO : 2024-09-20.data.00031: Copied (new)
2024/09/20 12:27:27 INFO : 2024-09-20.data.00029: Copied (new)
2024/09/20 12:27:27 INFO : 2024-09-20.data.00030: Copied (new)
2024/09/20 12:30:48 INFO : 2024-09-20.data.00032: Copied (new)
2024/09/20 12:30:48 INFO : 2024-09-20.data.00033: Copied (new)
2024/09/20 12:30:49 INFO : 2024-09-20.data.00034: Copied (new)
2024/09/20 12:30:50 INFO : 2024-09-20.data.00035: Copied (new)
2024/09/20 12:34:08 INFO : 2024-09-20.data.00036: Copied (new)
2024/09/20 12:34:12 INFO : 2024-09-20.data.00038: Copied (new)
2024/09/20 12:34:13 INFO : 2024-09-20.data.00037: Copied (new)
2024/09/20 12:34:14 INFO : 2024-09-20.data.00039: Copied (new)
2024/09/20 12:37:37 INFO : 2024-09-20.data.00040: Copied (new)
2024/09/20 12:37:37 INFO : 2024-09-20.data.00041: Copied (new)
2024/09/20 12:37:39 INFO : 2024-09-20.data.00043: Copied (new)
2024/09/20 12:37:39 INFO : 2024-09-20.data.00042: Copied (new)
2024/09/20 12:40:59 INFO : 2024-09-20.data.00044: Copied (new)
2024/09/20 12:41:00 INFO : 2024-09-20.data.00045: Copied (new)
2024/09/20 12:41:00 INFO : 2024-09-20.data.00046: Copied (new)
2024/09/20 12:41:00 INFO : 2024-09-20.data.00047: Copied (new)
2024/09/20 12:44:27 INFO : 2024-09-20.data.00048: Copied (new)
2024/09/20 12:44:33 INFO : 2024-09-20.data.00049: Copied (new)
2024/09/20 12:44:34 INFO : 2024-09-20.data.00051: Copied (new)
2024/09/20 12:44:37 INFO : 2024-09-20.data.00050: Copied (new)
2024/09/20 12:47:48 INFO : 2024-09-20.data.00052: Copied (new)
2024/09/20 12:47:54 INFO : 2024-09-20.data.00053: Copied (new)
2024/09/20 12:47:54 INFO : 2024-09-20.data.00055: Copied (new)
2024/09/20 12:47:56 INFO : 2024-09-20.data.00054: Copied (new)
2024/09/20 12:51:15 INFO : 2024-09-20.data.00056: Copied (new)
2024/09/20 12:51:18 INFO : 2024-09-20.data.00057: Copied (new)
2024/09/20 12:51:19 INFO : 2024-09-20.data.00058: Copied (new)
2024/09/20 12:51:22 INFO : 2024-09-20.data.00059: Copied (new)
2024/09/20 12:54:43 INFO : 2024-09-20.data.00060: Copied (new)
2024/09/20 12:54:43 INFO : 2024-09-20.data.00061: Copied (new)
2024/09/20 12:54:44 INFO : 2024-09-20.data.00063: Copied (new)
2024/09/20 12:54:44 INFO : 2024-09-20.data.00062: Copied (new)
2024/09/20 12:58:06 INFO : 2024-09-20.data.00064: Copied (new)
2024/09/20 12:58:06 INFO : 2024-09-20.data.00065: Copied (new)
2024/09/20 12:58:07 INFO : 2024-09-20.data.00067: Copied (new)
2024/09/20 12:58:07 INFO : 2024-09-20.data.00066: Copied (new)
2024/09/20 13:01:26 INFO : 2024-09-20.data.00068: Copied (new)
2024/09/20 13:01:37 INFO : 2024-09-20.data.00070: Copied (new)
2024/09/20 13:01:40 INFO : 2024-09-20.data.00069: Copied (new)
2024/09/20 13:01:40 INFO : 2024-09-20.data.00071: Copied (new)
2024/09/20 13:04:56 INFO : 2024-09-20.data.00072: Copied (new)
2024/09/20 13:05:02 INFO : 2024-09-20.data.00073: Copied (new)
2024/09/20 13:05:04 INFO : 2024-09-20.data.00075: Copied (new)
2024/09/20 13:05:08 INFO : 2024-09-20.data.00074: Copied (new)
2024/09/20 13:08:12 INFO : 2024-09-20.data.00076: Copied (new)
2024/09/20 13:08:28 INFO : 2024-09-20.data.00078: Copied (new)
2024/09/20 13:08:28 INFO : 2024-09-20.data.00079: Copied (new)
2024/09/20 13:08:39 INFO : 2024-09-20.data.00077: Copied (new)
2024/09/20 13:11:36 INFO : 2024-09-20.data.00080: Copied (new)
2024/09/20 13:11:45 INFO : 2024-09-20.data.00081: Copied (new)
2024/09/20 13:11:47 INFO : 2024-09-20.data.00082: Copied (new)
2024/09/20 13:11:58 INFO : 2024-09-20.data.00083: Copied (new)
2024/09/20 13:14:59 INFO : 2024-09-20.data.00084: Copied (new)
2024/09/20 13:15:04 INFO : 2024-09-20.data.00085: Copied (new)
2024/09/20 13:15:07 INFO : 2024-09-20.data.00086: Copied (new)
2024/09/20 13:15:08 INFO : 2024-09-20.index.age: Copied (new)
2024/09/20 13:15:09 INFO : 2024-09-20.md5sum: Copied (new)
2024/09/20 13:15:15 INFO : 2024-09-20.data.00087: Copied (new)
2024/09/20 13:16:22 INFO : 2024-09-20.data.00089: Copied (new)
2024/09/20 13:16:32 INFO : 2024-09-20.data.00088: Copied (new)
Transferred: 87.690 GiB / 87.690 GiB, 100%, 23.382 MiB/s, ETA 0s
Transferred: 92 / 92, 100%
Elapsed time: 1h18m54.3s
2024/09/20 13:16:32 INFO : 
Transferred: 87.690 GiB / 87.690 GiB, 100%, 23.382 MiB/s, ETA 0s
Transferred: 92 / 92, 100%
Elapsed time: 1h18m54.3s

When I try to check them, I get an error which I understand, as there are no md5sums/sha1sums available by default for SFTP plus crypt. But to my surprise, even though there is a problem and the files are not really checked, the command behaves as if everything were fine.

[nikow@thinira delivery]$ rclone -v -P --order-by name check /storage/offload/delivery/2024-09-20/ aleksandria-001-vault:delivery/2024-09-20/ && echo All Ok
2024/09/20 14:03:08 ERROR : No common hash found - not using a hash for checks
2024/09/20 14:03:08 NOTICE: union root 'delivery/2024-09-20': 0 differences found 
2024/09/20 14:03:08 NOTICE: union root 'delivery/2024-09-20': 92 hashes could not be checked
2024/09/20 14:03:08 NOTICE: union root 'delivery/2024-09-20': 92 matching files
Transferred:0 B / 0 B, -, 0 B/s, ETA -
Checks:92 / 92, 100%
Elapsed time: 4.6s
2024/09/20 14:03:08 INFO: 
Transferred:0 B / 0 B, -, 0 B/s, ETA -
Checks:92 / 92, 100% 
Elapsed time: 4.6s 

All Ok

The next command in my workflow was to delete the just-"checked" source files. Oopsie!

Then I mount the share with the copied files and run md5sum -c to check the sums. Most files are okay (so I cut out the OKs), but some are corrupted:

[nikow@thinira 2024-09-20]$ md5sum -c 2024-09-20.md5sum
2024-09-20.data.00000: OK
...
2024-09-20.data.00005: OK
md5sum: 2024-09-20.data.00006: Input/output error
2024-09-20.data.00006: FAILED open or read
2024-09-20.data.00007: OK
...
2024-09-20.data.00011: OK
md5sum: 2024-09-20.data.00012: Input/output error
2024-09-20.data.00012: FAILED open or read
2024-09-20.data.00013: OK
...
2024-09-20.data.00070: OK
md5sum: 2024-09-20.data.00071: Input/output error
2024-09-20.data.00071: FAILED open or read
2024-09-20.data.00072: OK
...
2024-09-20.data.00089: OK
md5sum: WARNING: 3 listed files could not be read

And the mount spews errors like the ones below:

[nikow@thinira ~]$ mkdir -p -v /tmp/aleksandria-001-vault/ ; rclone -v --vfs-cache-mode writes --vfs-cache-max-size 8G mount aleksandria-001-vault:/ /tmp/aleksandria-001-vault/ 
mkdir: created directory '/tmp/aleksandria-001-vault/' 
2024/09/20 14:00:21 INFO : union root '': poll-interval is not supported by this remote 
2024/09/20 14:00:21 INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0) 
2024/09/20 14:00:22 NOTICE: #recycle: Skipping undecryptable dir name: illegal base64 data at input byte 0 
2024/09/20 14:01:21 INFO : vfs cache: cleaned: objects 2 (was 2) in use 0, to upload 0, uploading 0, total size 0 (was 0) 
2024/09/20 14:02:21 INFO : vfs cache: cleaned: objects 3 (was 3) in use 0, to upload 0, uploading 0, total size 0 (was 0) 
2024/09/20 14:03:21 INFO : vfs cache: cleaned: objects 5 (was 5) in use 0, to upload 0, uploading 0, total size 0 (was 0) 
2024/09/20 14:04:21 INFO : vfs cache: cleaned: objects 6 (was 6) in use 0, to upload 0, uploading 0, total size 0 (was 0) 
2024/09/20 14:05:21 INFO : vfs cache: cleaned: objects 7 (was 7) in use 0, to upload 0, uploading 0, total size 0 (was 0) 
2024/09/20 14:06:21 INFO : vfs cache: cleaned: objects 8 (was 8) in use 0, to upload 0, uploading 0, total size 0 (was 0) 
2024/09/20 14:06:31 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 1/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:31 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 2/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:31 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 3/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:32 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 4/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:32 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 5/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:32 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 6/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:32 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 7/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:32 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 8/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:33 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 9/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:33 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 10/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:33 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:33 ERROR : IO error: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:33 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 1/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:34 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 1/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:34 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 2/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:34 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 3/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:35 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 4/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:35 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 5/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:35 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 6/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:35 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 7/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:36 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 8/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:36 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 9/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:36 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: low level retry 10/10: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:36 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Read error: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:36 ERROR : IO error: failed to authenticate decrypted block - bad password?
2024/09/20 14:06:36 ERROR : delivery/2024-09-20/2024-09-20.data.00006: ReadFileHandle.Release error: file already closed
2024/09/20 14:06:36 ERROR : IO error: file already closed
2024/09/20 14:07:21 INFO  : vfs cache: cleaned: objects 9 (was 9) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2024/09/20 14:08:21 INFO  : vfs cache: cleaned: objects 10 (was 10) in use 0, to upload 0, uploading 0, total size 0 (was 0)

After manually deleting those files (the ones ending in 06 and 12) I can copy them again with the same command and they are correct (the md5sums match). To make sure the re-uploaded files are fine, I power-cycled both machines and checked their md5sums again: the re-uploaded files are OK, while the damaged file (71) remains damaged.

When I reversed the name of one of the damaged files with this command:

[nikow@thinira ~]$ rclone cryptdecode aleksandria-001-vault-007: delivery/2024-09-20/2024-09-20.data.00071 --reverse
delivery/2024-09-20/2024-09-20.data.00071        LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI

I was able to copy the encrypted file from storage without any problems. It's a bit bigger than the original file, but it's the same size as another working file, so I guess it's perfectly fine.

1048576000      2024-09-20.data.00071
1048832032      VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI
1048576000      2024-09-20.data.00072
1048832032      0IlaUkQl2qKbWP_SQ7DDq_1hadIgN2l6KSbTpmpBkK8
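As a sanity check, that difference matches what I understand rclone crypt's overhead to be (a 32-byte file header plus a 16-byte authenticator per 64 KiB block), so the bigger size by itself looks expected:

# 1048576000 bytes of plaintext = 16000 blocks of 64 KiB
# expected encrypted size = plaintext + 32-byte header + 16 bytes per block
echo $(( 1048576000 + 32 + (1048576000 / 65536) * 16 ))
# prints 1048832032, the encrypted size listed above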

As my data is already encrypted by age, I can share both the original and the encrypted file:
https://nikow.pl/~nikow/rclone_crypt_bug/2024-09-20.data.00071
https://nikow.pl/~nikow/rclone_crypt_bug/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI

You may find passwords in the config section.

Run the command 'rclone version' and share the full output of the command.

rclone v1.68.0
- os/version: fedora 39 (64 bit)
- os/kernel: 6.10.9-100.fc39.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.1
- go/linking: static
- go/tags: none

but I was able to get similar results with:

rclone v1.67.0
- os/version: debian 12.7 (64 bit)
- os/kernel: 6.1.0-18-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

SFTP on Synology
That's where I spotted the problem. The storage and volumes are not showing any signs of trouble. I have data integrity checking on, but to avoid wasting your time on Synology bugs, I spawned a Debian Docker container with openssh installed (but without the md5sum and sha1sum commands) on a different machine. The silent corruption happens there too.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone -v -P --order-by name --checksum copy /storage/offload/delivery/2024-09-20/ aleksandria-001-vault:delivery/2024-09-20/  
mkdir -p -v /tmp/aleksandria-001-vault/ ; rclone -v --vfs-cache-mode writes --vfs-cache-max-size 8G mount aleksandria-001-vault:/ /tmp/aleksandria-001-vault/

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

The passwords in the last config have been left in on purpose, to help find out what's wrong there.

[aleksandria-001]
type = sftp
host = 192.168.3.72
pass = XXX
shell_type = unix
md5sum_command = none
sha1sum_command = none

[aleksandria-001-vault-002]
type = crypt
remote = aleksandria-001:vault-002/
password = XXX
password2 = XXX
filename_encoding = base64
suffix = none

[aleksandria-001-vault-003]
type = crypt
remote = aleksandria-001:vault-003/
password = XXX
password2 = XXX
filename_encoding = base64
suffix = none

[aleksandria-001-vault-004]
type = crypt
remote = aleksandria-001:vault-004/
password = XXX
password2 = XXX
filename_encoding = base64
suffix = none

[aleksandria-001-vault-005]
type = crypt
remote = aleksandria-001:vault-005/
password = XXX
password2 = XXX
filename_encoding = base64
suffix = none

[aleksandria-001-vault-006]
type = crypt
remote = aleksandria-001:vault-006/
password = XXX
password2 = XXX
filename_encoding = base64
suffix = none

[aleksandria-001-vault]
type = union
upstreams = aleksandria-001-vault-002: aleksandria-001-vault-003: aleksandria-001-vault-004: aleksandria-001-vault-005: aleksandria-001-vault-006: aleksandria-001-vault-007:
action_policy = mfs
create_policy = mfs
search_policy = mfs

[aleksandria-001-vault-007]
type = crypt
remote = aleksandria-001:vault-007/
password = wZwdT70N-WjvTPD78qgyCkYkk-cR5n9OGyrBsusc0-qexqzSeVNlYPLZvr0p_Vh7bJNKBQLpSqgWzW7cz_sqeTyjG6z9NaqX2FNeiVxxO9SfdlOsrC6vGGCmAf0djII0vej_2-j4LIX_nUemhRalinKXxRsZw1vKOr-nhcvZ1V466d8a41q3sl910CyQ-hBpGGHhKUulPmDMU1j9FGFPzxNyO1PWodBykicMBbRjZw6bno6HxVak8yVSwQ
password2 = b-KPxgNrDLvveScVkTiz6naCzP4MlhUUJ6pCwCPq0yHo1TmTWDcDiQ7X8XGQ51itm8aSbZWrN0ejZu5xv1ecxKrUSr63JAWSoKOgUqKFCRhmPNA2KcUvQcrpD13n5bA10zCIZTLxMPUTwsF-7inCrjOgKYBuhxPRcJg_57HDgRMPyxtjZmgx5pVodE9lbh6lwIVB5uUeXrSQ9rfGm8mGXPWmW18EhcP6hMyaXrH0ItsG6A-cWpYP-l2Hvw
filename_encoding = base64
suffix = none

A log from the command that you were trying to run with the -vv flag

(( I cut it here because of the character length limit; I will paste it in a comment. ))

A log from the command that you were trying to run with the -vv flag

[nikow@thinira ~]$ rclone -vvv -P copy aleksandria-001-vault-007:delivery/2024-09-20/2024-09-20.data.00071 /tmp/2024-09-20.data.00071_copy_log
2024/09/20 15:57:04 DEBUG : rclone: Version "v1.68.0" starting with parameters ["rclone" "-vvv" "-P" "copy" "aleksandria-001-vault-007:delivery/2024-09-20/2024-09-20.data.00071" "/tmp/2024-09-20.data.00071_copy_log"]
2024/09/20 15:57:04 DEBUG : Creating backend with remote "aleksandria-001-vault-007:delivery/2024-09-20/2024-09-20.data.00071"
2024/09/20 15:57:04 DEBUG : Using config file from "/home/nikow/.config/rclone/rclone.conf"
2024/09/20 15:57:04 DEBUG : Creating backend with remote "aleksandria-001:vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI"
2024/09/20 15:57:04 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: New connection 192.168.3.21:55318->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 15:57:05 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: Shell type "unix" from config
2024/09/20 15:57:05 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: Relative path resolved to "/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI"
2024/09/20 15:57:05 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: Using root directory "/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg"
2024/09/20 15:57:05 DEBUG : fs cache: adding new entry for parent of "aleksandria-001:vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI", "aleksandria-001:vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg"
2024/09/20 15:57:05 DEBUG : fs cache: adding new entry for parent of "aleksandria-001-vault-007:delivery/2024-09-20/2024-09-20.data.00071", "aleksandria-001-vault-007:delivery/2024-09-20"
2024/09/20 15:57:05 DEBUG : Creating backend with remote "/tmp/2024-09-20.data.00071_copy_log"
2024/09/20 15:57:05 DEBUG : 2024-09-20.data.00071: Need to transfer - File not found at Destination
2024/09/20 15:57:05 DEBUG : 2024-09-20.data.00071: multi-thread copy: disabling buffering because destination uses OpenWriterAt
2024/09/20 15:57:05 DEBUG : 2024-09-20.data.00071: multi-thread copy: write buffer set to 131072
2024/09/20 15:57:05 DEBUG : 2024-09-20.data.00071: multi-thread copy: using backend concurrency of 4 instead of --multi-thread-streams 4
2024/09/20 15:57:05 DEBUG : 2024-09-20.data.00071: Starting multi-thread copy with 16 chunks of size 64Mi with 4 parallel streams
2024/09/20 15:57:05 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 4/16 (201326592-268435456) size 64Mi starting
2024/09/20 15:57:05 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 1/16 (0-67108864) size 64Mi starting
2024/09/20 15:57:05 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 2/16 (67108864-134217728) size 64Mi starting
2024/09/20 15:57:05 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 3/16 (134217728-201326592) size 64Mi starting
2024/09/20 15:57:06 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: New connection 192.168.3.21:55328->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 15:57:06 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 3
2024/09/20 15:57:06 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: New connection 192.168.3.21:55344->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 15:57:06 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: New connection 192.168.3.21:55350->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 15:57:06 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 0
2024/09/20 15:57:10 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 1
2024/09/20 15:57:11 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 2
2024/09/20 15:57:20 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 4/16 (201326592-268435456) size 64Mi finished
2024/09/20 15:57:21 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 5/16 (268435456-335544320) size 64Mi starting
2024/09/20 15:57:22 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 4
2024/09/20 15:57:23 DEBUG : 2024-09-20.data.00071: Reopening on read failure after offset 38076416 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:23 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 2/16 (67108864-134217728) size 64Mi finished
2024/09/20 15:57:24 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 6/16 (335544320-402653184) size 64Mi starting
2024/09/20 15:57:24 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 1/16 (0-67108864) size 64Mi finished
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 7/16 (402653184-469762048) size 64Mi starting
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 6
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071: Reopen failed after offset 38076416 bytes read: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 3/16 failed: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 8/16 (469762048-536870912) size 64Mi starting
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 7/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 7
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 8/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 5/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:25 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 5
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 6/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: multi-thread copy: cancelling transfer on exit
2024/09/20 15:57:26 ERROR : 2024-09-20.data.00071: Failed to copy: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:26 ERROR : Attempt 1/3 failed with 1 errors and: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: Need to transfer - File not found at Destination
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: multi-thread copy: disabling buffering because destination uses OpenWriterAt
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: multi-thread copy: write buffer set to 131072
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: multi-thread copy: using backend concurrency of 4 instead of --multi-thread-streams 4
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: Starting multi-thread copy with 16 chunks of size 64Mi with 4 parallel streams
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 4/16 (201326592-268435456) size 64Mi starting
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 2/16 (67108864-134217728) size 64Mi starting
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 3/16 (134217728-201326592) size 64Mi starting
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 1/16 (0-67108864) size 64Mi starting
2024/09/20 15:57:26 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 0
2024/09/20 15:57:27 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 1
2024/09/20 15:57:27 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 3
2024/09/20 15:57:28 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 2
2024/09/20 15:57:34 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 2/16 (67108864-134217728) size 64Mi finished
2024/09/20 15:57:34 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 5/16 (268435456-335544320) size 64Mi starting
2024/09/20 15:57:37 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 4
2024/09/20 15:57:39 DEBUG : 2024-09-20.data.00071: Reopening on read failure after offset 38076416 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:43 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 5/16 (268435456-335544320) size 64Mi finished
2024/09/20 15:57:43 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 6/16 (335544320-402653184) size 64Mi starting
2024/09/20 15:57:45 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 5
2024/09/20 15:57:45 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 1/16 (0-67108864) size 64Mi finished
2024/09/20 15:57:45 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 7/16 (402653184-469762048) size 64Mi starting
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071: Reopen failed after offset 38076416 bytes read: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 3/16 failed: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 8/16 (469762048-536870912) size 64Mi starting
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 6/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 7
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 4/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 8/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 6
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 7/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:46 DEBUG : 2024-09-20.data.00071: multi-thread copy: cancelling transfer on exit
2024/09/20 15:57:47 ERROR : 2024-09-20.data.00071: Failed to copy: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:47 ERROR : Attempt 2/3 failed with 1 errors and: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071: Need to transfer - File not found at Destination
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071: multi-thread copy: disabling buffering because destination uses OpenWriterAt
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071: multi-thread copy: write buffer set to 131072
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071: multi-thread copy: using backend concurrency of 4 instead of --multi-thread-streams 4
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071: Starting multi-thread copy with 16 chunks of size 64Mi with 4 parallel streams
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 4/16 (201326592-268435456) size 64Mi starting
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 3/16 (134217728-201326592) size 64Mi starting
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 2/16 (67108864-134217728) size 64Mi starting
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 1/16 (0-67108864) size 64Mi starting
2024/09/20 15:57:47 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 0
2024/09/20 15:57:48 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 1
2024/09/20 15:57:48 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 2
2024/09/20 15:57:49 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 3
2024/09/20 15:57:51 DEBUG : 2024-09-20.data.00071: Reopening on read failure after offset 38076416 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:53 DEBUG : 2024-09-20.data.00071: Reopen failed after offset 38076416 bytes read: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:53 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 3/16 failed: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:53 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 5/16 (268435456-335544320) size 64Mi starting
2024/09/20 15:57:54 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 2/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:54 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 1/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:54 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 4/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:54 DEBUG : 2024-09-20.data.00071.300f529d.partial: writing chunk 4
2024/09/20 15:57:55 DEBUG : 2024-09-20.data.00071: multi-thread copy: chunk 5/16 failed: multi-thread copy: failed to write chunk: context canceled
2024/09/20 15:57:55 DEBUG : 2024-09-20.data.00071: multi-thread copy: cancelling transfer on exit
2024/09/20 15:57:55 ERROR : 2024-09-20.data.00071: Failed to copy: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?
2024/09/20 15:57:55 ERROR : Attempt 3/3 failed with 1 errors and: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?
Transferred:      606.062 MiB / 606.062 MiB, 100%, 11.860 MiB/s, ETA 0s
Errors:                 1 (retrying may help)
Elapsed time:        50.7s
2024/09/20 15:57:55 INFO  : 
Transferred:      606.062 MiB / 606.062 MiB, 100%, 11.860 MiB/s, ETA 0s
Errors:                 1 (retrying may help)
Elapsed time:        50.7s

2024/09/20 15:57:55 DEBUG : 42 go routines active
2024/09/20 15:57:55 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: Closing 4 unused connections
2024/09/20 15:57:55 NOTICE: Failed to copy: multi-thread copy: failed to write chunk: failed to authenticate decrypted block - bad password?

And I tried with multi-thread streams disabled:

[nikow@thinira ~]$ rclone -vvv -P --multi-thread-streams 0 copy aleksandria-001-vault-007:delivery/2024-09-20/2024-09-20.data.00071 /tmp/2024-09-20.data.00071_copy_log
2024/09/20 15:58:06 DEBUG : rclone: Version "v1.68.0" starting with parameters ["rclone" "-vvv" "-P" "--multi-thread-streams" "0" "copy" "aleksandria-001-vault-007:delivery/2024-09-20/2024-09-20.data.00071" "/tmp/2024-09-20.data.00071_copy_log"]
2024/09/20 15:58:06 DEBUG : Creating backend with remote "aleksandria-001-vault-007:delivery/2024-09-20/2024-09-20.data.00071"
2024/09/20 15:58:06 DEBUG : Using config file from "/home/nikow/.config/rclone/rclone.conf"
2024/09/20 15:58:06 DEBUG : Creating backend with remote "aleksandria-001:vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI"
2024/09/20 15:58:06 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: New connection 192.168.3.21:60542->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 15:58:07 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: Shell type "unix" from config
2024/09/20 15:58:07 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: Relative path resolved to "/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI"
2024/09/20 15:58:07 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: Using root directory "/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg"
2024/09/20 15:58:07 DEBUG : fs cache: adding new entry for parent of "aleksandria-001:vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI", "aleksandria-001:vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg"
2024/09/20 15:58:07 DEBUG : fs cache: adding new entry for parent of "aleksandria-001-vault-007:delivery/2024-09-20/2024-09-20.data.00071", "aleksandria-001-vault-007:delivery/2024-09-20"
2024/09/20 15:58:07 DEBUG : Creating backend with remote "/tmp/2024-09-20.data.00071_copy_log"
2024/09/20 15:58:07 DEBUG : 2024-09-20.data.00071: Need to transfer - File not found at Destination
2024/09/20 15:58:18 DEBUG : 2024-09-20.data.00071: Reopening on read failure after offset 172294144 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:19 DEBUG : 2024-09-20.data.00071: Reopen failed after offset 172294144 bytes read: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:19 NOTICE: 2024-09-20.data.00071.300f529d.partial: Removing partially written file on error: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:19 ERROR : 2024-09-20.data.00071: Failed to copy: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:19 ERROR : Attempt 1/3 failed with 1 errors and: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:19 DEBUG : 2024-09-20.data.00071: Need to transfer - File not found at Destination
2024/09/20 15:58:30 DEBUG : 2024-09-20.data.00071: Reopening on read failure after offset 172294144 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:30 DEBUG : 2024-09-20.data.00071: Reopen failed after offset 172294144 bytes read: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:30 NOTICE: 2024-09-20.data.00071.300f529d.partial: Removing partially written file on error: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:31 ERROR : 2024-09-20.data.00071: Failed to copy: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:31 ERROR : Attempt 2/3 failed with 1 errors and: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:31 DEBUG : 2024-09-20.data.00071: Need to transfer - File not found at Destination
2024/09/20 15:58:47 DEBUG : 2024-09-20.data.00071: Reopening on read failure after offset 172294144 bytes: retry 1/10: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:48 DEBUG : 2024-09-20.data.00071: Reopen failed after offset 172294144 bytes read: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:48 NOTICE: 2024-09-20.data.00071.300f529d.partial: Removing partially written file on error: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:48 ERROR : 2024-09-20.data.00071: Failed to copy: failed to authenticate decrypted block - bad password?
2024/09/20 15:58:48 ERROR : Attempt 3/3 failed with 1 errors and: failed to authenticate decrypted block - bad password?
Transferred:      492.938 MiB / 492.938 MiB, 100%, 10.785 MiB/s, ETA 0s
Errors:                 1 (retrying may help)
Elapsed time:        42.0s
2024/09/20 15:58:48 INFO  : 
Transferred:      492.938 MiB / 492.938 MiB, 100%, 10.785 MiB/s, ETA 0s
Errors:                 1 (retrying may help)
Elapsed time:        42.0s

2024/09/20 15:58:48 DEBUG : 15 go routines active
2024/09/20 15:58:48 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/VOOvWYy3ejUIu5GVqcXcdrj2eDU7NKzRC14PuMEx5WI: Closing 1 unused connections
2024/09/20 15:58:48 NOTICE: Failed to copy: failed to authenticate decrypted block - bad password?
[nikow@thinira ~]$

What is the purpose of such policies? I think they can produce truly random results….

MFS means most free space. Generally I add those vault segments as I need them, so it creates files only on the most recently added, still-empty volume in the union. It's a pain to resize volumes on Synology (especially downsizing them), so the union works as a workaround for space limits. All volumes in the union report their sizes.

[nikow@thinira ~]$  rclone -vv about aleksandria-001-vault:
2024/09/20 17:10:09 DEBUG : rclone: Version "v1.68.0" starting with parameters ["rclone" "-vv" "about" "aleksandria-001-vault:"]
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001-vault:"
2024/09/20 17:10:09 DEBUG : Using config file from "/home/nikow/.config/rclone/rclone.conf"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001-vault-007:"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001-vault-002:"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001-vault-004:"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001-vault-003:"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001-vault-005:"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001-vault-006:"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001:vault-007/"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001:vault-002/"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001:vault-004/"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001:vault-006/"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001:vault-005/"
2024/09/20 17:10:09 DEBUG : Creating backend with remote "aleksandria-001:vault-003/"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-002/: New connection 192.168.3.21:56384->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/: New connection 192.168.3.21:56380->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-006/: New connection 192.168.3.21:56394->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-003/: New connection 192.168.3.21:56410->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-004/: New connection 192.168.3.21:56388->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-005/: New connection 192.168.3.21:56408->192.168.3.72:22 to "SSH-2.0-OpenSSH_8.2"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-002/: Shell type "unix" from config
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-002/: Relative path resolved to "/vault-002"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-002/: Using root directory "/vault-002"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/: Shell type "unix" from config
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/: Relative path resolved to "/vault-007"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/: Using root directory "/vault-007"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-003/: Shell type "unix" from config
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-003/: Relative path resolved to "/vault-003"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-003/: Using root directory "/vault-003"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-006/: Shell type "unix" from config
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-006/: Relative path resolved to "/vault-006"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-006/: Using root directory "/vault-006"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-004/: Shell type "unix" from config
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-004/: Relative path resolved to "/vault-004"
2024/09/20 17:10:10 DEBUG : sftp://nikow@192.168.3.72:22/vault-004/: Using root directory "/vault-004"
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-005/: Shell type "unix" from config
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-005/: Relative path resolved to "/vault-005"
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-005/: Using root directory "/vault-005"
2024/09/20 17:10:11 DEBUG : union root '': actionPolicy = *policy.Mfs, createPolicy = *policy.Mfs, searchPolicy = *policy.Mfs
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-002/: Server has VFS statistics extension
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-002/: About path "/vault-002"
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-003/: Server has VFS statistics extension
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-003/: About path "/vault-003"
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-004/: Server has VFS statistics extension
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-004/: About path "/vault-004"
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-005/: Server has VFS statistics extension
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-005/: About path "/vault-005"
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-006/: Server has VFS statistics extension
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-006/: About path "/vault-006"
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/: Server has VFS statistics extension
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/: About path "/vault-007"
Total:   27.984 TiB
Used:    24.484 TiB
Free:    3.501 TiB
2024/09/20 17:10:11 DEBUG : 58 go routines active
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-006/: Closing 1 unused connections
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-004/: Closing 1 unused connections
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-002/: Closing 1 unused connections
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/: Closing 1 unused connections
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-003/: Closing 1 unused connections
2024/09/20 17:10:11 DEBUG : sftp://nikow@192.168.3.72:22/vault-005/: Closing 1 unused connections

All files appear, as they should, on volume aleksandria-001-vault-007:. Accessing that volume directly gives the same results - I can read the same files with the same outcome: most files are fine, re-uploaded files are fine, and the damaged files stay damaged.

That is the description of create_policy = mfs only. The two other policies have a different purpose.

I have a strong feeling that the issue is not with crypt but with the two other policies combined with sftp and the way it transfers files using the partial-file step.

Could you try:

action_policy = epall
create_policy = mfs
search_policy = ff

and see if your corruptions happen as well?

You can get crypt to pass damaged blocks through as zeros using

  --crypt-pass-bad-blocks     If set this will pass bad blocks through as all 0

This means you will be able to recover some of the file
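For example (a sketch only, using the remote and file names from this thread), something like this should salvage everything except the one bad 64k block, which will come back as zeros:

# copy the damaged object out of the crypt remote, passing bad blocks through as zeros
rclone copyto --crypt-pass-bad-blocks \
    aleksandria-001-vault-007:delivery/2024-09-20/2024-09-20.data.00071 \
    /tmp/2024-09-20.data.00071.recovered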

I tried this on your data (thanks for sending it)

There is one bad block in the data from offset 0x0a450000 to 0x0a45ffff

This is exactly 64k long (one crypt block) so if you use the above flag you'll get back all but 64k of the file.

Corruptions which affect exactly 1 crypt block are most likely to be memory or disk corruptions. If I can figure out how to re-encrypt the file with the same nonce then I can see what the data should have been and we can see the pattern of the corruption which will be informative.

I will have a go at this later.

Did you use the same network for this test? I guess it could be some kind of network corruption. This shouldn't be possible if the md5sum commands are set in the sftp backend but I see you have md5sum_command = none so rclone won't have been able to check the integrity of the upload :frowning: If you had these set then rclone would have spotted the corruption in the original rclone copy.

Note that rclone cryptcheck rather than rclone check is the tool to look for corruptions in your crypt data but you'll need the sftp backend to support md5sums
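For example (a sketch, assuming md5sums are enabled on the sftp remote, and pointing at the vault-007 crypt remote directly since cryptcheck needs a crypt remote as its target):

# compares the hash of the local plaintext with the hash of the encrypted
# object on the remote, so it would catch this kind of corruption
rclone cryptcheck /storage/offload/delivery/2024-09-20/ \
    aleksandria-001-vault-007:delivery/2024-09-20/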

My suspects for this problem

  • network corruption
  • disk corruption - probably ruled out by different hardware
  • memory bit flips - probably ruled out by different hardware
  • bug in crypt
  • bug in sftp backend

Let me know what you think?

Thank you for your time. I adjusted my settings to your recommendations, but it is still happening when copying directly with rclone.

Can you explain more please? I didn't understand what you mean. Thanks.

@ncw, excuse my confusion, I was responding to @kapitainsky. I applied the changes he recommended to my union, but sadly it didn't affect the issue at all.

Thank you for your time. Unfortunately, age uses a stream cipher, which means that losing one block results in losing all subsequent blocks as well. I commented out the md5sum_command = none and sha1sum_command = none lines, but after doing so they reappeared. I discovered that on Synology I need to enable the home directory service (Control Panel -> Users & Groups -> Advanced, then check "Enable home directory service") for the md5sum and sha1sum commands to work at all. I commented those lines out again and now the commands can execute (and are added to the config so they get used), but there seems to be some confusion, because Synology uses different paths for SFTP and for SSH commands. Over SSH I see the whole Linux system running on the box, but SFTP exposes only the "shared folders".

The log is attached in the cryptcheck_confused_by_synology.log (89.6 KB) file.

Copying over crypt still does not use checksums, as you can see in copy_still_ignores_checksums.log (16.9 KB).

And Synology without crypt but with md5sums enabled fails too, because rclone calls md5sum with the wrong path, as you can see in synology_does_not_like_sftp_with_md5sums.log (6.2 KB). I found a solution for this issue in the rclone SFTP docs for Synology: I can use an override for the sftp path, but the problem is that every "shared folder" sits on a different volume, so I have no clue how to merge this with crypt into a working solution. Passing --sftp-path-override to my original copy seems to be ignored by the crypt backend, as we can see in crypt_backend_ignores_sftp_path_override.log (17.5 KB).

But --sftp-path-override does work for the cryptcheck command, so maybe we are closer to a solution: cryptcheck_respects_path_overwrite.log (98.5 KB)

About the network: I think it should be ruled out by the fact that we see no connection-related errors, but I am going to test it tomorrow just to be sure. I have a second router and different ethernet cables somewhere around.

I think that if the SFTP module were causing this issue, copying files without crypt would also produce some errors, but when I copy files without using the crypt backend, the copied files match the manual md5sum check. I think this also confirms that my memory and hard drives are fine.

Can I somehow tell rclone to download the files after copying to confirm that they're okay?

The crypt backend itself verifies the transfer if checksums are available in the same way that rclone cryptcheck does. It compares the checksum of the encrypted file with the checksum of the encrypted data to see if it is correct.

This doesn't mean that md5sums are available for the unencrypted data though.

If you want a 100% check then use rclone check --download source dest - this streams the files from source and dest and checks they are byte identical.
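For example, with the source and destination used earlier in this thread:

rclone check --download /storage/offload/delivery/2024-09-20/ \
    aleksandria-001-vault:delivery/2024-09-20/ && echo All Ok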

I re-encrypted your original with the same encryption parameters and then diffed that with the corrupted encrypted file to see exactly what the corruption was

--- uncorrupted
+++ corrupted
@@ -10772608,7 +10772608,7 @@
 0a4607f0: 35c4 6f36 9f34 ac86 63d8 8f5e 8f95 8a09  5.o6.4..c..^....
 0a460800: b05e a7cc 1cc7 7abb 01e8 bcb5 37f4 62d0  .^....z.....7.b.
 0a460810: b727 309b 341d b962 9997 2b3e 0187 7959  .'0.4..b..+>..yY
-0a460820: 6f44 35f3 cf5e 5d0b 0c68 c9a3 ec4f 009e  oD5..^]..h...O..
+0a460820: 6f44 35fb cf5e 5d0b 0c68 c9a3 ec4f 009e  oD5..^]..h...O..
 0a460830: aad6 9e48 2ea9 cd72 003a 8c26 1f63 b381  ...H...r.:.&.c..
 0a460840: 735a 7e45 a825 aca2 a18d 5ffd ee60 2ced  sZ~E.%...._..`,.
 0a460850: 8f40 8e03 12c0 1715 f5c4 0ed1 c928 9296  .@...........(..

So what happened is that a hex digit 3 (binary 0011) got changed to a hex digit b (binary 1011).

So a single bit got flipped from 0 -> 1
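One way to see that it is a single bit is to XOR the two nibbles:

printf '%x\n' $(( 0x3 ^ 0xb ))   # prints 8 = binary 1000, exactly one differing bit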

This is very unlikely to be a software bug. Software would corrupt things on a byte or word basis.

So I'd say we are looking at one of

  • network corruption
  • disk corruption
  • memory bit flips

The most common of those is memory bit flips.

For network corruption to be a problem the TCP layer and the SFTP layer (both of which have checksums / MACs) would have to not notice the problem. That is quite unlikely I think.

Disk corruptions tend to come with other signs from the disk that it is unwell, like uncorrectable sectors. They can be caused by a bad SATA cable, however most SATA traffic is protected with a checksum nowadays I think.

So my money would be on memory corruption somewhere. Maybe you got a bad batch of RAM? Have you tried memtest86 on the machines in use? Or maybe you've got a local generator of something which upsets RAM (eg unstable power or alpha particles(!) or RF interference)

Here is ChatGPT's summary of causes of bit flips in RAM with likelihood estimates:

  • Cosmic Rays and Radiation: Unlikely
  • Manufacturing Defects: Unlikely
  • Electromagnetic Interference (EMI): Unlikely
  • Electrical Noise or Power Fluctuations: Moderate
  • Overheating: Moderate
  • Aging of RAM Modules: More Likely
  • Overclocking: More Likely (in misconfigured systems)

Memtest86 will detect any of those problems.

Following this with interest since I've had similar errors in the past with crypt remotes more than once (eg see here), which I've never been able to run down.

IME Memtest86 will detect only the most egregious errors -- the subtler ones will go undetected, as has happened to me (in non-rclone-related situations) even after a 3-day burn-in. What I use is the age-old dledford memtest script; it has always detected the subtler issues in these same situations in less than 24 hours: Linux hardware memory test script

First time I've ever heard of that -- AFAIK, given nominal operating conditions (no overheating, no ESD, etc) RAM chips are supposed to be nearly eternal -- perhaps ChatGPT is hallucinating or confusing it with flash memory?

It is a real shame that ECC memory never became mainstream. Linus Torvalds has a theory that it is all the fault of evil Intel marketing :)

I would never run a serious server dealing with anything more critical than cat-picture storage without ECC; however, for laptops I have lost hope that it will ever become available. I am 99.99% sure that I have had bit-flip situations - with quality hardware they are extremely rare IMO, but also extremely frustrating when they happen.

Thank you for your patience and time. I ordered new RAM sticks, because it won't hurt to try new ones even when memtest passes.

I managed to get md5sum working with Synology. Even though I need to copy-paste the main config with a path_override for every volume, I am happy that it's checked now. An example config (and log) is attached below for any new soul who happens to find this topic. Is there a way to tell rclone "use the aleksandria-001 config + the path-override option" without copying it again and again?

2024/09/22 21:04:49 DEBUG : Encrypted drive 'aleksandria-001-vault-007:/delivery/2024-09-20/': Waiting for transfers to finish
2024/09/22 21:05:29 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg: Shell path redirected to "/volume2/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/00KEJcZ7jmtGkUx_eJchxpudxWZSTkC5Mrab3LLNEsD3SDnmrVbPBEyhN0eKJzR0" with option path_override
2024/09/22 21:05:29 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg: Running remote command: md5sum /volume2/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/00KEJcZ7jmtGkUx_eJchxpudxWZSTkC5Mrab3LLNEsD3SDnmrVbPBEyhN0eKJzR0
2024/09/22 21:05:33 DEBUG : sftp://nikow@192.168.3.72:22/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg: Remote command result: 4c92e18eb85f4807b9779995a7303ec0  /volume2/vault-007/LnuNH6iwxB6isHcP-CVmrg/ujV4F-YzGMDcr_AjOyVmAg/00KEJcZ7jmtGkUx_eJchxpudxWZSTkC5Mrab3LLNEsD3SDnmrVbPBEyhN0eKJzR0
2024/09/22 21:05:33 DEBUG : 00KEJcZ7jmtGkUx_eJchxpudxWZSTkC5Mrab3LLNEsD3SDnmrVbPBEyhN0eKJzR0: Parsed hash: 4c92e18eb85f4807b9779995a7303ec0
2024/09/22 21:05:33 DEBUG : 2024-09-20.data.00071.9b6709a5.partial: md5 = 4c92e18eb85f4807b9779995a7303ec0 OK
2024/09/22 21:05:33 DEBUG : 2024-09-20.data.00071.9b6709a5.partial: renamed to: 2024-09-20.data.00071
2024/09/22 21:05:33 INFO  : 2024-09-20.data.00071: Copied (new)

Config:

[aleksandria-001-volume2]
type = sftp
host = XXX
pass = XXX
shell_type = unix
md5sum_command = md5sum
sha1sum_command = sha1sum
path_override = @/volume2/

[aleksandria-001-vault-007]
type = crypt
remote = aleksandria-001-volume2:vault-007/
password = XXX
password2 = XXX
filename_encoding = base64
suffix = none

Thank you for check --download, it does the job as I expected. It detects the corruption correctly and sets the exit code to an error.

I think you will want to be aware that I happen to get the same errors on OVH VPSes when I am transferring around 200G of age-encrypted files between two hosts. It happens with the sftp + crypt combo, but it does not happen with plain SFTP. But with the md5sum hash check, it's not a big issue anymore.

Let us know how you get on.

Great

You've put it in the config file, which looks good. What do you mean by "without copying it again and again"? You can copy the config file between machines.

Excellent - thank you for reporting that.

That is very suggestive of a crypt problem, isn't it. It's not impossible of course. If crypt is corrupting a single bit every now and again then I'd guess the problem is something weird.

The backup files seem to be 1048576000 bytes long according to the one you sent.

I wrote a little simulator of what I think the problem is

#!/bin/bash
# test crypted sftp transfers

set -e

# encrypted sftp remote
remote="nikowcrypt:"

test_dir="/tmp/10Gfiles"
if [ ! -e "$test_dir" ]; then
    echo "Writing test data"
    mkdir $test_dir
    for i in $(seq 100001 100010); do
        dd if=/dev/urandom of=${test_dir}/$(date -I).data.$i bs=1048576 count=1000
    done
fi

set -e

for iteration in $(seq 100); do
    echo "------------------------------------------------------------"
    echo "Starting iteration $iteration"
    rclone copy -vv --retries 1 --low-level-retries 1 $test_dir $remote
    rclone check -vv --download $test_dir $remote
    rclone delete -vv $remote
    echo "All OK iteration $iteration"
done

Could you have an experiment with that - change $remote or anything else and see if you can replicate the problem?

I ran it here from one user to another on my laptop for 100G or so of transfers without seeing a problem (so over localhost rather than the network). If it replicates the problem for you on OVH hosts then I could try it there if you give me full details of your setup.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.