vfs cache: failed to transfer file from cache to remote: add file error 8
Pretty sure that long filenames are the culprit. I "wrap" chunker into crypt on the Mail.ru cloud, and the cloud itself does support long file and path names. I couldn't find a workaround yet, but one post contained a reply suggesting wrapping the other way around (crypt INSIDE chunker); however, I already had the same problems with crypt alone. I also saw mentions of tools like Cryptomator and CryFS in another topic, but someone there said there would be pitfalls, without explaining which ones exactly. I wouldn't mind using one of those inside the mounted cloud if there are no known problems at the moment.

I would definitely prefer using rclone and (if still necessary for files over the size limit) chunker instead of the official Disk:O app, which is a no-go for me at the moment. I couldn't find alternatives to rclone and Disk:O for Mail.ru. There was a Total Commander plugin that worked for me months ago; it had issues, but those third-party cryptoFS wrappers might actually solve them, if they work...
Run the command 'rclone version' and share the full output of the command.
rclone v1.58.0
- os/version: Microsoft Windows 10 Pro 21H2 (64 bit)
- os/kernel: 10.0.19044.1682 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.17.8
- go/linking: dynamic
- go/tags: cmount
Which cloud storage system are you using? (eg Google Drive)
Mail.ru
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone mount pt-mru-enc-chunk: P: --vfs-cache-mode full --cache-dir "O:\.rclone-cache" --stats-file-name-length 0
2022/04/30 14:25:50 EME operates on 1 to 128 block-cipher blocks, you passed 513
2022/04/30 14:25:50 EME operates on 1 to 128 block-cipher blocks, you passed 257
2022/04/30 14:25:50 EME operates on 1 to 128 block-cipher blocks, you passed 129
// pt-mru-enc
maxFileLength = 143
If there's a norename option for chunker and a metadata file is kept anyway, why wasn't the idea of keeping the full names in the metadata file and storing unique short names for each file in the parent filesystem ever implemented? I saw a discussion of exactly that kind of idea from years ago...
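To make the idea concrete: this is a purely illustrative sketch of what I mean, not how chunker actually works today. The long original name would live only in the metadata sidecar, while the remote stores a fixed-length, hash-derived name (the `short_name` helper and the metadata layout here are my own invention):

```python
import hashlib

def short_name(original: str) -> str:
    # Hypothetical: derive a fixed-length remote name from a hash of the
    # original name, so remote path-length limits no longer matter.
    return hashlib.sha256(original.encode("utf-8")).hexdigest()[:32]

def store(meta: dict, original: str) -> str:
    # Keep the real (long) name only in the metadata file; upload the
    # object under the short name. Returns the name used on the remote.
    remote = short_name(original)
    meta[remote] = {"name": original}
    return remote

metadata: dict = {}
remote_name = store(metadata, "Some Extremely Long Project Name v2 (final, really) " * 5)
print(remote_name, len(remote_name))
```

A scheme like this would make the remote names opaque, which is presumably why it interacts badly with renames and with reading the remote without the metadata, but for a norename-style mode that trade-off seems acceptable.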
I often have very long paths and filenames, both to tell project versions apart and to keep a good overview in general (keywords to be found by the Everything app when I search for certain types of files/dirs, for example). Some things in my sample/instrument libraries also came this way; so far I have preferred keeping directory clones instead of archives, both as a safeguard against data corruption and because I used Disk:O before and couldn't upload original archives over 2 GB per file. If one already has two free 1 TB clouds, one wants to use them properly...
Oh... I didn't notice you telling me to do tests with different modes...
base64:
2022/04/30 16:55:39 EME operates on 1 to 128 block-cipher blocks, you passed 513
2022/04/30 16:55:39 EME operates on 1 to 128 block-cipher blocks, you passed 257
2022/04/30 16:55:39 EME operates on 1 to 128 block-cipher blocks, you passed 129
// pt-mru-enc
maxFileLength = 175
base32:
C:\soft\rclone>rclone test info --check-length pt-mru-enc:deleteme
2022/04/30 17:00:35 EME operates on 1 to 128 block-cipher blocks, you passed 513
2022/04/30 17:00:35 EME operates on 1 to 128 block-cipher blocks, you passed 257
2022/04/30 17:00:35 EME operates on 1 to 128 block-cipher blocks, you passed 129
// pt-mru-enc
maxFileLength = 143
175 is better, but still not enough. 255 for the leaf name might be enough for now, but I would prefer the limits set by Windows 10 (i.e. "hardly any").
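The 143 vs. 175 numbers line up with how crypt encrypts names: the plaintext name is padded (PKCS#7-style) to a 16-byte block, EME-encrypted (length-preserving), and then encoded with unpadded base32 or base64. Assuming Mail.ru caps the encoded leaf name at 255 characters, a rough sketch reproduces both measured limits (the 255 cap is my assumption, inferred from the test output, not something I've confirmed in Mail.ru docs):

```python
import math

def pkcs7_padded_len(n: int, block: int = 16) -> int:
    # PKCS#7 always adds at least one padding byte, so an exact
    # multiple of the block size grows by a whole extra block.
    return (n // block + 1) * block

def encoded_len(n_bytes: int, encoding: str) -> int:
    # Unpadded base32/base64 length of n_bytes of ciphertext.
    if encoding == "base32":
        return math.ceil(n_bytes * 8 / 5)
    return math.ceil(n_bytes * 4 / 3)  # base64

def max_plaintext_name(limit: int, encoding: str) -> int:
    # Longest plaintext leaf whose padded + encoded form still fits
    # within the remote's encoded-name limit.
    n = 0
    while encoded_len(pkcs7_padded_len(n + 1), encoding) <= limit:
        n += 1
    return n

print(max_plaintext_name(255, "base32"))  # → 143
print(max_plaintext_name(255, "base64"))  # → 175
```

So base64 only buys 32 extra characters against a 255-character remote limit; no filename encoding can get past the underlying cap, which is why the short-names-in-metadata idea above appeals to me.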