After transferring all my files from a combination of mounts based on Google Drive and OneDrive to Backblaze B2 (first by copying from each of the folders with rclone sync, then running rclone move from the Union mount that united them, to be extra sure), I am able to mount the new B2 backend and list all of my files successfully. However, whenever I try to actually open one of them from the mountpoint, I get the error "vfs reader: failed to write to cache file: not an encrypted file - bad magic string".
The files appear after mounting with their correct names and sizes, which suggests that they are still encrypted and still use the password I had defined.
Creating a test file on the mount, waiting for it to upload, and then downloading it to my user's home folder with:
sudo rclone copy EncFS:/data/test.txt . --config=/opt/rclone/rclone.conf --metadata -vv
works correctly, and I can even read the contents of the file, so the problem only seems to show up when going through the mount, and specifically when using Backblaze.
Run the command 'rclone version' and share the full output of the command.
Which cloud storage system are you using? (eg Google Drive)
An rclone crypt mount over a Backblaze B2 bucket, used as the only branch of a mergerfs mount to allow the use of hardlinks (accessing files directly on the EncFS: mount leads to the same error).
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Try to mount the same remote on another computer. That will confirm whether the issue is corrupted encrypted data or something specific to your machine's configuration.
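For example, copy your rclone.conf over and run something along these lines (the mount point is just an example, use any empty directory):
rclone mount EncFS: /mnt/encfs-test --config=/path/to/your/rclone.conf -vv
then try to open a few files from /mnt/encfs-test.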
Okay, I just tried it on a WSL host running the same Ubuntu LTS version.
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy
Linux UbuntuLXC 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64 x86_64 x86_64 GNU/Linux
After installing rclone and successfully mounting the filesystem, the same problem occurred. However, I noticed that, for some reason, the files that appear in the folder are all 1KB in size, and there are copies of them with a different extension that seem to actually hold the file data, though still encrypted.
Here are the logs of the attempt on my laptop. Is there anything else I could look into?
It is a very weird issue, I have to admit, and I struggle to make sense of it.
Let's try to get to the bottom of this.
Download a random file, e.g. this one, which has problems according to your log:
2023/07/07 08:09:33 INFO : data/media/tv/Shiki/banner.jpg: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: not an encrypted file - bad magic string
These copies are probably sidecar files from chunker, since you use hash_type = sha1all - if this is what you intended, it is no problem. But it also means you have to access the Backblaze remote through chunker, as any file bigger than 2Gi is split into parts. The 1KB files contain metadata only.
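As a rough sketch, a chunker remote over the bucket would look something like this (the remote name and bucket path are only my guess - adjust them to whatever you actually use):
[chunker-B2]
type = chunker
remote = Backblaze:RCloneEncFS
chunk_size = 2Gi
hash_type = sha1all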
🚀 sudo rclone --config=/opt/rclone/rclone.conf copy EncFS:data/media/tv/Shiki/banner.jpg . -vv
2023/07/07 09:31:44 DEBUG : rclone: Version "v1.63.0" starting with parameters ["rclone" "--config=/opt/rclone/rclone.conf" "copy" "EncFS:data/media/tv/Shiki/banner.jpg" "." "-vv"]
2023/07/07 09:31:44 DEBUG : Creating backend with remote "EncFS:data/media/tv/Shiki/banner.jpg"
2023/07/07 09:31:44 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2023/07/07 09:31:44 DEBUG : Creating backend with remote "Backblaze:RCloneEncFS/154.pmFm/0.umlqi/234.KM/248.Ynoqo/229.nmzzqD.vBs"
2023/07/07 09:31:44 DEBUG : fs cache: adding new entry for parent of "Backblaze:RCloneEncFS/154.pmFm/0.umlqi/234.KM/248.Ynoqo/229.nmzzqD.vBs", "Backblaze:RCloneEncFS/154.pmFm/0.umlqi/234.KM/248.Ynoqo"
2023/07/07 09:31:44 DEBUG : Creating backend with remote "."
2023/07/07 09:31:44 DEBUG : fs cache: renaming cache item "." to be canonical "/home/toasterdev"
2023/07/07 09:31:44 DEBUG : banner.jpg: Need to transfer - File not found at Destination
2023/07/07 09:31:45 ERROR : banner.jpg: Failed to copy: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 ERROR : Attempt 1/3 failed with 1 errors and: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 DEBUG : banner.jpg: Need to transfer - File not found at Destination
2023/07/07 09:31:45 ERROR : banner.jpg: Failed to copy: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 ERROR : Attempt 2/3 failed with 1 errors and: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 DEBUG : banner.jpg: Need to transfer - File not found at Destination
2023/07/07 09:31:45 ERROR : banner.jpg: Failed to copy: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 ERROR : Attempt 3/3 failed with 1 errors and: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Errors: 1 (retrying may help)
Elapsed time: 1.4s
2023/07/07 09:31:45 DEBUG : 9 go routines active
2023/07/07 09:31:45 Failed to copy: failed to open source object: not an encrypted file - bad magic string
So far no dice; the file doesn't seem to have been copied at all:
🚀 ls banner.jpg
ls: cannot access 'banner.jpg': No such file or directory
As you said, it seems that the file is a sidecar file containing only the metadata, though fortunately it appears to copy successfully when using rclone copy instead of reading it out of the mount.
If your data on Backblaze is supposed to be encrypted, you will have to delete it and upload it again. Next time, I suggest testing first with a small amount of data and, if you are not sure, asking a question on the forum. You have some encryption/chunker mess at the moment.
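Before that, though, it is worth confirming what is actually there. Create a test crypt remote that goes through the chunker remote instead of pointing straight at the bucket - something like this, keeping the same password and filename settings as your existing EncFS remote (I am assuming here that crypt sits on top of chunker, and the names are placeholders):
[EncFS-test]
type = crypt
remote = chunker-B2:
password = <same as in EncFS>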
and mount EncFS-test. If this works, it means your data is encrypted but the chunker metadata is not, which is not a tragedy. Still, you should check your data carefully, because if some of it is not chunked it will fail to be read too. If you use chunker, you have to use it ALL the time, or things won't work as you expect.
I'm pretty confused too, to be honest. The way I attempted to copy things over, to take advantage of OneDrive's checksum flag and Google Drive's fast-list, was simply to run the following commands one after the other (all in all it took about four days to copy everything over):
After everything had copied over without any reported errors, I attempted to mount Backblaze directly and, seeing that the filenames looked alright, I left it running without much issue until I noticed the problem I described in the first post.
Chunker was indeed involved in the upload to the Union: remote, but not in any of the sync or move operations, though I'm not sure where I went wrong in all of this.
I see. As you said, this mount did indeed work, so all is not lost, thankfully. Seriously, thanks for that.
If I were to do a move operation to a new bucket from the EncFS-test configuration, would I be able to have everything encrypted, ideally while removing the need for Chunker?
Actually, wait... all chunked data (files larger than 2GB) won't be copied server-side - it will be downloaded and uploaded again. There is NO other option, I'm afraid. All files smaller than 2GB will be copied server-side. I guess you pay for downloads, so maybe it's better to just upload everything again? But properly this time :)
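If you want an idea of how much data will have to take the slow route, you could first list only the files above the chunk size, e.g. something like:
rclone lsf EncFS-test: -R --files-only --min-size 2G --config=/opt/rclone/rclone.conf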
I see. Fortunately, downloads go through Cloudflare, so that isn't too much of a problem. If I'm understanding correctly, the command would end up being:
At first: sudo rclone --config=/opt/rclone/rclone.conf copy EncFS-test: EncFS-NEW: --server-side-across-configs --dry-run
And if everything goes smoothly: sudo rclone --config=/opt/rclone/rclone.conf copy EncFS-test: EncFS-NEW: --server-side-across-configs
Just to be sure, given that most of the files will probably indeed be over 2GB, is there any tweaking I could do to copy things faster? I'm running a 4-core Intel (i5-6600T) with 64GB of RAM, of which I can spare about 8GB for this task, and my internet connection is 1Gbps down/300Mbps up.
I'm thinking of perhaps raising the number of transfers and checkers, enabling --fast-list, plus adding the --metadata and --checksum flags to verify, but I'm not sure if there's anything else I should take into account.
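As a first attempt, maybe something like this (the transfer and checker counts are just a guess on my part):
sudo rclone --config=/opt/rclone/rclone.conf copy EncFS-test: EncFS-NEW: --server-side-across-configs --transfers 8 --checkers 16 --fast-list --metadata --checksum --progress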
Mostly because I had prepared a systemd service to run the mount as its own user, and the config file is read-protected so that only the rclone user can access it; aside from that there's no need for it, as you said. If this goes smoothly, I'll try to figure out how to wire it all together again to remove the need for sudo.
That's probably a good idea, starting small and going from there.
For posterity's sake, this is the command I'm using to test first: