Unable to open encrypted files uploaded to Backblaze B2 when mounted

What is the problem you are having with rclone?

I transferred all my files from a set of mounts based on Google Drive and OneDrive to Backblaze B2: first I copied from each of the folders using rclone sync, and then ran rclone move from the Union remote that combined them, to be extra sure. I can mount the new B2 backend and list all of my files successfully, but whenever I try to actually open one of them from the mountpoint, I get the error message "vfs reader: failed to write to cache file: not an encrypted file - bad magic string".

The files appear after mounting with their correct names and sizes, which suggests that not only are they still encrypted, they are still using the same password I had defined.

Creating a test file on the mount, waiting for it to upload, and then downloading it to my user's home directory with:
sudo rclone copy EncFS:/data/test.txt . --config=/opt/rclone/rclone.conf --metadata -vv
works correctly, and I'm even able to read the file's contents. So the problem seems to occur only when going through the mount, specifically with Backblaze.

Run the command 'rclone version' and share the full output of the command.

rclone v1.63.0
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 6.2.16-3-pve (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.5
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

An rclone crypt mount over a Backblaze B2 bucket, accessed as the only branch of a mergerfs mount to allow the use of hardlinks (accessing files directly on the EncFS: mount leads to the same error)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

sudo rclone mount EncFS:/ /mnt/mergerfs --config=/opt/rclone/rclone.conf --allow-other=true --attr-timeout=8700h --vfs-cache-mode full --cache-dir=/var/cache/rclone --vfs-cache-max-age 8760h --vfs-cache-max-size 250G --umask=000 --user-agent=unionfs --syslog --human-readable --metadata -vv --rc --rc-enable-metrics --rc-web-gui --rc-allow-origin --rc-web-gui-update --rc-web-gui-no-open-browser --rc-user=rclone --rc-pass=REDACTED --rc-addr=:5572 &

The rclone config contents with secrets removed.

[ToasterDEV-OneDrive]
type = onedrive
token = {"access_token":"REDACTED","expiry":"2023-07-02T19:28:54.045634378-06:00"}
drive_id = REDACTED
drive_type = personal
chunk_size = 250M

[ToasterDEV-GDrive]
type = drive
client_id = REDACTED
client_secret = REDACTED
scope = drive
root_folder_id = REDACTED
token = {"access_token":"REDACTED","expiry":"2023-07-02T19:28:55.440567273-06:00"}
chunk_size = 512Mi
team_drive =
use_trash = false
acknowledge_abuse = true

[Local]
type = local

[Union]
type = union
upstreams = ToasterDEV-OneDrive:Union ToasterDEV-GDrive: Lizita63-OneDrive:Union Flanch1942-OneDrive:Union
action_policy = lus
create_policy = lus
search_policy = lus
cache_time = 3600

[Chunker]
type = chunker
remote = Local:/mnt/mergerfs
hash_type = sha1all
chunk_size = 2Gi

[EncFS]
type = crypt
remote = Backblaze:RCloneEncFS
filename_encryption = obfuscate
directory_name_encryption = true
password = REDACTED
password2 = REDACTED

[Lizita63-OneDrive]
type = onedrive
token = {"access_token":"REDACTED","expiry":"2023-07-02T19:28:54.702585575-06:00"}
drive_id = e9c286f9c4d0f78a
drive_type = personal
chunk_size = 250M

[Lizita63-GDrive]
type = drive
client_id = REDACTED
client_secret = REDACTED
scope = drive
root_folder_id = REDACTED
token = {"access_token":"REDACTED","expiry":"2023-06-22T02:12:28.294342395-06:00"}
chunk_size = 512Mi
team_drive =
use_trash = false
acknowledge_abuse = true

[Flanch1942-OneDrive]
type = onedrive
token = {"access_token":"REDACTED","expiry":"2023-07-02T19:28:54.835715734-06:00"}
drive_id = 8559f24b32b9889d
drive_type = personal
chunk_size = 250M

[Flanch1942-GDrive]
type = drive
client_id = REDACTED
client_secret = REDACTED
scope = drive
root_folder_id = REDACTED
token = {"access_token":"REDACTED","expiry":"2023-06-22T02:15:33.416342143-06:00"}
chunk_size = 512Mi
team_drive =
use_trash = false
acknowledge_abuse = true

[MergerFS]
type = local
copy_links = true
links = true

[Backblaze]
type = b2
account = REDACTED
key = REDACTED
download_url = https://static.myredacteddomain.tld
upload_cutoff = 5Gi
chunk_size = 5Gi
memory_pool_use_mmap = true

A log from the command with the -vv flag

RClone Logs (github.com)

I would suspect a problem with FUSE.

Are you using the Proxmox VE kernel on Ubuntu?

Try mounting the same remote on another computer. That will confirm whether the issue is corrupted encrypted data or your specific hardware configuration.

Kind of. I'm running Ubuntu in an LXC container on a Proxmox 8 host.

LXC Container:

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.2 LTS
Release:        22.04
Codename:       jammy

Linux UbuntuLXC 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64 x86_64 x86_64 GNU/Linux

Host:

pve-manager/8.0.3/bbf3993334bfa916 (running kernel: 6.2.16-3-pve)

Linux proxmox 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64 GNU/Linux

Okay, just tried it on a WSL host with the same Ubuntu LTS version.

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.2 LTS
Release:        22.04
Codename:       jammy

Linux UbuntuLXC 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64 x86_64 x86_64 GNU/Linux

After installing rclone and successfully mounting the filesystem, the same problem occurred. However, I noticed that for some reason the files in the folder are all 1KB in size, and there are copies of them with a different extension that seem to actually hold the files' data, though still encrypted.

Here are the logs of the attempt on my laptop. Is there anything else I could try to look up?

Thanks for the help so far!

It's a very weird issue, I have to admit, and I struggle to make sense of it.

Let's try to get to the bottom of this.

Download a random file, e.g. this one, which has problems according to your log:

2023/07/07 08:09:33 INFO : data/media/tv/Shiki/banner.jpg: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: not an encrypted file - bad magic string

rclone copy EncFS:data/media/tv/Shiki/banner.jpg . -vv

and check if you can open it locally.

Let's see what its encrypted path/filename is:

rclone cryptdecode EncFS: data/media/tv/Shiki/banner.jpg --reverse

It will print the encrypted path, shown below as {enc_path_name/enc_file_name}.

Now let's download it:

rclone copy Backblaze:RCloneEncFS/{enc_path_name/enc_file_name} .

hexdump -C -n 100 {enc_file_name}

These copies are probably sidecar files from chunker, since you use hash_type = sha1all - if that is what you intended, it is not a problem. But it also means you have to access the Backblaze remote through chunker, as any file bigger than 2Gi is split into parts. The 1KB files contain metadata only.
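As an illustration of chunker's on-remote layout (hypothetical file name and sizes; the default name format and the metadata fields match what your hexdump shows later):

```shell
# With chunk_size = 2Gi, a 5 GiB "movie.mkv" is stored by chunker as
# data chunks plus a tiny sidecar (these are "the 1KB files"):
#   movie.mkv                   <- ~100 B JSON metadata sidecar
#   movie.mkv.rclone_chunk.001  <- first 2 GiB of data
#   movie.mkv.rclone_chunk.002  <- second 2 GiB
#   movie.mkv.rclone_chunk.003  <- final 1 GiB
# The sidecar's contents look like this (sha1 value is a placeholder):
printf '%s' '{"ver":1,"size":5368709120,"nchunks":3,"sha1":"0000000000000000000000000000000000000000"}' > movie.mkv
cat movie.mkv
```

Reading `movie.mkv` through the chunker remote reassembles the chunks; reading the bucket directly only shows the sidecar.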

🚀 sudo rclone --config=/opt/rclone/rclone.conf copy EncFS:data/media/tv/Shiki/banner.jpg . -vv
2023/07/07 09:31:44 DEBUG : rclone: Version "v1.63.0" starting with parameters ["rclone" "--config=/opt/rclone/rclone.conf" "copy" "EncFS:data/media/tv/Shiki/banner.jpg" "." "-vv"]
2023/07/07 09:31:44 DEBUG : Creating backend with remote "EncFS:data/media/tv/Shiki/banner.jpg"
2023/07/07 09:31:44 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2023/07/07 09:31:44 DEBUG : Creating backend with remote "Backblaze:RCloneEncFS/154.pmFm/0.umlqi/234.KM/248.Ynoqo/229.nmzzqD.vBs"
2023/07/07 09:31:44 DEBUG : fs cache: adding new entry for parent of "Backblaze:RCloneEncFS/154.pmFm/0.umlqi/234.KM/248.Ynoqo/229.nmzzqD.vBs", "Backblaze:RCloneEncFS/154.pmFm/0.umlqi/234.KM/248.Ynoqo"
2023/07/07 09:31:44 DEBUG : Creating backend with remote "."
2023/07/07 09:31:44 DEBUG : fs cache: renaming cache item "." to be canonical "/home/toasterdev"
2023/07/07 09:31:44 DEBUG : banner.jpg: Need to transfer - File not found at Destination
2023/07/07 09:31:45 ERROR : banner.jpg: Failed to copy: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 ERROR : Attempt 1/3 failed with 1 errors and: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 DEBUG : banner.jpg: Need to transfer - File not found at Destination
2023/07/07 09:31:45 ERROR : banner.jpg: Failed to copy: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 ERROR : Attempt 2/3 failed with 1 errors and: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 DEBUG : banner.jpg: Need to transfer - File not found at Destination
2023/07/07 09:31:45 ERROR : banner.jpg: Failed to copy: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 ERROR : Attempt 3/3 failed with 1 errors and: failed to open source object: not an encrypted file - bad magic string
2023/07/07 09:31:45 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         1.4s

2023/07/07 09:31:45 DEBUG : 9 go routines active
2023/07/07 09:31:45 Failed to copy: failed to open source object: not an encrypted file - bad magic string

So far no dice - the file doesn't seem to have been copied at all.

🚀 ls banner.jpg
ls: cannot access 'banner.jpg': No such file or directory
🚀 sudo rclone --config=/opt/rclone/rclone.conf cryptdecode EncFS: data/media/tv/Shiki/banner.jpg --reverse
data/media/tv/Shiki/banner.jpg   154.pmFm/0.umlqi/234.KM/248.Ynoqo/229.nmzzqD.vBs
🚀 sudo rclone copy --config=/opt/rclone/rclone.conf Backblaze:RCloneEncFS/154.pmFm/0.umlqi/234.KM/248.Ynoqo/229.nmzzqD.vBs .
🚀 hexdump -C -n 100 229.nmzzqD.vBs
00000000  7b 22 76 65 72 22 3a 31  2c 22 73 69 7a 65 22 3a  |{"ver":1,"size":|
00000010  32 32 36 34 31 2c 22 6e  63 68 75 6e 6b 73 22 3a  |22641,"nchunks":|
00000020  31 2c 22 73 68 61 31 22  3a 22 65 64 61 63 36 30  |1,"sha1":"edac60|
00000030  65 66 33 34 35 32 36 38  33 37 64 32 33 63 35 39  |ef34526837d23c59|
00000040  63 33 38 64 36 39 30 32  30 31 61 33 64 35 66 63  |c38d690201a3d5fc|
00000050  62 31 22 7d                                       |b1"}|
00000054

As you said, it seems the file is a sidecar containing only the metadata, though fortunately it was copied successfully when using rclone copy on the underlying remote instead of reading it through the mount.
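For reference, the 84 bytes above parse cleanly as chunker metadata JSON (a quick sketch, assuming python3 is available; the values are copied from the hexdump):

```shell
# Parse the downloaded sidecar's contents as JSON to confirm it is
# chunker metadata rather than image data
printf '%s' '{"ver":1,"size":22641,"nchunks":1,"sha1":"edac60ef34526837d23c59c38d690201a3d5fcb1"}' \
  | python3 -c 'import json,sys; m=json.load(sys.stdin); print(m["nchunks"], "chunk(s),", m["size"], "bytes")'
# -> 1 chunk(s), 22641 bytes
```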

Your files are not encrypted - only their names are obfuscated. How you achieved that is hard to tell. :)

EncFS itself works properly, as your test.txt shows. So when you now upload new files through EncFS, everything is as it should be.
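One quick way to tell whether any downloaded object actually holds crypt data: files written by rclone crypt begin with the 8-byte magic string "RCLONE\x00\x00", which is exactly what the "bad magic string" error is complaining about. A minimal sketch (sample.bin stands in for the downloaded object; its first bytes are taken from your hexdump):

```shell
# Check the first bytes for the rclone crypt magic string
printf '%s' '{"ver":1,"size":22641' > sample.bin   # starts with '{', chunker JSON
if [ "$(head -c 6 sample.bin)" = "RCLONE" ]; then
  echo "crypt data"
else
  echo "not crypt data"   # this branch fires for your files
fi
```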

Try these new remotes:

[EncFS-test]
type = crypt
remote = Backblaze:RCloneEncFS
filename_encryption = obfuscate
directory_name_encryption = true
no_data_encryption = true
password = REDACTED
password2 = REDACTED

[Chunker-test]
type = chunker
remote = EncFS-test:
hash_type = sha1all
chunk_size = 2Gi

and mount Chunker-test.

You should see your files.

If your data on Backblaze is supposed to be encrypted, you will have to delete it and upload it again. I suggest that next time you test with a small amount of data first, and ask on the forum if you're not sure. You have a bit of an encryption/chunker mess at the moment.

I guess you made some mistake here. I'm not sure what the purpose of the sync and move was, instead of syncing directly from each remote to EncFS.
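A sketch of what syncing directly through the crypt remote would look like (flags trimmed for brevity; with EncFS: as the destination, rclone encrypts the data as it uploads, instead of copying the source bytes verbatim into the bucket):

```shell
# Sketch, using the same config file as your commands: route the sync
# through the crypt remote so rclone encrypts on upload
rclone sync ToasterDEV-GDrive: EncFS: --config=/opt/rclone/rclone.conf --checksum -vv
```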

Actually, I think there is more wrong here: the downloaded file is not a JPG but an unencrypted chunker metadata file. The real banner.jpg is called banner.jpg.rclone_chunk.001.

It all points to the way you use chunker... to me it looks like you chunked things after encryption instead of before.

So you can also try:

[EncFS-test]
type = crypt
remote = Chunker-test:
filename_encryption = obfuscate
password = REDACTED
password2 = REDACTED

[Chunker-test]
type = chunker
remote = Backblaze:RCloneEncFS
hash_type = sha1all
chunk_size = 2Gi

and mount EncFS-test. If this works, it means your data is encrypted but the chunker metadata is not - which is not a tragedy. Still, you should check your data carefully: if some of it is not chunked, it will also fail to read. If you use chunker, you have to use it ALL the time, or things won't work as you expect.

I'm pretty confused too, to be honest. To take advantage of OneDrive's checksum support and Google Drive's --fast-list, I just ran the following commands one after the other (all in all it took about four days to copy everything over):

sudo rclone --config="/opt/rclone/rclone.conf" sync ToasterDEV-GDrive: Backblaze:/RCloneEncFS -vv --checksum --checkers=100 --transfers=100 --fast-list --multi-thread-streams=1024 --syslog --rc --rc-enable-metrics --rc-web-gui --rc-allow-origin --rc-web-gui-update --rc-web-gui-no-open-browser --rc-user=rclone --rc-pass=REDACTED --rc-addr=:5572 &

sudo rclone --config="/opt/rclone/rclone.conf" sync Lizita63-OneDrive:/Union Backblaze:/RCloneEncFS -vv --checksum --checkers=100 --transfers=100 --multi-thread-streams=1024 --syslog --rc --rc-enable-metrics --rc-web-gui --rc-allow-origin --rc-web-gui-update --rc-web-gui-no-open-browser --rc-user=rclone --rc-pass=REDACTED --rc-addr=:5572 &

sudo rclone --config="/opt/rclone/rclone.conf" sync ToasterDEV-OneDrive:/Union Backblaze:/RCloneEncFS -vv --checksum --checkers=100 --transfers=100 --multi-thread-streams=1024 --syslog --rc --rc-enable-metrics --rc-web-gui --rc-allow-origin --rc-web-gui-update --rc-web-gui-no-open-browser --rc-user=rclone --rc-pass=REDACTED --rc-addr=:5572 &

sudo rclone --config="/opt/rclone/rclone.conf" sync Flanch1942-OneDrive:/Union Backblaze:/RCloneEncFS -vv --checksum --checkers=100 --transfers=100 --multi-thread-streams=1024 --syslog --rc --rc-enable-metrics --rc-web-gui --rc-allow-origin --rc-web-gui-update --rc-web-gui-no-open-browser --rc-user=rclone --rc-pass=REDACTED --rc-addr=:5572 &

sudo rclone --config="/opt/rclone/rclone.conf" sync Union:/: Backblaze:/RCloneEncFS -vv --checksum --checkers=100 --transfers=100 --fast-list --multi-thread-streams=1024 --syslog --rc --rc-enable-metrics --rc-web-gui --rc-allow-origin --rc-web-gui-update --rc-web-gui-no-open-browser --rc-user=rclone --rc-pass=REDACTED --rc-addr=:5572 &

sudo rclone --config="/opt/rclone/rclone.conf" move Union:/ Backblaze:/RCloneEncFS -vv --checksum --checkers=100 --transfers=100 --fast-list --multi-thread-streams=1024 --syslog --rc --rc-enable-metrics --rc-web-gui --rc-allow-origin --rc-web-gui-update --rc-web-gui-no-open-browser --rc-user=rclone --rc-pass=REDACTED --rc-addr=:5572 &

After everything copied over without reported errors, I mounted Backblaze directly, and seeing that the filenames looked alright, I left it running without much issue until I noticed the problem described in the first post.

Chunker was indeed involved in uploads to the Union: remote, but not in any of the sync or move operations, though I'm not sure where I went wrong in all of this.

I see. As you said, this mount did indeed work, so all's not lost, thankfully. Seriously, thanks for that.

If I were to do a move operation to a new bucket from the EncFS-test configuration, would I be able to have everything encrypted, ideally while removing the need for Chunker?

OK, now I understand why the move came later - it is logical. But not everything makes sense yet. :)

I think there is a way to recover:

using:

[EncFS-test]
type = crypt
remote = Chunker-test:
filename_encryption = obfuscate
password = REDACTED
password2 = REDACTED

[Chunker-test]
type = chunker
remote = Backblaze:RCloneEncFS
hash_type = sha1all
chunk_size = 2Gi

[EncFS-NEW]
type = crypt
remote = Backblaze:RCloneEncFS-NEW   <<--- create RCloneEncFS-NEW bucket first
filename_encoding = base64
password = REDACTED
password2 = REDACTED

then

rclone copy EncFS-test: EncFS-NEW: --server-side-across-configs

Test with --dry-run first.

This will properly encrypt everything, including filenames, and gets rid of chunker.

Later you can mount EncFS-NEW and test that all is OK (check as much data as possible - small files, large files - to make sure things work).

If all is OK, you can delete Backblaze:RCloneEncFS.

Actually, wait... all chunked data (files larger than 2GB) won't be copied server-side - it will be downloaded and uploaded again. There is NO other option, I'm afraid. All files smaller than 2GB will be copied server-side. I guess you pay for downloads, so maybe it's better to just upload everything again? But properly. :)

I see. Fortunately downloads go through Cloudflare, so that isn't too much of a problem, thankfully. If I'm understanding correctly, the commands would end up being:

At first:
sudo rclone --config=/opt/rclone/rclone.conf copy EncFS-test: EncFS-NEW: --server-side-across-configs --dry-run

And if everything goes smoothly:
sudo rclone --config=/opt/rclone/rclone.conf copy EncFS-test: EncFS-NEW: --server-side-across-configs

Just to be sure, given that most files will probably be over 2GB, is there any tweaking I could attempt to copy things faster? I'm running a 4-core Intel (i5-6600T) with 64GB of RAM, of which I can spare about 8GB for this task, and my internet connection is 1Gbps down/300Mbps up.

I'm thinking of altering the number of transfers and checkers, enabling --fast-list, plus enabling the --metadata and --checksum flags to verify, but I'm not sure if there's anything else I should take into account.
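As a back-of-envelope check on how long the re-upload might take at my 300 Mbps uplink (a rough sketch that ignores protocol overhead and B2 API latency):

```shell
# Seconds per TiB = Mbit per TiB / uplink Mbps; 1 TiB = 1024 GiB * 8192 Mbit
awk 'BEGIN { mbit_per_tib = 1024 * 8 * 1024; mbps = 300;
             printf "%.1f hours per TiB at %d Mbps\n", mbit_per_tib/mbps/3600, mbps }'
# -> 7.8 hours per TiB at 300 Mbps
```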

Why do you use sudo? You can, but there is no need.

Yes, you can add all the funky performance flags. :)

The key is to do a --dry-run, maybe capture the output to a log file, and check that everything looks good.

Maybe also test first with a smaller folder in Backblaze:RCloneEncFS?

Maybe also add more transfers, say --transfers 16 (?), and see how fast it goes. It won't be faster than 300Mbps - if it's slower, increase transfers.

Mostly because I had set up a systemd unit to run the mount as its own user, and the config file is read-protected to the rclone user; aside from that there's no need, as you said. If this goes smoothly, I'll try to figure out how to put it back together without needing sudo.

That's probably a good idea, starting small and going from there.

For posterity's sake, this is the command I'm using to test first:

sudo rclone --config=/opt/rclone/rclone.conf copy EncFS-test:/data/media/tv/Shiki EncFS-NEW: --server-side-across-configs --dry-run --fast-list --multi-thread-streams=128 --checksum --checkers=16 --transfers=16 -vv --syslog --rc --rc-enable-metrics --rc-web-gui --rc-allow-origin --rc-web-gui-update --rc-web-gui-no-open-browser --rc-user=rclone --rc-pass=REDACTED --rc-addr=:5572 &

Thanks for all the help! I'll give it a try and report back.

I can have a look at the log file if you want - just share it after the --dry-run.

Sure, thanks!

I'll just send the log to a text file instead of syslog then; that should save some debugging later.


Okay, here it is.

Thanks for the help!

Looks good, but:

2023/07/07 11:43:29 DEBUG : Creating backend with remote "Backblaze:RCloneEncFS/154.pmFm/0.umlqi/234.KM/248.Ynoqo"
2023/07/07 11:43:29 DEBUG : Couldn't decode error response: EOF
2023/07/07 11:43:29 DEBUG : Chunked 'Chunker-test:/154.pmFm/0.umlqi/234.KM/248.Ynoqo': invalid chunks in object "28.EGDszH.yqz"

As it is in Chunker-test, it is maybe some old mess.