I've been testing with Backblaze and FreeNAS and have hit an issue. If I upload encrypted files to the root of the bucket, I can list and decrypt the files with no problem. If I upload to a subfolder, I can list the folders/files using the "rclone ls stdremote:" command, but when using the crypt remote I get nothing back. Running with -vv shows this:
2019/10/17 11:56:57 DEBUG : 2002/u0hvke71gj84kd1tl5fn6gne88ts09m8q5cdmt3kmqctpgmiidu0: Skipping undecryptable file name: not a multiple of blocksize
So I know the setup on FreeNAS is OK, and I know the passwords in the rclone.conf file are OK, as tested against the root folder. I am using v1.49.5 and have tested with v1.49.1 with the same result.
Ideally I want to be able to set up multiple upload jobs to the same bucket, but into separate subfolders, using the same encryption password and salt.
Any help much appreciated!
It looks like the root of the crypt config should be b2:bucket/2002.
Can you post your config file with the keys/secrets/identifying info removed? Can you also post the rclone commands you are using?
Config below... I've tried adding the path to the crypt remote with no joy. Ideally I wouldn't have it there either.
Appreciate your help
[b2]
type = b2
account = xx
key = xx
hard_delete = true

[b2-crypt]
type = crypt
remote = b2:bucketxx
filename_encryption = standard
directory_name_encryption = true
password = xx
password2 = xx
I changed the .conf to have /2002 and it does work... oops!
But is it possible to upload to subdirectories with different jobs? I'd rather not send all my data at once, and I'd like a different schedule for each subdir.
Yes, what you can do is put the path on the end of the remote name, so
rclone copy /path/to/dir b2-crypt:subdir
Does that help?
Note that the name subdir will be encrypted.
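As a sketch of what this looks like in practice (assuming a crypt remote named b2-crypt wrapping b2:bucket, with placeholder local paths), the same directory appears differently depending on which remote you list it through:

```shell
# Copy through the crypt remote; "2002" is a plain name on this side.
rclone copy /path/to/2002 b2-crypt:2002

# Listing through the crypt remote shows the decrypted names:
rclone ls b2-crypt:2002

# Listing the underlying b2 remote shows the encrypted directory name
# (an opaque string like the one in the debug log above):
rclone ls b2:bucket
```

The point is that the subdirectory name only exists in readable form on the crypt side; the bucket itself stores an encrypted name.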
I think that's the issue: the sync job from FreeNAS is encrypting the subdir, so unless I include the path in the .conf file, rclone doesn't know to decrypt the subdir. When I run the rclone ls b2-crypt:/2002/ command I get no results.
Maybe the only way around this is a bucket per sync job.
I think you are mistaken in your assumption here.
A remote will use the same settings for all sub-folders when you access it. The path you specify only determines what the root of that remote is.
So as long as you have a separate folder-structure for encrypted files - and another for unencrypted files then there should be no conflict.
Or are you saying that FreeNAS is doing encryption on its own? That could certainly cause problems, as rclone would get confused by unrecognizable names. Why not just use rclone's encryption instead, if that is the case?
Hi... I hope I am wrong. FreeNAS uses rclone sync in the background, so it shouldn't be an issue. Maybe I should expand slightly on my use case.
I have a basic folder hierarchy that I would like to sync to a single bucket. I could choose the "years" folder, but that will be a lot of data and will run for days. I'd like to break it down and have a sync job for each folder. I should be able to create the same structure in my bucket by using FreeNAS to create the destination subdirs. That bit works fine.
The proof of the pudding is using rclone on my laptop to ensure I can decrypt the files should I lose my FreeNAS. If I run a straight rclone ls against the bucket, I get a list of the folders in readable text and encrypted filenames. What I can't seem to do is run rclone ls crypted: to see the file names, unless I include the subdir in the path within the .conf file.
Well as far as I know this should not happen. I mean, I use crypt with a sub-folder specified and I have never seen this fail. This is on Gdrive for me, but that shouldn't really matter.
But I think an important point to make here is that I believe you may have to specify the bucket outside of the config. If you use a folder in the path that should be fine, but I don't know if you can put the bucket in there. I remember coming across a problem like this when configuring GCP (another bucket-based cloud service). Can you test and see if this may be the cause of your problem?
Quick test: removed the bucket from the .conf and got this message
2019/10/17 17:04:12 Failed to create file system for "b2-crypt:bucket": failed to make remote b2:"a1fqh471l7pn1d68v82cbnidno" to wrap: you must use bucket "bucket" with this application key
But you put the bucket in the rclone command, right? (As part of the destination.) Because you still have to do that. I just remember issues trying to put the bucket in the config instead of the command directly. You still have to reference it, and if you didn't do that then I'd expect it to fail.
Command used: rclone ls -vv b2-crypt:bucket/photos/2002/
I tried including and removing :bucket from the crypt remote in the .conf too.
Those two statements are related!
I would suggest that you create an encrypted folder hierarchy so the 2002 directory name is encrypted; then you'll just be able to use rclone ls crypted: to see the folder hierarchy.
So you always want the crypted remote to be remote = b2:bucket in all the configs. Then you add the path when you do the sync:
rclone sync /path/to/2002 crypted:2002
I've decided to set up separate jobs for each folder, syncing to the same bucket. The difference being that I've created an rclone remote for every folder/sync job.
Thank you both for trying to help.