@ncw Is there a way to read a crypt drive without rclone?
e.g. if I mount the acd drive with some other tool, is there a way to get at the unencrypted files?
HOW TO UPDATE:

1. Make a new "Encrypt/Decrypt a remote" (acdcrypt):

rclone config
n) New remote
Storage> 5
remote> myremote:path/
etc ... (crypt)

2. Copy your encfs unencrypted files to the new remote:

/usr/bin/rclone copy /path/encfs-unencrypted/ acdcrypt: --no-traverse --transfers=5 --checkers=5 --log-file=/var/log/acd2crypt.log

3. Mount your new crypt drive:

rclone mount --allow-non-empty --allow-other acdcrypt: /path/acdcrypt/ &

4. Change your unionfs settings to point at the new crypt drive:

/usr/bin/unionfs-fuse -o cow -o allow_other /path/upload=RW:/path/acdcrypt=RO /path/unionfs

Before switching your libraries over, I suggest you run rclone sync once and check that the files are OK (a sketch of this is below).
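A minimal sketch of that verification step, reusing the placeholder paths from the steps above (note that sync deletes anything on the destination that is not in the source, so only run it against the new crypt remote):

/usr/bin/rclone sync /path/encfs-unencrypted/ acdcrypt: --log-file=/var/log/acd2crypt.log
/usr/bin/rclone check /path/encfs-unencrypted/ acdcrypt:

rclone check compares the two sides file by file; against a crypt remote it can only compare sizes, since the encrypted data has no matching checksums.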
Is there a way to mount crypt without rclone?
e.g. if I mount the acd drive without rclone, how can I get the unencrypted files?
My main concern is: what if at some point rclone is abandoned, or god forbid ncw loses API access to Amazon Drive, etc.? I would like to be sure my data would still be accessible.
With encfs I can easily use acd_cli or any other tool and still mount the data unencrypted; with crypt I am not sure, as I don't really know it yet.
I am testing a crypt drive now and I noticed that editing files directly on the crypt mount does not work. I opened a file and changed a few lines, but on save I received this error:
2017/01/04 14:05:31 .ftest.txt.swp: WriteFileHandle.Flush
2017/01/04 14:05:31 .ftest.txt.swp: Error detected after finished upload - waiting to see if object was uploaded correctly: HTTP code 429: "429 Too Many Requests": response body: "{"logref":"789bd6f6-d27e-11e6-ae3d-a3ca32a1ca32","message":"Concurrent Access on same node. Please back off.","code":""}" ("429 Too Many Requests")
2017/01/04 14:05:31 .ftest.txt.swp: Object not found - waiting (1/1)
2017/01/04 14:05:36 .ftest.txt.swp: Giving up waiting for object - returning original error: HTTP code 429: "429 Too Many Requests": response body: "{"logref":"789bd6f6-d27e-11e6-ae3d-a3ca32a1ca32","message":"Concurrent Access on same node. Please back off.","code":""}" ("429 Too Many Requests")
2017/01/04 14:05:36 pacer: Rate limited, sleeping for 389.300587ms (1 consecutive low level retries)
2017/01/04 14:05:36 pacer: low level retry 1/1 (error HTTP code 429: "429 Too Many Requests": response body: "{"logref":"789bd6f6-d27e-11e6-ae3d-a3ca32a1ca32","message":"Concurrent Access on same node. Please back off.","code":""}")
2017/01/04 14:05:36 .ftest.txt.swp: WriteFileHandle.Flush error: HTTP code 429: "429 Too Many Requests": response body: "{"logref":"789bd6f6-d27e-11e6-ae3d-a3ca32a1ca32","message":"Concurrent Access on same node. Please back off.","code":""}"
2017/01/04 14:05:36 .ftest.txt.swp: WriteFileHandle.Release nothing to do
Thanks Nick. This is a huge concern for me as well, now that I have everything uploaded to gdrive and acd using crypt. Can you give me an example of the syntax needed to accomplish this locally, using a mock scenario? I'm trying to figure out how to test such a (hopefully) unlikely scenario.
If you aren’t providing a howto on switching from encfs to crypt, maybe you shouldn’t title the thread “HOW TO: Switch from encfs to rclone crypt”.
If you are asking a question about how to do something, there is a special key for that next to the right shift key that looks like this: ?
What you do is configure a crypt pointing at a local directory, and copy your encrypted files in there with the same directory structure.
Something like this
$ rclone config
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/s/q> n
name> localcrypt
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
7 / Google Drive
\ "drive"
8 / Hubic
\ "hubic"
9 / Local Disk
\ "local"
10 / Microsoft OneDrive
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
12 / Yandex Disk
\ "yandex"
Storage> 5
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
remote> /tmp/encrypted_files_go_here
How to encrypt the filenames.
Choose a number from below, or type in your own value
1 / Don't encrypt the file names. Adds a ".bin" extension only.
\ "off"
2 / Encrypt the filenames see the docs for the details.
\ "standard"
filename_encryption> 2
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
Remote config
--------------------
[localcrypt]
remote = /tmp/encrypted_files_go_here
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/s/q> q
$ mkdir /tmp/encrypted_files_go_here
$ rclone ls localcrypt:
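To finish the mock scenario: fetch the raw encrypted files with whatever tool you like, drop them into that directory with the same structure, and read them back through the crypt remote. A minimal sketch, where /tmp/acd_raw and /tmp/decrypted_files are made-up local paths and the encrypted files are assumed to be downloaded already:

$ cp -a /tmp/acd_raw/. /tmp/encrypted_files_go_here/
$ rclone ls localcrypt:
$ rclone copy localcrypt: /tmp/decrypted_files

rclone ls should show the original (decrypted) names and sizes, and the copy writes out fully decrypted files.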
Yeah, I just found it weird that it did not allow me to edit a simple, less-than-1k file.
Maybe rclone could cache changes locally and push them on file close. (The weird part is that direct write, file change, move, etc. work with an acd_cli mount.)
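Until then, a workaround sketch for small edits, using a hypothetical file docs/test.txt on the crypt remote: copy it out, edit it locally, and copy it back, so the upload happens as one normal transfer rather than a write through the mount:

rclone copy acdcrypt:docs/test.txt /tmp/edit/
vi /tmp/edit/test.txt
rclone copy /tmp/edit/test.txt acdcrypt:docs/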
P.S. Around 10 days to go to re-encrypt my whole library; at the moment I'm getting an average speed of around 50 MB/s using:

/usr/bin/rclone copy /storage/acd/ acdcrypt: -c --no-traverse --no-update-modtime --transfers=30 --checkers=30 --min-age 180m --log-file=/var/log/acd2crypt.log

Hopefully I won't get my account locked by Amazon, as then I will need to change it to --transfers=2 and maybe even limit bandwidth. (I got quite a few locks when I was making a backup copy to my gdrive.)