Crypt strength comparison

Hi,

How much stronger/weaker is the rclone crypt function compared to:

  1. Cryptomator - https://cryptomator.org/

  2. CryFS - https://www.cryfs.org/

Just want to see if it's worth even using the above if rclone crypt is at the same grade.

I know they have other benefits, like not disclosing the directory structure or, for CryFS, file sizes, but that's a separate topic.

Thanks

Without going into academic depth (which I am not qualified for anyway): rclone crypt is based on Go's NaCl secretbox (XSalsa20 + Poly1305) with a 256-bit key, with file and folder names encrypted separately using 256-bit AES. In terms of raw cryptographic strength that is basically industry standard and impossible to brute-force by current means. You could add even more bits, but why? Does it matter if it takes a million years to brute-force or a hundred million? The "real strength" of encryption often comes down to whether it has gone through thorough review, is open and can be inspected for flaws and weaknesses, has the functions you need, and is good about not leaking information. And that is a very hard thing to quantify as "better" or "worse"...

AES is a very common base for encryption, so I wouldn't be surprised at all if those two others use something similar under the hood.

You can read more about it in the rclone crypt documentation, and you can probably google a lot of details from the algorithm names.

Personally I use rclone crypt. Do note that there are plans for further developments relating to crypt that will give it benefits, for rclone use, over other crypts. Notably, there may be support for integrated metadata sometime soon, letting you hash-compare across crypted and uncrypted remotes, among other things. Things like this won't really be possible with a non-integrated format. Besides, rclone has really handy remotes to encrypt on the fly etc. that just fit very well into the rest of the ecosystem, whereas an external solution will have to be configured on the side.
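
For reference, a crypt remote is just a thin wrapper over any other remote. As a rough sketch of what the config ends up looking like (the remote names, the "encrypted" folder and the values here are placeholders - you would normally create this interactively with "rclone config", which stores the passwords obscured):

    [gdrive]
    type = drive
    # ... the rest of your normal cloud remote config ...

    [secret]
    type = crypt
    # wrap any existing remote or path on it
    remote = gdrive:encrypted
    # encrypt file and folder names too
    filename_encryption = standard
    directory_name_encryption = true
    # set via "rclone config"; stored obscured, shown here as placeholders
    password = ***
    password2 = ***

After that, something like "rclone copy /local/photos secret:photos" encrypts everything on the fly as it uploads.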

AFAIK, it is unavoidable to expose the folder structure due to the limitations of the cloud. These systems (and thus rclone itself) work on a file level, not a disk or block level. It would be impractical in the extreme to try to make a unified container (like one big file) that would completely obscure the structure. You can still encrypt all the names (files and folders) though, so that's typically more than good enough. Also, while approximate file sizes will be visible, they will not be the exact size of the input file, which makes it pretty hard to correlate anything.
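
If you want to see exactly what is exposed, you can list the same data through the crypt remote and directly on the underlying remote (remote names as in the sketch above, purely illustrative):

    # decrypted view: real names and sizes
    rclone lsf --format "ps" secret:

    # what the cloud provider actually stores: scrambled names, slightly larger sizes
    rclone lsf --format "ps" gdrive:encrypted

The size difference is just the small per-file overhead crypt adds, so original sizes are approximately - but not exactly - recoverable.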

Thanks for response.

Just to confirm on above point.

  1. CryFS doesn't expose file sizes. All files are stored in blocks of a user-defined size, e.g. 16 KB.

  2. Both of the above don't expose the directory structure.

They definitely have an advantage in the above two respects.

Maybe against rclone's 256-bit crypt there is no advantage in strength - but I'm not certain yet.

Thanks

You could hide file sizes if you wanted with the rclone "chunker" remote, which splits files into chunks of whatever size you want. Together with encryption it would be very hard to guess at sizes, although you'd probably have to sacrifice your modtime attributes too if you wanted to really hide things that thoroughly.
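
As a sketch of one possible layering (chunk size and names are just examples; the chunker writes its chunks through the crypt remote from the earlier sketch, so the chunk names and content get encrypted too):

    [chunked]
    type = chunker
    # write chunks through the crypt remote so names and content are encrypted
    remote = secret:
    # no stored object will be larger than this (example value)
    chunk_size = 100M

You then point your transfers at "chunked:" instead of "secret:". Note that chunker only caps the maximum object size - files smaller than the chunk size are not padded up - so to get truly uniform sizes like CryFS you would need a very small chunk size, with the performance cost discussed below.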

This does come with some unavoidable issues in performance though. Opening many files on a cloud is always going to be slower than opening a single one, and this is usually the bigger concern. 16 KB chunks, for example, would be really, really bad performance-wise.

Maybe saying "unavoidable" was not quite correct.
But when you work at a file level and want to hide the folder structure, you are going to have to put all that structure info as data inside an encrypted file and then somehow reconstruct it to be presented. That is doable - but I think you will lose a lot of the functionality the server can do for you as the cost of doing this, like simply listing a certain folder. You have to grab and process that data locally instead, which will no doubt be slower. No fast-listing. No way to server-side copy a specific (real) file. There are a lot of functions I just can't see being possible with this approach.

Also again - performance will be a problem. Very small chunks will be extremely slow due to concurrent access limitations on most backends. But if the chunks are very large then you are going to have to reupload a lot of data each time you make one small change.

But this is just my speculation. I fully admit I don't know enough about the technical details of these two crypt solutions to fully evaluate them. Perhaps they have found good ways to mitigate the worst issues - but I suspect there are still going to be some significant tradeoffs with such a strategy.

rclone: stable, time-proven, multi-OS
cryfs: alpha software pretending to be beta software, no real support for Windows

I tried creating a new config for chunker, but couldn't see a chunker option among the 31 backends?

rclone v1.49.5

  • os/arch: linux/amd64
  • go version: go1.13.1

It was introduced with version 1.50.
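
To double-check what you're running and get a current build (the install script below is the one from rclone.org; distro packages often lag behind):

    rclone version
    # one way to install/upgrade to the latest stable release
    curl https://rclone.org/install.sh | sudo bash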
