Further to a post last year, "Remove metadata leaks", I just wanted to check whether there has been any progress on the chunker also merging files, as per a)/b) below.
The result would be to hide file-size metadata, as follows. If the chunk size is set to 5 MB, then:
a) all files smaller than 5 MB would be merged to create 5 MB chunks.
b) all files larger than 5 MB would be split into 5 MB chunks.
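For illustration, here is a toy sketch of the merge/split packing described above (Python, invented names, not rclone code):

```python
# Toy sketch of the proposed merge/split packing -- not rclone code.
# Files smaller than the chunk size are packed together; larger ones
# are cut into chunk-size pieces. All sizes are in bytes.

CHUNK = 5 * 1024 * 1024  # 5 MB

def pack(files):
    """files: list of (name, size) pairs. Returns a list of chunks,
    each a list of (name, offset, length) slices."""
    chunks, current, used = [], [], 0
    for name, size in files:
        offset = 0
        while size > 0:
            take = min(size, CHUNK - used)
            current.append((name, offset, take))
            used += take
            offset += take
            size -= take
            if used == CHUNK:       # chunk is full: start a new one
                chunks.append(current)
                current, used = [], 0
    if current:                     # final, possibly partial chunk
        chunks.append(current)
    return chunks

chunks = pack([("a.txt", 1_000_000), ("b.txt", 2_000_000), ("big.iso", 12_000_000)])
```

Every chunk except possibly the last comes out at exactly 5 MB, so an observer on the remote sees only uniform chunk sizes rather than real file sizes.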
If files can be chunked/merged across directories, that would help flatten directories, increasing privacy even more.
Of course, I would intend to feed the above through the encryption layer too.
This would really assist with privacy when storing to clouds.
I use rclone to back up files to the cloud. If I have a folder with lots of small files, I zip them before upload, and the files inside the zip are encrypted with a password.
If you are going to use crypt, then:
- the file sizes are going to change.
- the filenames are encrypted.
- the folder names are encrypted.
Regarding "If files can be chunked/merged across directories": how would you expect rclone to handle that? rclone would have to
- maintain a database to convert flattened files back to their correct paths.
- use per-file metadata, so doing an ls would require a very large number of transactions.
And then sync would have the same problem, converting flattened names to full paths and vice versa.
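To make that concrete, here is a hypothetical sketch of such a manifest database using stdlib sqlite3; the schema and all names are invented for illustration:

```python
# Hypothetical manifest mapping flattened chunk names back to the
# original paths -- illustrates the bookkeeping rclone would need.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE manifest (
    chunk   TEXT,    -- flattened name as stored on the remote
    path    TEXT,    -- original file path
    offset  INTEGER, -- where this file's bytes start inside the chunk
    length  INTEGER)""")

db.executemany("INSERT INTO manifest VALUES (?, ?, ?, ?)", [
    ("chunk_0001", "docs/a.txt",           0, 1_000_000),
    ("chunk_0001", "photos/b.jpg", 1_000_000, 2_000_000),
])

# An ls of the original tree becomes one query against the manifest
# rather than one request per remote object -- but the manifest itself
# must be fetched, kept consistent, and re-uploaded on every change.
rows = db.execute("SELECT path, length FROM manifest ORDER BY path").fetchall()
```

The trade-off is exactly the one raised above: listing gets cheap, but every sync must keep this database and the remote in lockstep.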
I agree there would be a compromise between security and speed. If the user could make that decision, that would be perfect.
I use duplicati for incremental online backups, as it's automatic and has merge/chunk built in.
Rclone is more helpful to me where I need to sync with an online cloud; that cannot be done if I keep zipping up and uploading/downloading.
Once the data is in the cloud, I simply mount it and run a sync app such as FreeFileSync, which is quite amazing.
The beauty of rclone is that it allows multiple custom layers of crypt/chunk/cloud, which work magically in the background, and the mount doesn't need to be aware of any of that.
With respect to online privacy, at present it falls short by not hiding file-size metadata and by not flattening the directory structure.
The first step, I believe, is merging alongside the chunker; this should be quite a simple upgrade.
Directory flattening is much more complex, and although I'd love to see it at some point, it's not what I'm hoping for in this request.
I just saw a MASSIVE difference between large-file and small-file upload speeds.
It seems that if the chunker were also able to combine small files, upload speed could increase 10-20 fold.
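A back-of-the-envelope model shows where that gap comes from: each uploaded object pays a fixed per-request cost. The overhead and bandwidth numbers below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope model of why batching small files helps.
# All numbers are illustrative assumptions, not measurements.
overhead_s = 0.5                    # fixed cost per uploaded object (round trips etc.)
bandwidth = 10e6                    # 10 MB/s link
n_files, file_size = 1000, 50_000   # 1000 x 50 KB small files

# Uploading each file individually pays the overhead 1000 times.
t_individual = n_files * (overhead_s + file_size / bandwidth)

# Packing the same data into 10 chunks pays it only 10 times.
t_merged = overhead_s * 10 + (n_files * file_size) / bandwidth

speedup = t_individual / t_merged
```

With these assumed numbers the individual uploads take about 505 s versus about 10 s merged, a roughly 50x speedup, which is in the same ballpark as the 10-20x observed above.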
What's the best way to request this feature?
This feature is too difficult for rclone to implement easily.
I suggest you use a different tool to pack up your small files before uploading them; for example, you can use tar in a pipeline with rclone:

tar czf - /path/to/directory | rclone rcat remote:file.tar.gz
That's a real pity.
Using other tools would just defeat the transparent benefit of using rclone, where I can mount backends or sync/update individual files, etc.
Although it's too difficult for rclone in general, I thought the chunker backend might be able to handle this easily, as it's already doing something similar in reverse when splitting files.
I agree it would be nice. We could possibly do a tar backend where you could read individual files out of an uncompressed tar file, and upload files to a tar with something like:

rclone tar dir remote:file.tar

But doing a general-purpose backend which chunks files into tars transparently would be hard (but not impossible!).
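As a sketch of why this is feasible for uncompressed tars: Python's stdlib tarfile module can already list and extract individual members without unpacking the whole archive, which is the core operation such a backend would need (this is an illustration, not how rclone would implement it):

```python
# Sketch: random access to individual files inside an uncompressed tar,
# the core operation a hypothetical tar backend would need.
import io
import tarfile

# Build a small tar in memory with two "uploaded" files.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("a.txt", b"hello"), ("b.txt", b"world!")]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Read back just one member: tarfile seeks to its header and data,
# with no need to extract the whole archive.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:") as tar:
    names = tar.getnames()
    data = tar.extractfile("b.txt").read()
```

Compression is what breaks this: a gzipped tar has no per-member offsets, so random access would mean decompressing from the start, which is why the idea above is limited to uncompressed tars.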