Single listing, decrypt specific folders - union/combine filtering

I've got an upstream (let's call it s3_upstream) which contains mostly unencrypted content, but also a few specific folders holding only encrypted content. I would like a single view/listing/remote that displays the unencrypted data as it is and decrypts names inside these specific encrypted folders.

Given structure:

file.txt
folder/file.txt
encryptedFolder/eFopbFZRBJlFyk7NFHuAQNOKDJhBwow_u_qL6FCG_3g
encryptedFolder2/HHjKeeHy6S-9BJuxpNeKPHqRae6mGHMkCthPsjFZtUk

I would like to end up with:

file.txt
folder/file.txt
encryptedFolder/Loga_Guzowianki.zip
encryptedFolder2/premiumizeme.svg

Please note that encryptedFolder and encryptedFolder2 are both plaintext names of folders/locations whose data and file paths are encrypted.
eFopbFZRBJlFyk7NFHuAQNOKDJhBwow_u_qL6FCG_3g and HHjKeeHy6S-9BJuxpNeKPHqRae6mGHMkCthPsjFZtUk are encrypted file paths (base64 encoded).

I think I've got somewhere with a combination of union and combine:

[s3_upstream]
type = s3
endpoint = ...
region = us-east-1
secret_access_key = ...
access_key_id = ...
no_check_bucket = true
provider = Other

[s3_with_bucket]
type = alias
remote = s3_upstream:bucket

[encrypted]
type = crypt
filename_encoding = base64
password = ...
remote = s3_with_bucket:encryptedFolder

[encrypted2]
type = crypt
filename_encoding = base64
password = ...
remote = s3_with_bucket:encryptedFolder2

[vaults]
type = combine
upstreams = "encryptedFolder=encrypted:" "encryptedFolder2=encrypted2:"

[union_vaults]
type = union
upstreams = s3_with_bucket: vaults:

The issue is that with rclone ls union_vaults: I get:

file.txt
folder/file.txt
encryptedFolder/Loga_Guzowianki.zip
encryptedFolder2/premiumizeme.svg
encryptedFolder/eFopbFZRBJlFyk7NFHuAQNOKDJhBwow_u_qL6FCG_3g
encryptedFolder2/HHjKeeHy6S-9BJuxpNeKPHqRae6mGHMkCthPsjFZtUk

It seems that the contents of encryptedFolder and encryptedFolder2 are correctly decrypted; unfortunately the raw (still-encrypted) listing from s3_with_bucket is also included.

I need to filter out encryptedFolder/* and encryptedFolder2/* coming from s3_with_bucket, as these locations are already decrypted and handled by vaults:, which combines encrypted: and encrypted2:.

As far as I can tell, it is not possible to define filters at the remote level inside rclone.conf, which would come in really handy here.

Is there any other approach that can be used to provide such unified view?

===
Using latest Rclone:

rclone v1.70.3
- os/version: ubuntu 25.04 (64 bit)
- os/kernel: 6.14.0-24-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.24.4
- go/linking: static
- go/tags: none

You might try using alias remotes that include the folder path, in addition to the bucket, and a combine remote.

remote = s3_upstream:bucket/folder
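
If I understand the suggestion, it might look roughly like this (the plain_folder and combined remote names are made up; encrypted: and encrypted2: are the crypt remotes from the config above):

[plain_folder]
type = alias
remote = s3_upstream:bucket/folder

[combined]
type = combine
upstreams = "folder=plain_folder:" "encryptedFolder=encrypted:" "encryptedFolder2=encrypted2:"

Note that combine can only map each upstream into a sub-directory, so a root-level file such as file.txt would still need a mapping of its own.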

Thanks!

I need to display the root listing as it is, unencrypted, without transformations. But since the root listing contains encrypted locations (sometimes called vaults), I don't want to show the encrypted paths inside them; instead I want to replace them with the decrypted ones.

The alias solution you suggested points to a specific folder, but it seems that I need to point to the root location, unless I've misunderstood your suggestion.

It's also possible to narrow down a specific folder inside combine, upstreams = "listingFolder=s3_upstream:bucket/folder", which is similar to the alias approach, but I don't think this solves anything.

There are more options:

  • try filters such as --exclude
  • create S3 IAM users and set S3 bucket policies and permissions. then create a remote for each user and one combine remote.
  • re-organize the directory structure to avoid this issue.
  • use multiple buckets: one for crypted and one for non-crypted files.
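
As a sketch of the bucket-policy idea (the account ID, user name, and bucket name are placeholders), the restricted user could be denied access to the crypted prefixes:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/plain-reader"},
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucket",
      "Condition": {
        "StringLike": {"s3:prefix": ["encryptedFolder/*", "encryptedFolder2/*"]}
      }
    },
    {
      "Effect": "Deny",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/plain-reader"},
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::bucket/encryptedFolder/*",
        "arn:aws:s3:::bucket/encryptedFolder2/*"
      ]
    }
  ]
}

The folder names themselves can still appear as common prefixes when listing the bucket root, so this alone does not remove them from the root listing.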

Yes, these are the options, but it seems that none of them allows displaying data from one location (e.g. A) while overlaying specific encrypted locations on a desired path. This is purely a UI requirement: think of Dropbox where some folders are actually pointers to a crypt, in such a way that the user isn't really aware of it.
Displaying data as two top-level folders, e.g. all and vault (which is what's currently possible with combine), isn't really UI friendly.

Theoretically it would have been possible to use --exclude if the vault (in other words, the encrypted folder) name were itself encrypted; in that case it could be excluded at the root level. But this approach isn't compatible with some other providers that use the vault approach, e.g. Koofr.

It sounds like I may need to experiment with setting filters at the remote level (hopefully without too much hacking), unless there is some other obvious solution that I am missing.

The alternative would be to add support in combine for displaying data from a desired remote not in a sub-folder (the default) but in the root directory, with subsequent upstreams overlaid on top of it; e.g. a rule could look like: /=remote:path encrypted=remote2:path
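
A hypothetical config for that proposal (this syntax does not exist in rclone as of v1.70; it is only the idea above) could look like:

[overlay]
type = combine
upstreams = "/=s3_with_bucket:" "encryptedFolder=encrypted:" "encryptedFolder2=encrypted2:"

combine would then serve the root of s3_with_bucket: directly, with the two crypt remotes shadowing the encryptedFolder and encryptedFolder2 directories.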

Thanks for your help so far.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.