Rclone ls encrypted:subdir does not list files in subdir

What is the problem you are having with rclone?

My rclone version is 1.72.1 (non-beta). I have a generic S3 backend remote configured called “unencrypted”. It uses the default bucket and appears to work well; i.e. rclone ls unencrypted: and rclone ls unencrypted:default both output the expected filenames and bucket-name/subdirectory structure (see below).

Running on top of that remote, I have configured a crypt remote called “encrypted”. It points to a subdirectory of the s3 root “default” bucket called “rcrypt”. I successfully copied some testfiles and subdirectories/testfiles into the crypt.

I can run rclone ls encrypted: and all the files and the directory structure are output as expected, nested in their subdirectories [see output below]. However, when I run rclone ls encrypted:test2 (for example), rclone ls returns no files; just a blank line. Note that the subdirectory test2 is non-empty; it contains testfile.txt.

Debug output with -vv shows that the subdirectory seems to be mapped correctly, but the subdirectory/filename seems to be filtered out?

Command output

With the subdirectory specified, rclone ls outputs only a blank line. But I would expect it to output “18770 test2/testfile.txt” or “18770 testfile.txt”. I’ve read the manual, and this behavior appears to be incorrect. Am I wrong?

rclone ls encrypted:test2

vs.

rclone ls encrypted:
    18770 test2/testfile.txt
    18770 test/testfile.txt
    18770 testfile.txt

vs.

rclone ls unencrypted:
    18818 default/rcrypt/1qvcr5qv9m848sccv12hheasq8/c5uvdig5jduva26dp7kor0r8v0
    18818 default/rcrypt/1brcrj3hpmk0roddahrlhcrt6o/c5uvdig5jduva26dp7kor0r8v0
    18818 default/rcrypt/c5uvdig5jduva26dp7kor0r8v0
1073741824 default/testingthecli/3testfile.dat
       36 default/testingthecli/littletext.txt
1073741824 default/3testfile2.dat
       36 default/littletext.txt 

Version command output

rclone version --check
yours:  1.72.1
latest: 1.72.1                                   (released 2025-12-10)
beta:   1.73.0-beta.9391.9ec918f13               (released 2026-01-14)
  upgrade: https://beta.rclone.org/v1.73.0-beta.9391.9ec918f13

When running the command with the -vv flag

rclone -vv ls encrypted:test2
2026/01/15 09:36:04 DEBUG : rclone: Version "v1.72.1" starting with parameters ["rclone" "-vv" "ls" "encrypted:test2"]
2026/01/15 09:36:04 DEBUG : Creating backend with remote "encrypted:test2"
2026/01/15 09:36:04 DEBUG : Using config file from "/home/username/.config/rclone/rclone.conf"
2026/01/15 09:36:04 DEBUG : Creating backend with remote "unencrypted:default/rcrypt/1qvcr5qv9m848sccv12hheasq8"
2026/01/15 09:36:05 DEBUG : fs cache: renaming child cache item "unencrypted:default/rcrypt/1qvcr5qv9m848sccv12hheasq8" to be canonical for parent "unencrypted:default/rcrypt"
2026/01/15 09:36:05 DEBUG : fs cache: renaming child cache item "encrypted:test2" to be canonical for parent "encrypted:"
2026/01/15 09:36:05 DEBUG : test: Excluded
2026/01/15 09:36:05 DEBUG : test2: Excluded
2026/01/15 09:36:05 DEBUG : default: Excluded
2026/01/15 09:36:05 DEBUG : rcrypt: Excluded
2026/01/15 09:36:05 DEBUG : testfile.txt: Excluded (FilesFrom Filter)
2026/01/15 09:36:05 DEBUG : testfile.txt: Excluded
2026/01/15 09:36:05 DEBUG : 6 go routines active

(edited for clarity of presentation)

Please post the output of rclone config redacted.


FWIW, create a new bucket for the crypted files.

rclone config redacted
[unencrypted]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
endpoint = http://127.0.0.1:8089

[encrypted]
type = crypt
remote = unencrypted:default/rcrypt/
password = XXX

### Double check the config for sensitive info before posting publicly

Note that I am using a locally hosted S3 endpoint to communicate with the cloud provider. It is a Python implementation of the S3 protocol.

Note 2: I corrected the provider type to Other; it had read Ceph. I observed no difference in the rclone ls encrypted: behavior, even after deleting and then recreating the remotes, buckets, subdirectories and files.

If I am correct about how rclone should work, it may be a bug in the program’s implementation of the S3 protocol.

If you use a bucket-based storage system, it is generally advisable to wrap the crypt remote around a specific bucket.
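For example, a crypt remote wrapped around its own dedicated bucket would look like this in rclone.conf (the bucket name “cryptbucket” is hypothetical; create it first with something like rclone mkdir unencrypted:cryptbucket):

```ini
[unencrypted]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
endpoint = http://127.0.0.1:8089

[encrypted]
type = crypt
# point the crypt at a dedicated bucket rather than a
# subdirectory of a shared bucket
remote = unencrypted:cryptbucket
password = XXX
```

This keeps encrypted objects separated from plaintext ones and avoids ambiguity between bucket names and object prefixes.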

I tried recreating the remote and crypted remote; no observable difference.

Following up on your comment about wrapping the crypt around its own S3 bucket: I tried creating a bucket via rclone mkdir unencrypted:testrcrypt, but I still had to configure the crypt remote as “unencrypted:default/testrcrypt” to get it to work. The error when “default/” is omitted is “NOTICE: Failed to ls: directory not found”.

I retested this by recreating the remotes and file structure on another S3 cloud provider (FileLu’s S5). rclone ls filelu-s3-encrypted:test2 worked as expected, returning a listing of one file. To narrow this down further, I changed the “provider” key to Other, and then to Ceph. In all three cases the rclone ls filelu-s3-encrypted:test2 command worked as expected.

I now suspect the problem is most likely in the Python implementation of the S3 protocol I am using. It may be incomplete, or unable to create or work with multiple S3 buckets.

Thanks for your help.

Ultimately I was able to confirm and fix this issue. Documenting it here for future reference: it was not due to the S3 server script lacking support for creating a separate S3 bucket specifically for use with rclone crypt, as suggested above.

Instead, it turned out to be a faulty S3 protocol implementation in the Python S3-compatible endpoint server script. Essentially, the script did not return a 404 status for a HEAD request on a subdirectory (which is what distinguishes a directory prefix from a file). With that bug corrected, rclone was able to resolve the subdirectory as an object prefix and proceed with requests for the filenames under it.
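As a sketch of the fix (illustrative names only; the real server script differs, but the essential rule is the same): a HEAD request on a key that is only a prefix of other objects must answer 404, not 200.

```python
# Hypothetical in-memory object store: one real object, nested under
# "directory" prefixes, as in the listings above.
object_store = {
    "default/rcrypt/1qvcr5qv9m848sccv12hheasq8/c5uvdig5jduva26dp7kor0r8v0": b"...",
}

def head_object(key: str) -> int:
    """Return the HTTP status code the server should send for HEAD on `key`."""
    if key in object_store:
        return 200  # exact object key exists
    # The buggy server answered 200 here whenever `key` was a prefix of
    # some stored object. Per the S3 protocol it must answer 404, which
    # is what tells a client like rclone to treat the path as a
    # directory prefix and fall back to a prefixed LIST request.
    return 404
```

With this behavior, HEAD on “default/rcrypt/1qvcr5qv9m848sccv12hheasq8” returns 404, so rclone then lists objects under that prefix instead of treating the path as a single file.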

Moreover, this bug caused the rclone ls encrypted:subdirectory command pattern to fail, but it only appeared with a crypt remote precisely because crypt remotes rely on stricter S3 protocol behavior in order to decrypt the names of subdirectories. (Note that this implies the problem may be restricted to cases where the crypt remote encrypts directory names. I did not test this hypothesis, as encrypting directory names was my preferred use case.)
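For anyone wanting to test that hypothesis: directory name encryption is controlled by the crypt remote’s directory_name_encryption option, which defaults to true. A crypt section with it disabled (hypothetical remote name) would look like:

```ini
[encrypted-plain-dirs]
type = crypt
remote = unencrypted:default/rcrypt/
password = XXX
# keep file names encrypted but leave directory names in the clear
directory_name_encryption = false
```

If the bug really hinges on encrypted directory names, a remote configured this way should list subdirectories correctly even against the unfixed server.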

A similar bug may affect other Python S3 endpoint server scripts.


I’m a bit unclear about this, so I apologize for asking. You wrote: "Instead, it turned out to be a subpar S3 protocol implementation in the Python S3-compatible endpoint provider client script."

This isn’t part of rclone’s code, is it? It refers to the code on "the other side" that rclone was attempting to communicate with, suggesting that rclone’s S3 protocol is functioning correctly?

Correct. This turned out not to be an issue with the rclone codebase itself.

It is an issue within the S3 provider script, which runs a local S3 implementation on an HTTP server (cheroot). When rclone asks the local server for a listing of a specific subdirectory, the server fails to behave as the S3 protocol specifies, and that causes rclone to appear to fail.

This is a generic Python script with many variations on GitHub and elsewhere, and I am still working to get the fix into the upstream codebase. It appears to be used by multiple cloud storage providers; I’m not sure how many of them are affected.

But it is important to document here so others might be pointed in the correct direction if rclone (or its GUIs or other dependent apps) appears to fail for them. For example, imagine that an rclone GUI reports a directory as empty when the user changes into it, but it is not... is the problem in the GUI, in rclone, or in the local S3 server?
