ERROR : : Entry doesn't belong in directory "" (same as directory)

What is the problem you are having with rclone?

I am the maintainer of https://r-universe.dev and I am trying to support a very basic S3-compatible API to let users easily mirror their files. It is read-only and needs no authentication; it is really just the bare minimum of list and download.

I have implemented ListObjectsV2 for example: https://jeroen.r-universe.dev/?list-type=2

From trial and error I found that rclone also needs: https://jeroen.r-universe.dev/?x-id=ListBuckets but I am not sure why this is, or whether I implemented it correctly. I simply have a single bucket per subdomain, served on the root path.
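For anyone attempting something similar, here is a minimal sketch of what those two calls need to return, assuming an Express server (the bucket name, object index, and port are made up for illustration, and delimiter/CommonPrefixes handling is omitted):

import express from "express";

const app = express();

// Made-up in-memory object index; a real server would consult its storage.
const objects = [
  { key: "src/contrib/PACKAGES.gz", size: 1990, modified: "2025-03-08T00:00:00.000Z" },
];

const xml = (body: string) => `<?xml version="1.0" encoding="UTF-8"?>${body}`;

app.get("/", (req, res) => {
  res.type("application/xml");
  if (req.query["list-type"] === "2") {
    // ListObjectsV2: return every key under the requested prefix.
    const prefix = String(req.query.prefix ?? "");
    const contents = objects
      .filter((o) => o.key.startsWith(prefix))
      .map((o) => `<Contents><Key>${o.key}</Key><Size>${o.size}</Size><LastModified>${o.modified}</LastModified></Contents>`)
      .join("");
    res.send(xml(`<ListBucketResult><Name>jeroen</Name><Prefix>${prefix}</Prefix><IsTruncated>false</IsTruncated>${contents}</ListBucketResult>`));
  } else {
    // ListBuckets (the GET /?x-id=ListBuckets request): one bucket per subdomain.
    res.send(xml(`<ListAllMyBucketsResult><Buckets><Bucket><Name>jeroen</Name></Bucket></Buckets></ListAllMyBucketsResult>`));
  }
});

app.listen(3000);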

A simple ls and also lsl seem to work:

rclone ls :s3,provider=Other,list_version=2,endpoint=jeroen.r-universe.dev:

However, if I try rclone lsd or copy, I get:

rclone lsd :s3,provider=Other,list_version=2,endpoint=jeroen.r-universe.dev:
# 2025/03/08 14:31:38 ERROR : : Entry doesn't belong in directory "" (same as directory) - ignoring

My API must be doing something wrong, but I can't figure out what it is.

Run the command 'rclone version' and share the full output of the command.

rclone version
rclone v1.69.1
- os/version: darwin 15.3.1 (64 bit)
- os/kernel: 24.3.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.24.0
- go/linking: dynamic
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

My own; I am trying to implement a minimal S3-compatible API.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone lsd :s3,provider=Cloudflare,endpoint=jeroen.r-universe.dev:

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[jeroen]
type = s3
provider = Other
list_version = 2
endpoint = jeroen.r-universe.dev

A log from the command that you were trying to run with the -vv flag

2025/03/08 14:35:26 DEBUG : rclone: Version "v1.69.1" starting with parameters ["rclone" "-vv" "lsd" ":s3,provider=Cloudflare,endpoint=jeroen.r-universe.dev:"]
2025/03/08 14:35:26 DEBUG : Creating backend with remote ":s3,provider=Cloudflare,endpoint=jeroen.r-universe.dev:"
2025/03/08 14:35:26 DEBUG : Using config file from "/Users/jeroen/.config/rclone/rclone.conf"
2025/03/08 14:35:26 DEBUG : :s3: detected overridden config - adding "{X99tj}" suffix to name
2025/03/08 14:35:26 DEBUG : Using anonymous credentials - did you mean to set env_auth=true?
2025/03/08 14:35:26 DEBUG : fs cache: renaming cache item ":s3,provider=Cloudflare,endpoint=jeroen.r-universe.dev:" to be canonical ":s3{X99tj}:"
2025/03/08 14:35:26 ERROR : : Entry doesn't belong in directory "" (same as directory) - ignoring
2025/03/08 14:35:26 DEBUG : 5 go routines active

welcome to the forum,

might try --s3-no-check-bucket
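for example:

rclone copy --s3-no-check-bucket :s3,provider=Other,list_version=2,endpoint=jeroen.r-universe.dev: /tmp/test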


https://rclone.org/commands/rclone_serve_s3/


for a deeper look at the api calls, use --dump flags such as --dump=headers
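for instance:

rclone lsd -vv --dump=headers :s3,provider=Other,list_version=2,endpoint=jeroen.r-universe.dev: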

For some reason it does work if I treat it as a virtual-host-style bucket r-universe.dev:jeroen instead of a path-style bucket jeroen.r-universe.dev:. So this gets me closer:

rclone copy :s3,provider=Other,list_version=2,force_path_style=false,endpoint=r-universe.dev:jeroen test
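As I understand it, the difference is where the bucket name goes in the request URL: path style puts it as the first path segment, while virtual-host style puts it as a subdomain of the endpoint. So the two forms request roughly:

GET https://r-universe.dev/jeroen/src/contrib/PACKAGES.gz (path style)
GET https://jeroen.r-universe.dev/src/contrib/PACKAGES.gz (virtual-host style)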

Now I run into another problem: for some files in my list, the size is unknown. The ListObjectsV2 API spec says Size is not required.

However rclone complains when it tries to download these files. For example if we run:

rclone copy :s3,provider=Other,list_version=2,force_path_style=false,endpoint=r-universe.dev:jeroen/src/ test
# ERROR : contrib/PACKAGES.ac4748ef.partial: corrupted on transfer: sizes differ src(S3 bucket jeroen path src) 0 vs dst(Local file system at /Users/jeroen/workspace/r-universe/express-frontend/test) 5259
# ERROR : contrib/PACKAGES.gz.ac4748ef.partial: corrupted on transfer: sizes differ src(S3 bucket jeroen path src) 0 vs dst(Local file system at /Users/jeroen/workspace/r-universe/express-frontend/test) 1990

Is there something I can do on the server or client side to support listing files with unknown size?

maybe try --ignore-size
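or, server side, avoid the mismatch by always emitting a real Size in the listing. a made-up node sketch of the idea:

import { statSync } from "node:fs";

// stat the file so the listing reports its true size instead of omitting it
const size = statSync("/data/jeroen/src/contrib/PACKAGES.gz").size;
console.log(`<Contents><Key>src/contrib/PACKAGES.gz</Key><Size>${size}</Size></Contents>`);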

OK thank you, I can confirm this long command now works:

rclone copy --ignore-size :s3,provider=Other,list_version=2,force_path_style=false,endpoint=r-universe.dev:jeroen ./jeroen

However I can't make it work in a config file. Using exactly these values:

[jeroen]
type = s3
provider = Other
list_version = 2
force_path_style = false
endpoint = r-universe.dev
ignore_size = 1

But the force_path_style and ignore_size options do not seem to do anything here?

How do I specify the equivalent of the command-line virtual-host endpoint=r-universe.dev:jeroen in the config?

global flags cannot go into the config file.


[s3]
type = s3
provider = Other
list_version = 2
force_path_style = false
endpoint = r-universe.dev

[jeroen]
type = alias
remote = s3:jeroen 

and then use it:
rclone lsd jeroen:

This has to do with how prefixes are returned in the listing.
I suggest creating a bucket with the same objects in an S3-compatible service, listing the objects, and then comparing it to what is displayed in your implementation.
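For example, rclone lsd at the root issues ListObjectsV2 with delimiter=/ and an empty prefix, and expects each subdirectory back as a CommonPrefixes element. A response shaped roughly like this (prefix names made up) should list cleanly:

<ListBucketResult>
  <Prefix></Prefix>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <CommonPrefixes><Prefix>src/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>bin/</Prefix></CommonPrefixes>
</ListBucketResult>

Returning an entry that resolves to the directory being listed itself, such as an empty <Key></Key> or a <Prefix></Prefix> equal to the requested prefix, is what triggers the Entry doesn't belong in directory "" (same as directory) message.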
