Entry doesn't belong in directory "" (same as directory) - ignoring

What is the problem you are having with rclone?

ERROR : : Entry doesn't belong in directory "" (same as directory) - ignoring

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.1

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.4.0-122-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.18.5
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using?

HPE Scality

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone lsf scality-PROVA1:bck.storage.cloud/linuxhosting3/

A log from the command with the -vv flag

2022/09/28 16:25:07 DEBUG : rclone: Version "v1.59.1" starting with parameters ["rclone" "lsf" "-vv" "--dump" "bodies" "scality-PROVA1:bck.storage.cloud/linuxhosting/"]
2022/09/28 16:25:07 DEBUG : Creating backend with remote "scality-PROVA1:bck.storage.cloud/linuxhosting/"
2022/09/28 16:25:07 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/09/28 16:25:07 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2022/09/28 16:25:07 DEBUG : Using v2 auth
2022/09/28 16:25:07 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2022/09/28 16:25:07 DEBUG : fs cache: renaming cache item "scality-PROVA1:bck.storage.cloud/linuxhosting/" to be canonical "scality-PROVA1:bck.storage.cloud/linuxhosting"
2022/09/28 16:25:07 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/09/28 16:25:07 DEBUG : HTTP REQUEST (req 0xc00098e400)
2022/09/28 16:25:07 DEBUG : GET /bck.storage.cloud?delimiter=%2F&max-keys=1000&prefix=linuxhosting3%2F HTTP/1.1
Host: bck1.storage.cloud
User-Agent: rclone/v1.59.1
Authorization: XXXX
Date: Wed, 28 Sep 2022 14:25:07 UTC
Accept-Encoding: gzip

2022/09/28 16:25:07 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/09/28 16:25:07 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/09/28 16:25:07 DEBUG : HTTP RESPONSE (req 0xc00098e400)
2022/09/28 16:25:07 DEBUG : HTTP/1.1 200 OK
Content-Length: 613
Cache-Control: no-cache
Content-Type: application/xml
Date: Wed, 28 Sep 2022 14:25:07 GMT
Server: RestServer/1.0

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>bck.storage.cloud</Name><Prefix>linuxhosting3/</Prefix><Marker></Marker><MaxKeys>1000</MaxKeys><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated><Contents><Key>linuxhosting/</Key><LastModified>2014-04-21T22:48:20.000Z</LastModified><ETag>&quot;cfcd208495d565ef66e7dff9f98764da&quot;</ETag><Size>1</Size><Owner><ID>281A811B6FEFBE2E281A81000000004000000140</ID><DisplayName>display-PROVA1</DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents></ListBucketResult>
2022/09/28 16:25:07 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/09/28 16:25:07 ERROR : : Entry doesn't belong in directory "" (same as directory) - ignoring
2022/09/28 16:25:07 DEBUG : 4 go routines active

Hi Sem,

Appreciate the minimal debug log with --dump bodies, thanks!

I suspect a path with a double // or /? like in this post:

Or a leading / like in this post

If not, then we probably need to narrow in on the path/file causing the issue. Are you able to list with

rclone lsd scality-PROVA1:bck.storage.cloud/linuxhosting3/

Also please post the redacted output from

rclone config show scality-PROVA1:

Hi @Ole

I just executed the command you sent me:

rclone lsd scality-PROVA1:bck.storage.cloud/linuxhosting3/

The output is:

ERROR : : Entry doesn't belong in directory "" (same as directory) - ignoring

And the output of the second command is:

[scality-PROVA1]
type = s3
provider = Other
access_key_id = PROVA1
secret_access_key = #######
region = other-v2-signature
endpoint = http://bck1.storage.cloud
acl = private

I want to let you know that rclone has actually worked perfectly for a large amount of data we already transferred, so I think the configuration is all correct.

Another interesting output is the result of this command

rclone ls scality-PROVA1:bck.storage.cloud/

Output:

        1 linuxhosting/
        1 linuxhosting3/
   328876 linuxhosting3/test_140.tar
        1 winhosting/
        0 winhosting/backup_140.zip

So it seems there is no path with a double // or a /?, and no path with a leading / either.
So strange.

What about the trailing slashes on some of the entries, is that normal?
(I don't see them when I do lsd on my remotes, e.g. SFTP, Drive, OneDrive.)

I have never tried a bucket-based remote, so please excuse me if I'm asking dumb questions.


No problem at all, you raised a very interesting question.
Actually, "ls" should not list a folder on its own and then the folder+filename again.

Let me explain it better, the command:

rclone ls scality-PROVA1:bck.storage.cloud/

Should output something like this:

        1 linuxhosting/
   328876 linuxhosting3/test_140.tar
        0 winhosting/backup_140.zip

What it actually does is also print each folder as a separate entry:

        1 linuxhosting/
        1 linuxhosting3/
   328876 linuxhosting3/test_140.tar
        1 winhosting/
        0 winhosting/backup_140.zip

So @Ole, do you think some software could have messed up the bucket during the file upload? It could be.

It is my understanding that there is no well-defined standard for directory markers in bucket-based systems.

So I would hesitate to say something messed up - just say that it may have used a standard incompatible with rclone.

@asdffdsa @Animosity022 Do you know what rclone accepts as (third party) directory markers?


This is likely a folder marker, which is a (usually) zero-sized file used to mark a folder. It's the only way to make empty folders on S3, but the use isn't standardised and rclone doesn't create them.

Rclone is happy to skip empty directory markers, but since yours apparently have 1 byte in them, rclone isn't ignoring them (as they might contain actual data you want to keep).

You can fix this by removing the directory markers, or by replacing them with 0-sized files.
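A hypothetical cleanup sketch, assuming the markers are the 1-byte keys shown in the `ls` output above (the exact paths and whether `deletefile` accepts a trailing slash on your remote should be verified on a test key first):

```shell
# Sketch only - remote and paths are taken from this thread, verify before running.

# Option 1: delete the 1-byte directory-marker objects outright:
rclone deletefile scality-PROVA1:bck.storage.cloud/linuxhosting/
rclone deletefile scality-PROVA1:bck.storage.cloud/linuxhosting3/
rclone deletefile scality-PROVA1:bck.storage.cloud/winhosting/

# Option 2: recreate a marker as a 0-sized object instead (delete first,
# since rclone touch only updates the modtime of an existing file):
rclone deletefile scality-PROVA1:bck.storage.cloud/linuxhosting3/
rclone touch scality-PROVA1:bck.storage.cloud/linuxhosting3/
```

Either way, a subsequent `rclone lsf`/`rclone lsd` on the bucket should no longer trip over the markers.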


Thank you @Ole and @ncw for your very useful help, I'll try it out and let you know!


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.