Empty Directories Remain

What is the problem you are having with rclone?

After moving and organizing directories on my source and using rclone sync to synchronize the data to my S3 drive, old empty directories remain. I have also run the rmdirs command after the sync, and while the -vv log lists every directory the command removed, not a single one is actually deleted. Each time, the command "deletes" the same directories without error, but the directories remain.

What is your rclone version (output from rclone version)

rclone v1.56.0

  • os/version: Microsoft Windows 10 Home 2009 (64 bit)
  • os/kernel: 10.0.19042.1165 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.16.5
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

IDrive S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

.\rclone.exe rmdirs chrisidrive:chrisidrive/ -P -vv

The rclone config contents with secrets removed.

[chrisidrive]
type = s3
provider = Other
access_key_id =
secret_access_key =
region = us-east-1
endpoint = s3.us-east-1.idrivecloud.io
location_constraint = us-east-1

A log from the command with the -vv flag

PS C:\Program Files\Rclone> .\rclone.exe rmdirs chrisidrive:chrisidrive/ -P -vv
2021/08/28 18:55:58 DEBUG : rclone: Version "v1.56.0" starting with parameters ["C:\\Program Files\\Rclone\\rclone.exe" "rmdirs" "chrisidrive:chrisidrive/" "-P" "-vv"]
2021/08/28 18:55:58 DEBUG : Creating backend with remote "chrisidrive:chrisidrive/"
2021/08/28 18:55:58 DEBUG : Using config file from "C:\\Program Files\\Rclone\\rclone.conf"
2021/08/28 18:55:58 DEBUG : fs cache: renaming cache item "chrisidrive:chrisidrive/" to be canonical "chrisidrive:chrisidrive"
...
2021-08-28 18:58:06 DEBUG : Documents/Blog/Welcome To Haiti: Removing directory
2021-08-28 18:58:06 DEBUG : Documents/Blog/Spam: Removing directory
2021-08-28 18:58:06 DEBUG : Documents/Blog/Financial: Removing directory
2021-08-28 18:58:06 DEBUG : Documents/Blog/Blog 5 - Hands Together/Not Used: Removing directory
2021-08-28 18:58:06 DEBUG : Documents/Blog/Blog 5 - Hands Together: Removing directory
2021-08-28 18:58:06 DEBUG : Documents/Blog/Blog 4: Removing directory
2021-08-28 18:58:06 DEBUG : Documents/Blog/Blog 3: Removing directory
2021-08-28 18:58:06 DEBUG : Documents/Blog/Blog 2: Removing directory
2021-08-28 18:58:06 DEBUG : Documents/Blog/Art: Removing directory
2021-08-28 18:58:06 DEBUG : Documents/Blog: Removing directory
Transferred:              0 / 0 Byte, -, 0 Byte/s, ETA -
Deleted:                0 (files), 306 (dirs)
Elapsed time:       2m8.6s
2021/08/28 18:58:06 DEBUG : 18 go routines active

Hi Christopher,

Good post!

Let’s see if we can narrow down the issue to a single/few folders (and rmdir instead of rmdirs).

What do you see if you perform these commands:
(Caution: please make sure the commands do not delete something they shouldn't)

rclone lsd chrisidrive:chrisidrive/Documents/Blog
rclone lsf chrisidrive:chrisidrive/Documents/Blog/Spam
rclone rmdirs chrisidrive:chrisidrive/Documents/Blog
rclone rmdir chrisidrive:chrisidrive/Documents/Blog/Spam
rclone lsf chrisidrive:chrisidrive/Documents/Blog/Spam
rclone lsd chrisidrive:chrisidrive/Documents/Blog

You can probably follow my approach; feel free to make it more specific if you can.

@Ole thanks for your quick reply. Below is the output from running those commands. Neither rmdirs nor rmdir had any effect; neither removed any directories.

PS C:\Program Files\Rclone> .\rclone lsd chrisidrive:chrisidrive/Documents/Blog
           0 2021-08-29 12:05:58        -1 Art
           0 2021-08-29 12:05:58        -1 Blog 2
           0 2021-08-29 12:05:58        -1 Blog 3
           0 2021-08-29 12:05:58        -1 Blog 4
           0 2021-08-29 12:05:58        -1 Blog 5 - Hands Together
           0 2021-08-29 12:05:58        -1 Financial
           0 2021-08-29 12:05:58        -1 Spam
           0 2021-08-29 12:05:58        -1 Welcome To Haiti
PS C:\Program Files\Rclone> .\rclone lsf chrisidrive:chrisidrive/Documents/Blog/Spam
PS C:\Program Files\Rclone> .\rclone rmdirs chrisidrive:chrisidrive/Documents/Blog
PS C:\Program Files\Rclone> .\rclone rmdir chrisidrive:chrisidrive/Documents/Blog/Spam
PS C:\Program Files\Rclone> .\rclone lsf chrisidrive:chrisidrive/Documents/Blog/Spam
PS C:\Program Files\Rclone> .\rclone lsd chrisidrive:chrisidrive/Documents/Blog
           0 2021-08-29 12:07:17        -1 Art
           0 2021-08-29 12:07:17        -1 Blog 2
           0 2021-08-29 12:07:17        -1 Blog 3
           0 2021-08-29 12:07:17        -1 Blog 4
           0 2021-08-29 12:07:17        -1 Blog 5 - Hands Together
           0 2021-08-29 12:07:17        -1 Financial
           0 2021-08-29 12:07:17        -1 Spam
           0 2021-08-29 12:07:17        -1 Welcome To Haiti

I can successfully remove directories with other tools like Cyberduck or the account's web interface. Using the Windows Explorer GUI on an rclone mount, or the rclone command line, I cannot remove these directories. With the GUI, the directories disappear (from Windows Explorer) for a few minutes and then reappear.

Thanks for testing and the additional, very valuable input; it sure looks like a bug in rclone, IDrive S3, or the combination.

Before collecting additional debug information (about the API calls), I would like to check if this can be reproduced on other S3 servers too.

@asdffdsa Are you able to quickly test/verify the correct functioning of rclone rmdir (and rmdirs) on another S3 server (e.g. Amazon and/or Wasabi)?

note, it is hard to be humorous in a post, but i am going to try...

i would like to quickly test but it is kind of a catch-33.

on the one hand, rclone rmdirs will only remove an empty set of dirs.
on the second hand, rclone mkdir, on s3, will not create an empty dir.
on my third hand, and NO, that is not what you are thinking it is :wink: i am using a famous sci-fi reference, and a free virtual :beer: for whomever can tell me the source of that reference.

so, as i was writing, before i was rudely self-interrupted, it is possible to have an empty dir on s3.
so using my third hand, i created an empty dir.

i tried and failed to delete the empty dir
on the one hand, rclone rmdirs
on the second hand, rclone delete --rmdirs
on the third hand, rclone purge

here is the debug log, lightly edited

rclone lsd -R wasabi01:testfolder01 
           0 2021-08-29 18:25:30        -1 01

rclone ls wasabi01:testfolder01 

rclone rmdirs wasabi01:testfolder01 --retries=1 -vv 
DEBUG : rclone: Version "v1.56.0" starting with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "rmdirs" "wasabi01:testfolder01" "--retries=1" "-vv"]
DEBUG : 01: Removing directory
DEBUG : S3 bucket testfolder01: Removing directory
ERROR : : Failed to rmdir: BucketNotEmpty: The bucket you tried to delete is not empty
	status code: 409, request id: 53F73D404C247EE9, host id: cpY74hhHD/OWOUHMIik78UM9N7conGTIFX0im2MopJ8UZJfEIM0HUd8A2sXBOEuAaJ4u09iwSKtr

rclone delete --rmdirs wasabi01:testfolder01 --retries=1 -vv 
DEBUG : rclone: Version "v1.56.0" starting with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "delete" "--rmdirs" "wasabi01:testfolder01" "--retries=1" "-vv"]
DEBUG : Waiting for deletions to finish
DEBUG : 01: Removing directory

rclone purge wasabi01:testfolder01 --retries=1 -vv 
DEBUG : rclone: Version "v1.56.0" starting with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "purge" "wasabi01:testfolder01" "--retries=1" "-vv"]
DEBUG : Waiting for deletions to finish
DEBUG : 01: Removing directory
DEBUG : S3 bucket testfolder01: Removing directory
ERROR : : Failed to rmdir: BucketNotEmpty: The bucket you tried to delete is not empty
	status code: 409, request id: FD7B47416F7D9035, host id: q9/VvfvG0e6ZQHNGgA2jW63MqNxcg8fnSnXS4AtG3wNInknw8+dN0IoWrL6e7A1U0aCw4Er+jP6R

rclone lsd -R wasabi01:testfolder01 
           0 2021-08-29 18:25:32        -1 01

i had a long conversation with ncw about rclone allowing the creation of an empty dir, but as you can see from this log, rclone can sometimes be cruel with its output.

rclone mkdir wasabi01:testfolder01/pretty.please.with.sugar.on.top.allow.me.to.create.an.empty.folder -vv 
DEBUG : rclone: Version "v1.56.0" starting with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "mkdir" "wasabi01:testfolder01/pretty.please.with.sugar.on.top.allow.me.to.create.an.empty.folder" "-vv"]
NOTICE: S3 bucket testfolder01 path pretty.please.with.sugar.on.top.allow.me.to.create.an.empty.folder: Warning: running mkdir on a remote which can't have empty directories does nothing
DEBUG : S3 bucket testfolder01 path pretty.please.with.sugar.on.top.allow.me.to.create.an.empty.folder: Making directory
INFO  : S3 bucket testfolder01 path pretty.please.with.sugar.on.top.allow.me.to.create.an.empty.folder: Bucket "testfolder01" created with ACL "private"

rclone lsd wasabi01:testfolder01 

Thanks a lot, very good explanation, examples, and humour :joy:

The search I should have done before asking shows this is a long-standing rclone (discussion) issue:

s3: Support empty "directory" keys #753
s3: create zero length files to mark directories #2505
Bucket Based Remotes: empty directories #3453
s3: Make rclone purge delete directory markers #4779

Let’s hope it gets fixed soon.

There is just one thing that doesn’t add up for me:

@asdffdsa gets errors when trying to delete higher level folders. That is, deleting testfolder01 results in an ERROR because it isn’t empty; it still contains folder 01.

@wanderer has no errors when trying to delete higher level folders. I would expect an ERROR when trying to delete Documents/Blog if Documents/Blog/Spam wasn’t deleted.

What am I missing?

  • in the OP example of rclone rmdirs, he/she/it was trying to delete an empty dir that was a subdir of another dir.
  • in my example, testfolder01 is a bucket, not a dir; perhaps i should have called it testbucket01.
    that 409 is from wasabi. in s3, a bucket is different from a folder.
2021/08/30 09:57:06 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/08/30 09:57:06 DEBUG : 01: Removing directory
2021/08/30 09:57:06 DEBUG : S3 bucket testfolder01: Removing directory
2021/08/30 09:57:06 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/08/30 09:57:06 DEBUG : HTTP REQUEST (req 0xc00072d300)
2021/08/30 09:57:06 DEBUG : DELETE /testfolder01 HTTP/1.1
Host: s3.us-east-2.wasabisys.com
User-Agent: rclone/v1.56.0
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20210830T135706Z
Accept-Encoding: gzip

2021/08/30 09:57:06 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/08/30 09:57:06 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/08/30 09:57:06 DEBUG : HTTP RESPONSE (req 0xc00072d300)
2021/08/30 09:57:06 DEBUG : HTTP/1.1 409 Conflict
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Mon, 30 Aug 2021 13:57:07 GMT
Server: WasabiS3/7.0.176-2021-07-18-7900366 (head2)
X-Amz-Id-2: 8WifyplWAnRenXrYotjRcE+Mo2GiXnD4SM2yfITIJ4qAx+7LfkMkKxMHodQP99ypUqCSQKATB4T7
X-Amz-Request-Id: 630EEF8E78DEAC80

138
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>BucketNotEmpty</Code><Message>The bucket you tried to delete is not empty</Message><BucketName>testfolder01</BucketName><RequestId>630EEF8E78DEAC80</RequestId><HostId>8WifyplWAnRenXrYotjRcE+Mo2GiXnD4SM2yfITIJ4qAx+7LfkMkKxMHodQP99ypUqCSQKATB4T7</HostId></Error>

as to what gets fixed, do you mean that rclone should support an empty dir on s3 or that rclone's behavior should be self-consistent?

  • rclone will not create an empty dir.
  • rclone can see an empty dir.
  • rclone will claim to delete an empty dir but does not actually delete it.

on an rclone mount, the behavior is even stranger.

  • rclone will appear to create an empty dir, but only in the local dircache, not in the backend. only if you copy a real file into that virtual dir will rclone create both the dir and the file in the backend.
  • rclone will appear to delete an empty dir created by another app, but after --dir-cache-time expires, that deleted empty dir will reappear. rclone deletes the empty dir from the local dircache, not from the backend, so when rclone refreshes the dircache, the dir reappears.
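the reappearing-dir behavior on a mount can be modelled with a toy cache. (this is an illustration only; the class, names, and TTL are made up and this is not rclone's actual VFS code.)

```python
import time

class DirCache:
    """Toy model of a mount's dir cache: local edits are only visible
    until the cached listing expires and is re-read from the backend."""

    def __init__(self, backend_dirs, ttl):
        self.backend = set(backend_dirs)  # what the server actually reports
        self.ttl = ttl                    # stands in for --dir-cache-time
        self.local = set(self.backend)    # the cached view shown to the user
        self.loaded = time.monotonic()

    def rmdir(self, name):
        # the deletion happens in the cache only; it never reaches the backend
        self.local.discard(name)

    def listdir(self):
        if time.monotonic() - self.loaded > self.ttl:
            # cache expired: refresh from the backend, undoing local deletes
            self.local = set(self.backend)
            self.loaded = time.monotonic()
        return sorted(self.local)

cache = DirCache({"Spam"}, ttl=0.1)
cache.rmdir("Spam")
print(cache.listdir())  # [] - the dir looks deleted
time.sleep(0.2)
print(cache.listdir())  # ['Spam'] - it reappears once the TTL expires
```

the key point: because the backend never received the delete, every cache refresh resurrects the dir.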

s3 does not support directories; instead, it emulates them in its web interface.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html
i would guess that since the s3 api does not support that, rclone does not support it.
as to what the aws s3 command line app does with an empty dir, i have no idea.

given that all the other third-party apps i have used support the concept of an empty dir, i believe that rclone should also support it.

if you look at my rclone rmdirs example and the OP's, they are the same: rclone claims to delete an empty dir but does not actually delete it.
and that is the same behavior with rclone mount.
perhaps a new topic should be started as a feature request to discuss that....

@asdffdsa & @Ole I appreciate all of the time you have taken to look into the issue of removing empty directories. I think rclone is great software - it makes using S3 storage much more user-friendly and I like both the command line usage for copy and sync and the GUI for working on & viewing files. It is the primary method I use to connect with my S3 storage on my computer.

Of course, I'd love rclone to clean up and remove empty directories on S3 to mirror another data set on a sync, and secondarily to support empty directories. In my case, there is a workaround of using the S3 web interface or another tool to remove directories. Should I post a new topic as a feature request to put this in the hopper for newer versions of rclone, or is this thread sufficient?

@asdffdsa Thanks a lot for the additional explanations!

I am obviously a beginner with the mapping/emulation of a file system on top of (bucket based) object storage. I naively assumed there was a relatively well-defined (de facto) standard. That is the impression you get when reading your Amazon link and seeing that @wanderer can work around the issue using Cyberduck and the web interface.

I fully agree that as a bare minimum rclone's behavior should be self-consistent (within reasonable efforts).

It is my impression that this has already been discussed and recorded in the GitHub issues I listed above.

My best advice is to upvote the GitHub issues that best address your issues, and if needed supplement them with a link to this thread.

I have noted that some of these GitHub issues are several years old and marked “help wanted”. It also looks like some attempts started and then stalled. I guess other things had higher priority (or were more interesting).

at this point in time, i have done my testing, made my mistakes, and learned what to expect from rclone.
i work around what i have to.

we have to pick our rclone battles.
so far i have had three major issues with rclone and s3. this being one of them.
for the other two, which i could not work around, @ncw worked with me, in long, complicated posts, to fix them.
good enough for me.

My observation wasn't meant as a critique of you, ncw, or anybody else - you all do an excellent job :ok_hand:

I am happy too and actually moved from a paid product because rclone was more stable/usable in the areas I needed :slightly_smiling_face:

that thought never crossed my mind, sorry if i gave that impression.

My thoughts exactly. Rclone isn't perfect, but it is very good.

Time to be optimistic, here is a code change ready for review that I missed in my first search:

@wanderer You look experienced, so I guess you know the git/GitHub tricks to do an early trial. You can find some Go setup/build guidance in the rclone contribution guide.


Sorry to be absent from this thread until now!

S3 is a key value store - it assigns no meaning to the keys - they are just strings as far as it is concerned.

Rclone however interprets those keys as file name paths, and S3 gives a little help with that with its delimiter searching so you can break things at /.

That scheme works very well for files, but it doesn't work for directories. In rclone and s3, directories are an emergent property of files - if there is a file called a/b/c then there will be a directory called a/b.

The side effect of this is that it is impossible to have empty directories.

Some tools (not rclone yet) choose to mark each directory with an empty object. So in the example above, there would be a 0 sized object called a/b/ and another called a/.

rclone knows enough to ignore these 0 sized objects, but it doesn't create or remove them itself. This causes the "Empty Directories Remain" problem as originally outlined.
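To make the emergent-directory idea concrete, here is a small Python sketch of an S3-style key-value store (a toy model for illustration, not rclone's actual code). Directories only exist as prefixes derived from object keys, so "removing" an empty one deletes nothing, while a zero-byte marker key is a real object that can be created and deleted:

```python
# Toy model of an S3-style key-value store (illustrative; not rclone code).
# "Directories" are derived from object keys, so an empty directory has
# nothing backing it unless a zero-byte marker object is stored.

def list_dirs(keys, prefix=""):
    """Emulate a delimiter='/' listing: return the next path segment
    (as 'dir/') of every key under prefix."""
    dirs = set()
    for key in keys:
        if key.startswith(prefix):
            rest = key[len(prefix):]
            if "/" in rest:
                dirs.add(prefix + rest.split("/", 1)[0] + "/")
    return sorted(dirs)

store = {"a/b/c": b"data"}           # a single real object
print(list_dirs(store))              # ['a/'] - directory a/ "exists"
print(list_dirs(store, "a/"))        # ['a/b/']

# "Removing" directory a/b deletes nothing: there is no such key.
store.pop("a/b/", None)              # no-op
print(list_dirs(store, "a/"))        # ['a/b/'] - still derived from a/b/c

# A zero-byte marker object is how some tools represent an empty dir;
# it is a real key, so creating and deleting it actually works.
store["empty/"] = b""
print(list_dirs(store))              # ['a/', 'empty/']
```

This is why rmdirs can log "Removing directory" without error yet change nothing: for an unmarked directory there is simply no object to delete.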

This is also the problem fixed by the above pull request. I just gave it another review - it is almost ready for merge so we'll get it into 1.57 with any luck!


@ole, I'm just an amateur - I don't know the github tricks you speak of. I am decent with tech but by no means a master coder or computer programmer. Thanks for the compliment though!

Thanks for the reply @ncw. I'm using a workaround now, but it'll be nice when the empty directory support is added in a new version of rclone. Thanks for adding that in and making it more user friendly!


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.