Can't delete empty folders GCP bucket (not drive)

What is the problem you are having with rclone?

Can't delete empty folders.

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.2

  • os/version: Microsoft Windows Server 2019 Datacenter 1809 (64 bit)
  • os/kernel: 10.0.17763.4974 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.21.3
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Cloud Storage (not google drive)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone.exe rmdirs remote:bucketname\server\Databases\20230728\subfolder --log-level DEBUG

output (of relevance):

2023/11/03 19:08:18 DEBUG : removing 1 level 0 directories
2023/11/03 19:08:18 INFO  : GCS bucket bucketname path server/Databases/20230728/subfolder: Removing directory
2023/11/03 19:08:18 DEBUG : 4 go routines active

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[remote]
type = google cloud storage
service_account_file = D:\DBA\rclone\service-account.json
object_acl = bucketOwnerFullControl
bucket_acl = private
bucket_policy_only = true
location = nam4
storage_class = COLDLINE
env_auth = false
project_number = XXX
### Double check the config for sensitive info before posting publicly

A log from the command that you were trying to run with the -vv flag

output (of relevance):

2023/11/03 19:08:18 DEBUG : removing 1 level 0 directories
2023/11/03 19:08:18 INFO  : GCS bucket bucketname path server/Databases/20230728/subfolder: Removing directory
2023/11/03 19:08:18 DEBUG : 4 go routines active

rclone reports that the folder was deleted, but it still exists when listing, and when looking directly at the bucket in the GCP console.

I've tried every method I could find to delete one folder: rclone purge, rclone rmdir, rclone rmdirs, rclone delete, and mounting the bucket as a local disk with rclone mount and running a simple cmd.exe rd on the folder. The local cache in rclone mount reports the "deleted" folder as not found, but if I flush the cache or simply re-mount the same bucket, the folder reappears.

It never shows an error with -vv or --log-level DEBUG, and never shows "folder is not empty". The commands DO delete all files under the folder path as expected, but they refuse to delete the empty folders.

I also tried the various --delete-after style flags, --rmdirs, etc. They all report that the folder was deleted, with no errors, but it doesn't actually get deleted.

Is there another flag or command that will make this work?

Side note: I'm switching to rclone from the old CloudBerry Drive (no longer supported; the latest .NET patches broke core functionality). rclone does everything I need and performs much better/faster; the only thing I'm stuck on is getting a folder to actually delete. The same simple cmd.exe rd commands worked fine since ~2015 with CloudBerry on both AWS and GCP, so I'm not sure what I'm missing.

welcome to the forum,

as per rclone docs,
"Empty folders are unsupported for bucket based remotes"

i know with AWS S3 and its clones, when using their web console, a marker is created.
most other third-party S3 tools create that marker as well.

in the past and still by default, rclone does not create markers.
over the years, this has been much discussed/debated in the forum.

so you might just manually delete them and, going forward, use --gcs-directory-markers
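to illustrate why empty folders can't exist in the first place, here is a rough sketch (plain python, not rclone internals) of how a bucket's flat key/value namespace works, and what a directory marker actually is:

```python
# illustrative sketch of a bucket's flat namespace -- not rclone's code.
# there are no real folders: a "folder" only appears to exist while
# some object key carries its prefix.
bucket = {
    "server/Databases/20230728/subfolder/db.bak": b"...",  # a real object
}

def folder_exists(bucket, prefix):
    """a folder 'exists' only if some object key starts with the prefix."""
    return any(key.startswith(prefix) for key in bucket)

# delete the only file and the "folder" vanishes with it -- there was
# never a folder object to remove in the first place
del bucket["server/Databases/20230728/subfolder/db.bak"]
print(folder_exists(bucket, "server/Databases/20230728/subfolder/"))  # False

# a directory marker is just a zero-byte object whose key ends with "/";
# with --gcs-directory-markers rclone creates these when it makes directories
bucket["server/Databases/20230728/subfolder/"] = b""
print(folder_exists(bucket, "server/Databases/20230728/subfolder/"))  # True
```

so "deleting the empty folder" only means something once a marker object exists to delete.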

i can tell you from experience, the mount really does not like empty folders.

fwiw, i dumped cloudberry for veeam over five years ago.
veeam community editions are free and much more powerful than rclone.

Ok, --gcs-directory-markers might be what I need; I'll test next week. It won't help with my existing bucket (PBs of data / billions of files). I used veeam a while ago and didn't much care for it; I wrote my own solution for what I need, and it works fine for my role.

I'll reply if --gcs-directory-markers works going forward next week. Thank you!

well, at least we agree about cloudberry :wink:

perhaps use rclone to do a server-side move to a new bucket.
tho really, it is a server-side copy then delete.

and did you see
https://rclone.org/overview/#optional-features

rclone backend features remote:
should show
"CanHaveEmptyDirectories": false"

Yes, confirmed: no empty dirs. Not sure how CloudBerry did it; I don't see any hidden files or obvious weirdness.

I might be able to do a hybrid, using old CloudBerry just for purges for 6 months to cover the folders created before --gcs-directory-markers; that should be good enough for the short term.

Yeah, just to confirm: I was able to reinstall CloudBerry just for purging the old folders, which will be good enough short term. (The latest .NET patches broke uploads in CloudBerry by breaking the headers sent to GCP.) I'll test the new folders from rclone after the weekend (nothing new gets created until the next day). Thanks again for the help!

I got new folders created over the weekend with new uploads using rclone and --gcs-directory-markers. Unfortunately, I don't see any change in the folders or files, and a cascade delete of a folder only deletes the files in the subfolders; it still refuses to purge the empty folders.

Any other ideas?

In the meantime, I am able to use old CloudBerry to purge the older files and folders without any issues. But who knows how long before that breaks.

I noticed that --gcs-directory-markers defaults to false, so I'm worried that adding a bare --gcs-directory-markers to the command might still leave it at false. I've added --gcs-directory-markers=true and will see what happens with the new folders overnight.

No change, and new folders still refuse to be actually deleted from rclone.

--gcs-directory-markers=true doesn't seem to change anything from what I can tell.

Anyone else know what I might be missing here? I wouldn't think deleting one empty folder would be this difficult. :slight_smile:

edit: here comes @ncw to the rescue!

Do an rclone rmdir on a single empty folder with --gcs-directory-markers=true -vv --dump bodies --retries 1 and post the output. That should show us what the command is doing.

Semi-redacted for the private bits; it looks like it's getting a 404 Not Found?

2023/11/09 16:28:45 DEBUG : rclone: Version "v1.64.2" starting with parameters ["rclone.exe" "rmdir" "remote:bucket/server/type/20231109/FOLDER-TO-DELETE" "--gcs-directory-markers=true" "-vv" "--dump" "bodies" "--retries" "1"]
2023/11/09 16:28:45 DEBUG : Creating backend with remote "remote:bucket/server/type/20231109/FOLDER-TO-DELETE"
2023/11/09 16:28:45 DEBUG : Using config file from "D:\\rclone\\rclone.conf"
2023/11/09 16:28:45 DEBUG : remote: detected overridden config - adding "{juk_h}" suffix to name
2023/11/09 16:28:45 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2023/11/09 16:28:45 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2023/11/09 16:28:45 DEBUG : HTTP REQUEST (req 0xc000740100)
2023/11/09 16:28:45 DEBUG : POST /token HTTP/1.1
Host: oauth2.googleapis.com
User-Agent: rclone/v1.64.2
Content-Length: 824
Content-Type: application/x-www-form-urlencoded
Accept-Encoding: gzip

assertion={redacted}
2023/11/09 16:28:45 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2023/11/09 16:28:45 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2023/11/09 16:28:45 DEBUG : HTTP RESPONSE (req 0xc000740100)
2023/11/09 16:28:45 DEBUG : HTTP/2.0 200 OK
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Thu, 09 Nov 2023 16:28:45 GMT
Server: scaffolding on HTTPServer2
Vary: Origin
Vary: X-Origin
Vary: Referer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0

{"access_token":"{redacted}","expires_in":3599,"token_type":"Bearer"}
2023/11/09 16:28:45 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2023/11/09 16:28:45 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2023/11/09 16:28:45 DEBUG : HTTP REQUEST (req 0xc000741400)
2023/11/09 16:28:45 DEBUG : GET /storage/v1/b/bucket/o/server%2Ftype%2F20231109%2FFOLDER-TO-DELETE?alt=json&prettyPrint=false HTTP/1.1
Host: storage.googleapis.com
User-Agent: rclone/v1.64.2
Authorization: XXXX
X-Goog-Api-Client: gl-go/1.21.3 gdcl/0.134.0
Accept-Encoding: gzip

2023/11/09 16:28:45 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2023/11/09 16:28:45 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2023/11/09 16:28:45 DEBUG : HTTP RESPONSE (req 0xc000741400)
2023/11/09 16:28:45 DEBUG : HTTP/2.0 404 Not Found
Content-Length: 271
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json; charset=UTF-8
Date: Thu, 09 Nov 2023 16:28:45 GMT
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: UploadServer
Vary: Origin
Vary: X-Origin
X-Guploader-Uploadid: {redacted}

{"error":{"code":404,"message":"No such object: bucket/server/type/20231109/FOLDER-TO-DELETE","errors":[{"message":"No such object: bucket/server/type/20231109/FOLDER-TO-DELETE","domain":"global","reason":"notFound"}]}}
2023/11/09 16:28:45 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2023/11/09 16:28:45 DEBUG : fs cache: renaming cache item "remote:bucket/server/type/20231109/FOLDER-TO-DELETE" to be canonical "remote{juk_h}:bucket/server/type/20231109/FOLDER-TO-DELETE"
2023/11/09 16:28:45 INFO  : GCS bucket bucket path server/type/20231109/FOLDER-TO-DELETE: Removing directory
2023/11/09 16:28:45 DEBUG : 4 go routines active

Hmm, I think removing a directory marker if it is specified in the root is broken :frowning:

I tried removing the directory by mounting the parent and issuing an rmdir and that did work!

2023/11/09 17:31:46 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2023/11/09 17:31:46 DEBUG : empty/: Removing directory marker
2023/11/09 17:31:46 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2023/11/09 17:31:46 DEBUG : HTTP REQUEST (req 0xc000542e00)
2023/11/09 17:31:46 DEBUG : DELETE /storage/v1/b/rclone-dirmarkers/o/empty%2F?alt=json&prettyPrint=false HTTP/1.1
Host: storage.googleapis.com
User-Agent: rclone/v1.65.0-beta.7468.23ab6fa3a
Authorization: XXXX
X-Goog-Api-Client: gl-go/1.20.1 gdcl/0.148.0
Accept-Encoding: gzip

2023/11/09 17:31:46 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2023/11/09 17:31:46 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2023/11/09 17:31:46 DEBUG : HTTP RESPONSE (req 0xc000542e00)
2023/11/09 17:31:46 DEBUG : HTTP/2.0 204 No Content
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json
Date: Thu, 09 Nov 2023 17:31:46 GMT
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: UploadServer
Vary: Origin
Vary: X-Origin
X-Guploader-Uploadid: ABPtcPoChQe7Gr8WiLpdHlUGbQMmbNgqIvFSuBHHY7NdU8ExxpjunZLrfSOZfxLEGdkhh7BobAmfwVlKrg
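Note how the successful DELETE above targets the marker object itself: the directory path plus a trailing "/", URL-encoded into the JSON API's object endpoint (empty%2F), whereas the failing run's GET queried the path without the trailing slash and got a 404. A sketch of the naming (marker_api_path is a hypothetical helper written for illustration, not rclone code):

```python
from urllib.parse import quote

# hypothetical helper: build the GCS JSON-API object path for a directory
# marker. The marker's object name is the directory path plus a trailing
# "/", percent-encoded into the /b/<bucket>/o/<object> endpoint.
def marker_api_path(bucket, dir_path):
    object_name = dir_path.strip("/") + "/"
    return f"/storage/v1/b/{bucket}/o/{quote(object_name, safe='')}"

print(marker_api_path("rclone-dirmarkers", "empty"))
# -> /storage/v1/b/rclone-dirmarkers/o/empty%2F
```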

How have you been trying to delete the directory?

I've tried all the methods: rclone by itself, and mounting the bucket as a local disk and using a simple rd [path] /s /q or an old-fashioned FileSystemObject folder.delete. The local cache shows the folder as deleted, but the folders persist no matter how I try to delete them.

I can still use the mounted volume for discovery and kick off individual direct rclone deletes, if that works. But this is a LOT of deletes every day (old backup folders going back 6 months), a rolling window of deletes. Not sure if you had success mounting as a local disk and deleting? (This would be difficult to scale on my side, doing one-by-one mount/delete/unmount.)

Also in my bucket, the folder structure looks like this:

bucket -> server -> date -> type -> folder (and multiple files under this last folder)

There are multiple servers, multiple dates, multiple types. I'm definitely mapping and using the bucket root.
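For the rolling-window cleanup, one approach is to generate a purge command per expired dated folder rather than going through a mount. A sketch, assuming the dated folders are named YYYYMMDD (the layout, remote name, and retention period below are illustrative; rclone purge and rclone lsf are real commands):

```python
from datetime import date, timedelta

# hypothetical example: given directory listings shaped like the structure
# above, emit one `rclone purge` command per dated folder older than the
# retention window.
def expired_purge_commands(remote, dated_dirs, today, retention_days=180):
    cutoff = today - timedelta(days=retention_days)
    commands = []
    for d in sorted(dated_dirs):
        # last path component assumed to be a YYYYMMDD stamp
        stamp = d.rsplit("/", 1)[-1]
        folder_date = date(int(stamp[:4]), int(stamp[4:6]), int(stamp[6:8]))
        if folder_date < cutoff:
            commands.append(f"rclone purge {remote}:{d}")
    return commands

cmds = expired_purge_commands(
    "remote",
    ["server1/20230501", "server1/20231101"],
    today=date(2023, 11, 9),
)
print(cmds)  # ['rclone purge remote:server1/20230501']
```

In practice, the dated_dirs list could come from something like rclone lsf --dirs-only against the bucket, and the generated commands run in a scheduled task, avoiding the mount entirely.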