Purge doesn't handle aborted uploads on Cloudflare

What is the problem you are having with rclone?

Purge doesn't work, terminates with BucketNotEmpty.

Run the command 'rclone version' and share the full output of the command.

rclone v1.60.1
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-192-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

s3 / Cloudflare

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone purge \
  --config=/dev/null \
  :s3:/abcdef

The rclone config contents with secrets removed.

no config

A log from the command with the -vv flag

2022/11/19 01:53:30 DEBUG : Setting default for s3-provider="Cloudflare" from environment variable RCLONE_S3_PROVIDER
2022/11/19 01:53:30 DEBUG : Setting default for s3-access-key-id="x" from environment variable RCLONE_S3_ACCESS_KEY_ID
2022/11/19 01:53:30 DEBUG : Setting default for s3-secret-access-key="y" from environment variable RCLONE_S3_SECRET_ACCESS_KEY
2022/11/19 01:53:30 DEBUG : Setting default for s3-endpoint="https://z.r2.cloudflarestorage.com" from environment variable RCLONE_S3_ENDPOINT
2022/11/19 01:53:30 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "purge" "--config=/dev/null" "-vv" ":s3:/v"]
2022/11/19 01:53:30 DEBUG : Creating backend with remote ":s3:/v"
2022/11/19 01:53:30 DEBUG : Using config file from ""
2022/11/19 01:53:30 DEBUG : Setting s3_provider="Cloudflare" from environment variable RCLONE_S3_PROVIDER
2022/11/19 01:53:30 DEBUG : Setting s3_access_key_id="x" from environment variable RCLONE_S3_ACCESS_KEY_ID
2022/11/19 01:53:30 DEBUG : Setting s3_secret_access_key="y" from environment variable RCLONE_S3_SECRET_ACCESS_KEY
2022/11/19 01:53:30 DEBUG : Setting s3_endpoint="https://z.r2.cloudflarestorage.com" from environment variable RCLONE_S3_ENDPOINT
2022/11/19 01:53:30 DEBUG : :s3: detected overridden config - adding "{q9EX4}" suffix to name
2022/11/19 01:53:30 DEBUG : Setting s3_provider="Cloudflare" from environment variable RCLONE_S3_PROVIDER
2022/11/19 01:53:30 DEBUG : Setting s3_access_key_id="x" from environment variable RCLONE_S3_ACCESS_KEY_ID
2022/11/19 01:53:30 DEBUG : Setting s3_secret_access_key="y" from environment variable RCLONE_S3_SECRET_ACCESS_KEY
2022/11/19 01:53:30 DEBUG : Setting s3_endpoint="https://z.r2.cloudflarestorage.com" from environment variable RCLONE_S3_ENDPOINT
2022/11/19 01:53:30 DEBUG : fs cache: renaming cache item ":s3:/v" to be canonical ":s3{q9EX4}:v"
2022/11/19 01:53:30 DEBUG : S3 bucket v: bucket is versioned: false
2022/11/19 01:53:30 DEBUG : Waiting for deletions to finish
2022/11/19 01:53:31 ERROR : Attempt 1/3 failed with 1 errors and: BucketNotEmpty: The bucket you tried to delete (v) is not empty (account z).
  status code: 409, request id: , host id:
2022/11/19 01:53:31 DEBUG : Waiting for deletions to finish
2022/11/19 01:53:31 ERROR : Attempt 2/3 failed with 1 errors and: BucketNotEmpty: The bucket you tried to delete (v) is not empty (account z).
  status code: 409, request id: , host id:
2022/11/19 01:53:31 DEBUG : Waiting for deletions to finish
2022/11/19 01:53:31 ERROR : Attempt 3/3 failed with 1 errors and: BucketNotEmpty: The bucket you tried to delete (v) is not empty (account z).
  status code: 409, request id: , host id:
2022/11/19 01:53:31 DEBUG : 4 go routines active
2022/11/19 01:53:31 Failed to purge: BucketNotEmpty: The bucket you tried to delete (v) is not empty (account z).
  status code: 409, request id: , host id:

rclone backend list-multipart-uploads finds a multipart upload, so rclone can definitely list it; it just doesn't seem to delete it before purging.

hi,

you might want

rclone backend cleanup --dry-run

which removes unfinished multipart uploads.

Yes, that would work, with -o max-age=0. I don't understand, though: why is this not part of purge? How can you possibly purge a bucket without running cleanup first?
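Putting the two together, the workaround (using the bucket path from the original command) would look something like this; -o max-age=0 tells cleanup to abort pending multipart uploads regardless of how recently they were started:

```shell
# Abort all unfinished multipart uploads, however recent (add --dry-run
# first to preview what would be removed), then purge as usual.
rclone backend cleanup :s3:/abcdef -o max-age=0
rclone purge :s3:/abcdef
```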

I think this is a deviation from S3 compatibility. When you do this on AWS, the uploaded parts are stored somewhere other than the destination bucket, so you don't need to remove them to purge the bucket.

Eg

(note that I kill -9 rclone from another terminal, as otherwise it removes the multipart uploads itself when it exits)

Upload and kill

$ rclone copy -vv  1G s3:rclone-test-bucket-oh-yea
2022/11/19 13:09:45 DEBUG : rclone: Version "v1.60.1-DEV" starting with parameters ["rclone" "copy" "-vv" "1G" "s3:rclone-test-bucket-oh-yea"]
2022/11/19 13:09:45 DEBUG : Creating backend with remote "1G"
2022/11/19 13:09:45 DEBUG : Using config file from "/home/ncw/.rclone.conf"
2022/11/19 13:09:45 DEBUG : fs cache: adding new entry for parent of "1G", "/tmp"
2022/11/19 13:09:45 DEBUG : Creating backend with remote "s3:rclone-test-bucket-oh-yea"
2022/11/19 13:09:45 DEBUG : 1G: Need to transfer - File not found at Destination
2022/11/19 13:09:46 INFO  : S3 bucket rclone-test-bucket-oh-yea: Bucket "rclone-test-bucket-oh-yea" created with ACL "private"
2022/11/19 13:09:48 DEBUG : 1G: multipart upload starting chunk 1 size 5Mi offset 0/1Gi
2022/11/19 13:09:48 DEBUG : 1G: multipart upload starting chunk 2 size 5Mi offset 5Mi/1Gi
2022/11/19 13:09:48 DEBUG : 1G: multipart upload starting chunk 3 size 5Mi offset 10Mi/1Gi
2022/11/19 13:09:48 DEBUG : 1G: multipart upload starting chunk 4 size 5Mi offset 15Mi/1Gi
2022/11/19 13:09:54 DEBUG : 1G: multipart upload starting chunk 5 size 5Mi offset 20Mi/1Gi
Killed

See multipart uploads

$ rclone backend list-multipart-uploads s3:rclone-test-bucket-oh-yea
{
	"rclone-test-bucket-oh-yea": [
		{
			"ChecksumAlgorithm": null,
			"Initiated": "2022-11-19T13:09:49Z",
			"Initiator": {
				"DisplayName": "rclone",
				"ID": "arn:aws:iam::071155266422:user/rclone"
			},
			"Key": "1G",
			"Owner": {
				"DisplayName": null,
				"ID": "bdda7f5ee2c061a28f824d1890f570ff4e32f13811267fa8c19ac4dd2a7bd6c5"
			},
			"StorageClass": "STANDARD",
			"UploadId": "0QQ3qfGHTKDOblswR0MziO98rQ55bwL68He7lTDYsAMDVBjUo4VcnuFr9jKrZHrqD_w72SQFLtinjRZQS8IV_5fSSBhvzcQjmsjQNLovVfmOXUKWuZqBiSJNn9jr6Rvq"
		}
	]
}

Purge bucket is OK

$ rclone -vv purge s3:rclone-test-bucket-oh-yea
2022/11/19 13:10:39 DEBUG : rclone: Version "v1.60.1-DEV" starting with parameters ["rclone" "-vv" "purge" "s3:rclone-test-bucket-oh-yea"]
2022/11/19 13:10:39 DEBUG : Creating backend with remote "s3:rclone-test-bucket-oh-yea"
2022/11/19 13:10:39 DEBUG : Using config file from "/home/ncw/.rclone.conf"
2022/11/19 13:10:40 DEBUG : S3 bucket rclone-test-bucket-oh-yea: bucket is versioned: false
2022/11/19 13:10:40 DEBUG : Waiting for deletions to finish
2022/11/19 13:10:40 INFO  : S3 bucket rclone-test-bucket-oh-yea: Bucket "rclone-test-bucket-oh-yea" deleted
2022/11/19 13:10:40 DEBUG : 4 go routines active
$ 

hi,

according to
https://rclone.org/overview/#optional-features
s3 does not support purge?

though last night I did a test and it worked, same as you


@ncw this was on AWS s3?

All backends support rclone purge; the ones marked in that table just have a single API call to do it, rather than iterating over the remote deleting objects one by one.
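One way to see what a given remote implements is the generic features backend command, which prints the remote's optional features as JSON, including a Purge field (the bucket here is the test bucket from above):

```shell
# Dump the optional features of this remote as JSON; the "Purge" field
# shows whether the backend implements a single-call purge.
rclone backend features s3:rclone-test-bucket-oh-yea
```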

Yes on AWS.

I didn't try any other S3 compatibles though.

perhaps the docs should make that clear:

--- that all backends support purge,
--- which backends use a single API call,
--- which backends iterate through every dir/file, using one API call per deleted object.

So it must be a specific thing for Cloudflare? Still, if the purge needs to go through each file/dir, shouldn't the cleanup also be part of the command?

I think so.

It would be possible to do this for provider = Cloudflare, of course. I wonder if they will fix this deviation from the de facto standard, though.

You could try reporting it to them and see what they say.

The docs say this about the Purge feature, which kind of implies the above but could be clearer, I agree.

Purge

This deletes a directory quicker than just deleting all the files in the directory.