jamshid
(Jamshid)
October 30, 2024, 7:22pm
1
I can file an issue but thought I'd check first if there's a reason rclone does not use the DeleteObjects API (DeleteObjects - Amazon Simple Storage Service) when deleting multiple objects, e.g. with rclone purge.
A single POST, with an XML body containing the list of keys/versions to delete, should be more efficient than making multiple individual DELETE requests.
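For context, here is a rough sketch of what such a batched delete looks like with the AWS SDK for Go v2 (an illustration of the API only, not rclone's internal code; the bucket name and keys are placeholders):

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	// Placeholder bucket and keys, for illustration only.
	bucket := "my-bucket"
	keys := []string{"dir/a.txt", "dir/b.txt", "dir/c.txt"}

	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Build the list of object identifiers; DeleteObjects accepts
	// up to 1000 of these in a single POST.
	objects := make([]types.ObjectIdentifier, 0, len(keys))
	for _, k := range keys {
		objects = append(objects, types.ObjectIdentifier{Key: aws.String(k)})
	}

	out, err := client.DeleteObjects(context.TODO(), &s3.DeleteObjectsInput{
		Bucket: aws.String(bucket),
		Delete: &types.Delete{Objects: objects},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("deleted %d objects, %d errors\n", len(out.Deleted), len(out.Errors))
}
```

Each call like this replaces up to 1000 individual DeleteObject round trips.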
ncw
(Nick Craig-Wood)
October 31, 2024, 10:08am
2
The s3 backend should definitely be using that in its Purge implementation (which recursively deletes a directory).
Elsewhere the s3 backend could be doing that, but it would have to use lib/batcher which is reasonably complicated but not impossible.
What is your use case @jamshid?
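To give a sense of the batching involved (a rough sketch only; this is not lib/batcher's interface nor rclone's Purge code), DeleteObjects accepts at most 1000 keys per request, so a large delete job has to be split into chunks:

```
// chunked invokes fn once per batch of at most 1000 keys, which is the
// per-request limit documented for DeleteObjects.
func chunked(keys []string, fn func(batch []string) error) error {
	const maxKeysPerRequest = 1000
	for len(keys) > 0 {
		n := len(keys)
		if n > maxKeysPerRequest {
			n = maxKeysPerRequest
		}
		if err := fn(keys[:n]); err != nil {
			return err
		}
		keys = keys[n:]
	}
	return nil
}
```

Wiring something like this into the backend's delete path, and handling the per-key failures reported in the response, is presumably where lib/batcher would come in.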
jamshid
(Jamshid)
October 31, 2024, 2:39pm
3
Thanks. No specific use case, just trying to improve performance of big delete jobs. I guess I'll file an issue.
jamshid
(Jamshid)
November 1, 2024, 9:29pm
4
Filed the improvement request:
opened 09:28PM - 01 Nov 24 UTC
#### The associated forum post URL from `https://forum.rclone.org`
https://forum.rclone.org/t/any-reason-rclone-does-not-use-s3-deleteobjects-when-deleting-multiple-objects/48476
#### What is your current rclone version (output from `rclone version`)?
```
rclone v1.68.1
- os/version: darwin 14.7.1 (64 bit)
- os/kernel: 23.6.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.23.1
- go/linking: dynamic
- go/tags: none
```
#### What problem are you trying to solve?
Faster deletes e.g. when purging an S3 bucket.
#### How do you think rclone should be changed to solve that?
Use the S3 `DeleteObjects` API, which is a single `POST` request that accepts a list of up to 1000 object keys/versions in the body. That should be faster than sending 1000 individual `DELETE` requests.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
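For reference (an illustration based on the AWS documentation, not part of the issue as filed), the body of that single `POST /?delete` request is an XML list of the keys to remove; the keys and version ID below are placeholders:

```
<?xml version="1.0" encoding="UTF-8"?>
<Delete xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Quiet>true</Quiet>
  <Object>
    <Key>dir/object-1</Key>
  </Object>
  <Object>
    <Key>dir/object-2</Key>
    <VersionId>example-version-id</VersionId>
  </Object>
</Delete>
```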
Nick replied in the forum:
> The s3 backend should definitely be using that in its [Purge 1](https://github.com/rclone/rclone/blob/b9207e57274cd6e7c488c5bd751a058ba0f8b3b9/backend/s3/s3.go#L5466-L5471) implementation (which recursively deletes a directory).
> Elsewhere the s3 backend could be doing that, but it would have to use [lib/batcher 1](https://github.com/rclone/rclone/blob/master/lib/batcher/batcher.go) which is reasonably complicated but not impossible.