AWS S3 - does killing (ctrl-c) a sync command call "abort" during multipart upload?

What is the problem you are having with rclone?

When killing an rclone sync command (via Ctrl-C) while it's conducting a multipart upload to AWS S3, does rclone automatically call abort?

I see in the docs there is --s3-leave-parts-on-error, which seems to imply that abort is called on error by default.
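If I'm reading the docs right, the default and the flag behave roughly like this (the local path and remote name below are placeholders):

```bash
# Default behaviour (per the docs): if the upload errors out,
# rclone calls abort so no orphaned parts are left in S3.
rclone sync /local/dir remote:bucket

# With the flag: failed multipart uploads keep their parts in S3,
# which may help resuming but continues to accrue storage charges.
rclone sync /local/dir remote:bucket --s3-leave-parts-on-error
```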

What happens when the user manually kills a multipart upload using Ctrl-C? Does rclone call abort on the multipart upload?


What is your rclone version (output from rclone version)

rclone v1.52.0

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04.2 LTS (Bionic Beaver), 64-bit

Which cloud storage system are you using? (eg Google Drive)

AWS S3 (with storage_class = DEEP_ARCHIVE)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync

The rclone config contents with secrets removed.

type = s3
provider = aws
region = us-west-2
location_constraint = us-west-2
storage_class = DEEP_ARCHIVE

I think if you press CTRL-C then rclone will not abort the upload.

The upload will be aborted if anything else goes wrong.

I should probably fix this...


I think that would be a good feature. What happens to the upload when one presses CTRL-C, then?

From reading AWS's Multipart Upload Overview (I would link it, but this forum denies me that power), I gather that uploads that are neither completed nor aborted continue to accrue storage charges.

Is there some way to get the Upload ID? If I can get that, I can manually abort the uploads using awscli.
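Side note for anyone else finding this thread: AWS also supports a bucket lifecycle rule that auto-aborts stale multipart uploads, so charges from any that slip through don't accrue indefinitely. A minimal sketch, with my-bucket as a placeholder name:

```bash
# Auto-abort any multipart upload still in progress after 7 days.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "abort-stale-multipart-uploads",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
    }]
  }'
```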

Can you please make a new issue on GitHub about cancelling the multipart upload on CTRL-C? I'd like to implement this.

Also, you could mention on the issue that you'd like to know the multipart upload ID - we should be able to log it for you. You may already see it with -vv, but I'm not sure.

Thanks


Just made the issue. Sadly, I don't have permission to share links, so it's issue #4300

Also, I was able to figure out the UploadIds of the un-aborted uploads using the awscli s3api's list-multipart-uploads command. I manually aborted them, so all's well as far as those lingering uploads are concerned.
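Concretely, the commands were along these lines (the bucket and key names are placeholders):

```bash
# List all in-progress (un-aborted) multipart uploads in the bucket.
aws s3api list-multipart-uploads --bucket my-bucket

# Abort one, using the Key and UploadId values from the listing above.
aws s3api abort-multipart-upload \
  --bucket my-bucket \
  --key path/to/object \
  --upload-id "EXAMPLE_UPLOAD_ID"
```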

I think this would be a nice addition, and thank you for answering me, Nick! :smiley:

rclone is SUPER useful, and a major life saver!

You can share links now as well.


Thanks

I could get rclone cleanup to do this, like I did for the qingstor backend. It would be useful for B2 also.
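If cleanup grows S3 support, I'd expect the invocation to look something like this (remote name is a placeholder; this is a sketch of the proposed behaviour, not something implemented yet as of this thread):

```bash
# Hypothetical, assuming cleanup gains S3 support: remove any
# incomplete multipart uploads left behind on the remote.
rclone cleanup remote:bucket
```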
