Rclone serve s3 fails MultiPartUploads if auth enabled

I'm trying the rclone serve s3 functionality and am finding that uploads large enough to trigger a CreateMultipartUpload result in an access denied error.

version:

# rclone version
rclone v1.65.2
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 6.2.0-1017-aws (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.6
- go/linking: static
- go/tags: none

config.ini:

# cat /config.ini
[serves3]
type = s3
provider = Rclone
endpoint = http://0.0.0.0:8080/
use_multipart_uploads = false
disable_multipart_uploads = true

server:

rclone serve s3 --config /config.ini --auth-key xxxxxxx,yyyyyyy -vvvv --s3-disable-checksum  /data
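For reference, the client is pointed at the served endpoint with the same key pair passed to --auth-key. A minimal sketch, assuming the standard aws CLI environment variables (the values are placeholders matching the --auth-key above; the region is only set because the CLI needs one for SigV4 signing):

# hypothetical client setup matching --auth-key xxxxxxx,yyyyyyy
export AWS_ACCESS_KEY_ID=xxxxxxx
export AWS_SECRET_ACCESS_KEY=yyyyyyy
export AWS_DEFAULT_REGION=us-east-1
aws s3 ls --endpoint-url=http://localhost:8080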

Uploading a small (40MB) file works fine:

# aws s3 cp ./smallfile s3://uploads/ --endpoint-url=http://localhost:8080
2024/01/24 19:03:33 DEBUG : serve s3: CREATE OBJECT: uploads smallfile
2024/01/24 19:03:33 DEBUG : uploads/smallfile: OpenFile: flags=O_RDWR|O_CREATE|O_TRUNC, perm=-rw-rw-rw-
2024/01/24 19:03:33 DEBUG : uploads/smallfile: Open: flags=O_RDWR|O_CREATE|O_TRUNC
2024/01/24 19:03:33 DEBUG : uploads: Added virtual directory entry vAddFile: "smallfile"
2024/01/24 19:03:33 DEBUG : uploads/smallfile: >Open: fd=uploads/smallfile (w), err=<nil>
2024/01/24 19:03:33 DEBUG : uploads/smallfile: >OpenFile: fd=uploads/smallfile (w), err=<nil>
2024/01/24 19:03:33 DEBUG : uploads: Added virtual directory entry vAddFile: "smallfile"
2024/01/24 19:03:33 DEBUG : uploads/smallfile: md5 = 62e30d71405e813abeb11e39e4ab285a OK
2024/01/24 19:03:33 DEBUG : uploads/smallfile: Size and md5 of src and dst objects identical
2024/01/24 19:03:33 DEBUG : uploads: Added virtual directory entry vAddFile: "smallfile"
upload: ./smallfile to s3://uploads/smallfile

Uploading a 500MB file fails:

# aws s3 cp ./medfile s3://uploads/ --endpoint-url=http://localhost:8080
2024/01/24 19:04:49 INFO  : serve s3: Access Denied: 127.0.0.1:42064 => /uploads/medfile?uploads
upload failed: ./medfile to s3://uploads/medfile An error occurred () when calling the CreateMultipartUpload operation:

I know the docs say:

Multipart server side copies do not work (see #7454). These take a very long time and eventually fail. The default threshold for multipart server side copies is 5G which is the maximum it can be, so files above this size will fail to be server side copied.

but this is a client -> server copy. If I remove the --auth-key setting from the rclone serve call then everything works fine.

# aws s3 cp ./medfile s3://uploads/ --endpoint-url=http://localhost:8080
2024/01/24 19:27:00 DEBUG : serve s3: initiate multipart upload uploads medfile
2024/01/24 19:27:00 DEBUG : serve s3: put multipart upload uploads medfile 1
2024/01/24 19:27:00 DEBUG : serve s3: put multipart upload uploads medfile 1
....
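As a client-side stopgap while auth is enabled, raising the aws CLI multipart threshold above the file size should make cp send a single PutObject instead of starting a multipart upload. A sketch, assuming the aws CLI s3 configuration keys and a file under 1 GB:

# raise the threshold so the 500MB cp is sent as one PutObject
aws configure set default.s3.multipart_threshold 1GB
aws s3 cp ./medfile s3://uploads/ --endpoint-url=http://localhost:8080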

I can replicate this problem.

It looks like this URL is failing:

2024/01/25 12:17:59 INFO  : serve s3: Access Denied: 127.0.0.1:54708 => /test/1G?uploads

That looks like a ListMultipartUploads request.

I can see that listing multipart uploads does work when I try it with rclone while a multipart upload is in progress:

$ rclone backend list-multipart-uploads serves3:test2/
{
	"test2": [
		{
			"ChecksumAlgorithm": null,
			"Initiated": "2024-01-25T12:23:19.185Z",
			"Initiator": null,
			"Key": "1G",
			"Owner": null,
			"StorageClass": "STANDARD",
			"UploadId": "1"
		}
	]
}

However, if there are no uploads in progress it gives an error, which I'm not sure is correct:

$ rclone backend list-multipart-uploads serves3:test/
2024/01/25 12:24:11 Failed to backend: command "list-multipart-uploads" failed: list multipart uploads bucket "test" key "": NoSuchUpload: NoSuchUpload
	status code: 404, request id: 6C3C85901C9FA2BA, host id: NkMzQzg1OTAxQzlGQTJCQTZDM0M4NTkwMUM5RkEyQkE2QzNDODU5MDFDOUZBMkJBNkMzQzg1OTAxQzlGQTJCQQ==

This is what AWS does when there are no uploads in progress.

$ rclone backend list-multipart-uploads s3:rclone-dst
{
	"rclone-dst": []
}

So I think that is a bug, but not the one you are seeing.

I think this is caused by a bug in the aws s3 command, which is querying /test/1G?uploads when it should be querying /test?uploads.

The docs do not seem to allow that form of URL:

Request Syntax

GET /?uploads&delimiter=Delimiter&encoding-type=EncodingType&key-marker=KeyMarker&max-uploads=MaxUploads&prefix=Prefix&upload-id-marker=UploadIdMarker HTTP/1.1
Host: Bucket.s3.amazonaws.com
x-amz-expected-bucket-owner: ExpectedBucketOwner
x-amz-request-payer: RequestPayer

This might be because it is using the old-fashioned path-style access.
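If path-style addressing is the trigger, switching the aws CLI to virtual-hosted style would be a quick check. A sketch, assuming the s3.addressing_style option; note that virtual-hosted requests need the bucket to resolve as part of the hostname, so against a localhost endpoint this is only a diagnostic:

# ask the aws CLI to build virtual-hosted style requests instead of path-style
aws configure set default.s3.addressing_style virtual
aws s3 cp ./medfile s3://uploads/ --endpoint-url=http://localhost:8080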

We could add a workaround for this in rclone serve s3, though I think this would need a fix in Mikubill/gofakes3 (the simple fake AWS S3 object storage rclone uses).

Thanks for replicating. I agree it looks like we're dealing with different issues. I tested with s3cmd and it gave a bit more detailed info:

# s3cmd put ./bigfile s3://uploads/
2024/01/25 18:50:24 INFO  : serve s3: Access Denied: 127.0.0.1:56682 => /uploads/?location
ERROR: Error parsing xml: Malformed error XML returned from remote server..  ErrorXML: b'<?xml version="1.0" encoding="UTF-8"?>\n<errorResponse><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message></errorResponse>'

The signature calculation error likely explains why I can upload large files with auth disabled.
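To narrow down whether this is specific to the v4 signing path, one more diagnostic I could try is forcing s3cmd back to v2 signatures. A sketch, assuming the --signature-v2 flag (whether the served endpoint accepts v2 signatures at all is a separate question):

# retry the same upload with v2 request signing as a diagnostic
s3cmd --signature-v2 put ./bigfile s3://uploads/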

Would you like me to submit an issue with what I'm seeing?

I think that is probably a good idea - thank you 🙂
