Curious why v1.46 started using S3 signed urls for PUT

What is the problem you are having with rclone?

Hi, there is no bug in rclone - I'm just curious why v1.46 seems to have introduced signed URLs for PUTs instead of the regular "Authorization: AWS*" header auth.

What is your rclone version (output from rclone version)

(same behavior with latest 1.51.1)
bash-4.2# rclone-v1.46 --version
rclone v1.46
- os/arch: linux/amd64
- go version: go1.11.5

Which OS you are using and how many bits (eg Windows 7, 64 bit)

linux centos 7.7

Which cloud storage system are you using? (eg Google Drive)

S3-compatible Caringo Swarm

The command you were trying to run (eg rclone copy /tmp remote:tmp)

The command is correct; I'm just curious if anyone remembers why v1.46 (and later) uses signed S3 PUT URLs - at least I think that's what I'm seeing. I'm not seeing a related change here:

rclone-v1.46 -vv --dump headers copy empty caringo:mybucket/empty-v1.46

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

bash-4.2# rclone-v1.46 -vv --dump headers copy empty caringo:mybucket/empty-v1.46
2020/04/26 00:16:13 DEBUG : rclone: Version "v1.46" starting with parameters ["rclone-v1.46" "-vv" "--dump" "headers" "copy" "empty" "caringo:mybucket/empty-v1.46"]
2020/04/26 00:16:13 DEBUG : Using config file from "/root/.rclone.conf"
...
2020/04/26 00:16:13 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/04/26 00:16:13 DEBUG : HTTP REQUEST (req 0xc000142300)
2020/04/26 00:16:13 DEBUG : PUT /mybucket/empty-v1.46/empty?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=3727cea2c7eacb6a335b7a56e047be19%2F20200426%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200426T001613Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-mtime&X-Amz-Signature=64efbc8b5c1b17426b1f0c16891879ea6707dc0037869ed8d521fd846f7ef150 HTTP/1.1
Host: backup69:8085
User-Agent: rclone/v1.46
Content-Length: 0
content-md5: 1B2M2Y8AsgTpgAmY7PhCfg==
content-type: application/octet-stream
x-amz-acl: private
x-amz-meta-mtime: 1587860137.587863
Accept-Encoding: gzip

2020/04/26 00:16:13 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/04/26 00:16:13 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/04/26 00:16:13 DEBUG : HTTP RESPONSE (req 0xc000142300)
2020/04/26 00:16:13 DEBUG : HTTP/1.1 403 Forbidden
Content-Length: 272
Content-Type: application/xml;charset=utf-8
Date: Sun, 26 Apr 2020 00:16:13 GMT
X-Amz-Request-Id: 788BE106794BA3B8

2020/04/26 00:16:13 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/04/26 00:16:13 ERROR : empty: Failed to copy: s3 upload: 403 Forbidden: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Returning 403 Access Denied over missing x-amz-date and data headers: ldap caringoadmin@</Message><Resource>/mybucket/empty-v1.46/empty</Resource><RequestId>788BE106794BA3B8</RequestId></Error>

The older v1.45 seemed to use regular header-based S3 auth:

2020/04/26 00:16:29 DEBUG : rclone: Version "v1.45" starting with parameters ["rclone-v1.45" "-vv" "--dump" "headers" "copy" "empty" "caringo:mybucket/empty-v1.45"]
2020/04/26 00:16:29 DEBUG : Using config file from "/root/.rclone.conf"
...
2020/04/26 00:16:29 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/04/26 00:16:29 DEBUG : HTTP REQUEST (req 0xc0003e5d00)
2020/04/26 00:16:29 DEBUG : PUT /mybucket/empty-v1.45/empty HTTP/1.1
Host: backup69:8085
User-Agent: rclone/v1.45
Content-Length: 0
Authorization: XXXX
Content-Md5: 1B2M2Y8AsgTpgAmY7PhCfg==
Content-Type: application/octet-stream
X-Amz-Acl: 
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20200426T001629Z
X-Amz-Meta-Mtime: 1587860137.587863
Accept-Encoding: gzip

2020/04/26 00:16:29 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/04/26 00:16:29 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/04/26 00:16:29 DEBUG : HTTP RESPONSE (req 0xc0003e5d00)
2020/04/26 00:16:29 DEBUG : HTTP/1.1 200 OK
Content-Length: 0
Date: Sun, 26 Apr 2020 00:16:29 GMT
Etag: "d41d8cd98f00b204e9800998ecf8427e"
X-Amz-Request-Id: 29591E700336232C

2020/04/26 00:16:29 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

If I had to guess, I'd say it was probably a change in

https://aws.amazon.com/sdk-for-go/

as that's what rclone uses.

It is a good question :slight_smile:

The actual change was done here

And here is the related issue

Before this change rclone used the s3manager in the s3 SDK to upload all files regardless of size. However this had the disadvantage that even small files were uploaded with multipart upload, which means they don't have an MD5SUM.

Unfortunately I couldn't find a way of doing a single-part PUT upload in the AWS SDK. At the point of upload rclone has an io.Reader, but the upload code requires an io.ReadSeeker. (docs)

I think the seeking is used to read the object fully to calculate the hash and length; then it is rewound and uploaded. This is obviously undesirable for rclone, as it would have to read the object twice, and the data might be coming over the network...

So @Animosity022 was correct: it is because the SDK isn't quite flexible enough.

Are the signed URLs causing a problem for you?

Thank you! Those charts in https://github.com/rclone/rclone/issues/2772 on throughput at various sizes and concurrency levels were very interesting. Got it: the S3 Go SDK has PUT limitations that required the signed-URL PUT workaround; that makes sense.

This only came up because an S3-compatible system had a regression with signed-URL PUTs, which caused problems with some versions of rclone. Sadly we weren't testing against the latest rclone, but we are now, and "-vv --dump headers" made this easy to figure out.

Ah, interesting!

:slight_smile: --dump headers rocks for debugging!

Do you run the rclone integration tests against the backend? I can show you how to do that. They are a good workout for an S3 backend - I've found several bugs in Ceph / Minio / Wasabi with them!

I had no idea - I never thought to check for integration tests.
Thanks, this looks great; I will give it a try soon:

Probably the easiest way of running the tests rclone runs is to do (from the root of the rclone source)

go install ./...
test_all -backends YourRemoteName:

This will chunter away for a while, then open an HTML page with the test results, which looks like this (but with less stuff on!)

https://pub.rclone.org/integration-tests/current/

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.