Unable to use the default server-side encryption key for an S3 bucket without duplicating the key ARN in the rclone configuration

What is the problem you are having with rclone?

I'm unable to use the default server-side encryption key for an S3 bucket without duplicating the key ARN in the rclone configuration.

We have an S3 bucket with default encryption set to use a customer-managed KMS key. If I copy files to the bucket using rclone without any rclone S3 encryption settings, the uploads fail with an error: corrupted on transfer: md5 hash differ "cf824ac185fd54fca87c25bd87860b09" vs "7b4516be31f363cd256a156009c98185".

Setting the following encryption settings works, but this requires duplicating the bucket's default key ARN:

server_side_encryption=aws:kms
sse_kms_key_id=<KEY-ARN> # This is the same as the bucket's default encryption key

If I just set server_side_encryption=aws:kms, the upload succeeds, but it uses the AWS-managed default S3 KMS encryption key instead of the customer key.

Is it possible to tell rclone that the bucket has default encryption, and to use the bucket's default encryption key/settings?

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.1

  • os/version: darwin 12.5 (64 bit)
  • os/kernel: 21.6.0 (arm64)
  • os/type: darwin
  • os/arch: arm64
  • go/version: go1.18.5
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

  rclone --config "$config" \
    copyto -v \
    dummydata-1 \
    "s3:$BUCKET/$PREFIX/$config/dummydata-1"

The rclone config contents with secrets removed.

[s3]
type=s3
provider=AWS
# Required for EC2 IAM roles
env_auth=true
# Must set the region of the bucket
region=us-west-2
# Skip bucket check
no_check_bucket=true

server_side_encryption=aws:kms
sse_kms_key_id=<KEY-ARN>

A log from the command with the -vv flag

Without server_side_encryption and sse_kms_key_id:

2022/10/04 10:06:20 Failed to copyto: corrupted on transfer: md5 hash differ "cf824ac185fd54fca87c25bd87860b09" vs "7b4516be31f363cd256a156009c98185"

One other thing I tried was setting upload_cutoff=0 to force multipart uploads. My understanding is that with multipart uploads rclone does not use the Etag; however, I get errors about the Etag differing, and the upload succeeds after some retries.

[s3]
type=s3
provider=AWS
# Required for EC2 IAM roles
env_auth=true
# Must set the region of the bucket
region=us-west-2
# Skip bucket check
no_check_bucket=true

# Force multi-part uploads for all uploads
upload_cutoff=0

2022/10/04 10:55:17 ERROR : dummydata-1: Failed to copy: multipart upload corrupted: Etag differ: expecting 50c4ce155a832647d79d8c86752f00af-14 but got b75bc40eac20f7d54c7b6e024da86c16-14
2022/10/04 10:55:17 ERROR : Attempt 1/3 failed with 1 errors and: multipart upload corrupted: Etag differ: expecting 50c4ce155a832647d79d8c86752f00af-14 but got b75bc40eac20f7d54c7b6e024da86c16-14
2022/10/04 10:55:17 ERROR : Attempt 2/3 succeeded
2022/10/04 10:55:17 INFO  : 
Transferred:   	       70 MiB / 70 MiB, 100%, 2.337 MiB/s, ETA 0s
Checks:                 1 / 1, 100%
Elapsed time:        28.3s

The question that needs answering is: how do you do this in the AWS S3 SDK? If we can figure that out, then we can figure out how to do it in rclone.

I spent some time trying to discover that, but I haven't succeeded yet - maybe you could have a look: s3 - Amazon Web Services - Go SDK
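
One workaround at the SDK level might be to read the bucket's default encryption rule up front and reuse its key for the upload. A sketch only (untested, aws-sdk-go v1, placeholder bucket name; it also needs the s3:GetEncryptionConfiguration permission):

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
    svc := s3.New(sess)

    // Ask S3 for the bucket's default encryption configuration.
    out, err := svc.GetBucketEncryption(&s3.GetBucketEncryptionInput{
        Bucket: aws.String("my-bucket"), // placeholder
    })
    if err != nil {
        panic(err)
    }

    // A bucket with default SSE-KMS has a rule naming the key to reuse.
    for _, rule := range out.ServerSideEncryptionConfiguration.Rules {
        def := rule.ApplyServerSideEncryptionByDefault
        fmt.Println("algorithm:", aws.StringValue(def.SSEAlgorithm))   // e.g. "aws:kms"
        fmt.Println("KMS key:  ", aws.StringValue(def.KMSMasterKeyID)) // the default key ARN
    }
}

If that worked, rclone could fill in sse_kms_key_id automatically instead of the user duplicating the ARN, though it would add an extra API call and an extra permission.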

The logical place to do this would be to set something like sse_kms_key_id=default

Reading this makes me think it isn't possible, though:

To require that a particular AWS KMS key be used to encrypt the objects in a bucket, you can use the s3:x-amz-server-side-encryption-aws-kms-key-id condition key. To specify the KMS key, you must use a key Amazon Resource Name (ARN) that is in the "arn:aws:kms:region:acct-id:key/key-id" format.

Note

When you upload an object, you can specify the KMS key using the x-amz-server-side-encryption-aws-kms-key-id header. If the header is not present in the request, Amazon S3 assumes that you want to use the AWS managed key. Regardless, the AWS KMS key ID that Amazon S3 uses for object encryption must match the AWS KMS key ID in the policy, otherwise Amazon S3 denies the request.
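
For what it's worth, that header corresponds to the SSEKMSKeyId field on PutObjectInput in the Go SDK (and server_side_encryption to ServerSideEncryption); leaving the key unset would explain why aws:kms on its own fell back to the AWS managed key. A sketch (aws-sdk-go v1, placeholder names):

package main

import (
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    svc := s3.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

    _, err := svc.PutObject(&s3.PutObjectInput{
        Bucket: aws.String("my-bucket"),          // placeholder
        Key:    aws.String("prefix/dummydata-1"), // placeholder
        Body:   strings.NewReader("hello"),
        // Sent as the x-amz-server-side-encryption header.
        ServerSideEncryption: aws.String("aws:kms"),
        // Sent as the x-amz-server-side-encryption-aws-kms-key-id header.
        // Omit this field and S3 assumes the AWS managed aws/s3 key.
        SSEKMSKeyId: aws.String("<KEY-ARN>"), // placeholder, as in the config above
    })
    if err != nil {
        panic(err)
    }
}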

Rclone (relatively recently) started checking the Etag of multipart uploads too.

Would an rclone option that causes rclone to ignore the Etag and just use the custom X-Amz-Meta-Md5chksum metadata do the right thing for buckets with default encryption? With that option, rclone would not specify any encryption settings when uploading files.
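
For illustration, roughly what that check would look like at the SDK level - a sketch (aws-sdk-go v1, placeholder bucket/key names; the "Md5chksum" map key casing is an assumption about how the v1 SDK returns the X-Amz-Meta-Md5chksum header):

package main

import (
    "encoding/base64"
    "encoding/hex"
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    svc := s3.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

    head, err := svc.HeadObject(&s3.HeadObjectInput{
        Bucket: aws.String("my-bucket"),          // placeholder
        Key:    aws.String("prefix/dummydata-1"), // placeholder
    })
    if err != nil {
        panic(err)
    }

    // The Etag of an SSE-KMS (or multipart) object is not the MD5 of the data.
    fmt.Println("Etag:", aws.StringValue(head.ETag))

    // rclone stores a base64 MD5 in the md5chksum metadata for uploads whose
    // Etag is not usable as a checksum.
    if b64, ok := head.Metadata["Md5chksum"]; ok {
        raw, _ := base64.StdEncoding.DecodeString(aws.StringValue(b64))
        fmt.Println("md5chksum:", hex.EncodeToString(raw))
    }
}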

Hmm, looking at the code, I think setting sse_customer_algorithm = AES256 should have that effect (don't set aws:kms). Reading more of the code, I think this might be the config you are looking for.

Thanks for looking into this. I tried setting sse_customer_algorithm=AES256, but I get a 400 BadRequest error when writing to the bucket.

$ rclone copyto -v dummydata-1 "s3:$BUCKET/$PREFIX/dummydata-1"
...
2022/10/06 07:20:43 ERROR : Attempt 1/3 failed with 1 errors and: BadRequest: Bad Request
	status code: 400, request id: 

rclone.conf:

[s3]
type=s3
provider=AWS
env_auth=true
region=us-west-2
no_check_bucket=true
sse_customer_algorithm=AES256

rclone version:

rclone v1.59.2
- os/version: darwin 12.5 (64 bit)
- os/kernel: 21.6.0 (arm64)
- os/type: darwin
- os/arch: arm64
- go/version: go1.18.6
- go/linking: dynamic
- go/tags: cmount

Can you do the above but add -vv --dump bodies --retries 1 so we can see a bit more info? Better make your test file ASCII-readable as it will appear in the log; something like a file with "hello" in it should do fine.

Thanks

I've run the test. Is there a way to send you the verbose log directly?

You can PM him, or email works too - just link to the forum post.

I've taken a look at the dump - thank you. Unfortunately it isn't particularly helpful :frowning:

Have you managed to upload objects using the aws s3 tool like this? If so, you can run it with --debug to dump its headers, which would help to see what we are missing.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.