Some transfers to S3 fail unless `--s3-server-side-encryption aws:kms` is supplied

What is the problem you are having with rclone?

Same problem as described here.

The same solution works, but it has some problems in my case (read on).

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.2
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-213-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.24.2
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy --s3-no-check-bucket  test.txt  s3:/my-bucket/

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[s3]
type = s3
env_auth = true
region = us-west-2
provider = AWS
acl = private
location_constraint = us-west-2
### Double check the config for sensitive info before posting publicly

A log from the command that you were trying to run with the -vv flag

2025/05/02 17:52:32 DEBUG : rclone: Version "v1.69.2" starting with parameters ["rclone" "-vv" "copy" "--s3-no-check-bucket" "test.txt" "s3:/my-bucket/"]
2025/05/02 17:52:32 DEBUG : Creating backend with remote "test.txt"
2025/05/02 17:52:32 DEBUG : Using config file from "/home/dtenenba/.config/rclone/rclone.conf"
2025/05/02 17:52:32 DEBUG : fs cache: renaming child cache item "test.txt" to be canonical for parent "/home/dtenenba"
2025/05/02 17:52:32 DEBUG : Creating backend with remote "s3:/my-bucket/"
2025/05/02 17:52:32 DEBUG : s3: detected overridden config - adding "{Dn7qA}" suffix to name
2025/05/02 17:52:32 DEBUG : fs cache: renaming cache item "s3:/my-bucket/" to be canonical "s3{Dn7qA}:my-bucket"
2025/05/02 17:52:32 DEBUG : test.txt: Need to transfer - File not found at Destination
2025/05/02 17:52:32 DEBUG : test.txt: md5 = e19c1283c925b3206685ff522acfe3e6 (Local file system at /home/dtenenba)
2025/05/02 17:52:32 DEBUG : test.txt: md5 = c0a79ef5f7f4521da78c0482d120ec7b (S3 bucket my-bucket)
2025/05/02 17:52:32 ERROR : test.txt: corrupted on transfer: md5 hashes differ src(Local file system at /home/dtenenba) "e19c1283c925b3206685ff522acfe3e6" vs dst(S3 bucket my-bucket) "c0a79ef5f7f4521da78c0482d120ec7b"
2025/05/02 17:52:32 INFO  : test.txt: Removing failed copy
2025/05/02 17:52:32 ERROR : Attempt 1/3 failed with 1 errors and: corrupted on transfer: md5 hashes differ src(Local file system at /home/dtenenba) "e19c1283c925b3206685ff522acfe3e6" vs dst(S3 bucket my-bucket) "c0a79ef5f7f4521da78c0482d120ec7b"
2025/05/02 17:52:32 DEBUG : test.txt: Need to transfer - File not found at Destination
2025/05/02 17:52:32 DEBUG : test.txt: md5 = e19c1283c925b3206685ff522acfe3e6 (Local file system at /home/dtenenba)
2025/05/02 17:52:32 DEBUG : test.txt: md5 = 7618e7c55d86b4382fd31d3db8489781 (S3 bucket my-bucket)
2025/05/02 17:52:32 ERROR : test.txt: corrupted on transfer: md5 hashes differ src(Local file system at /home/dtenenba) "e19c1283c925b3206685ff522acfe3e6" vs dst(S3 bucket my-bucket) "7618e7c55d86b4382fd31d3db8489781"
2025/05/02 17:52:32 INFO  : test.txt: Removing failed copy
2025/05/02 17:52:32 ERROR : Attempt 2/3 failed with 1 errors and: corrupted on transfer: md5 hashes differ src(Local file system at /home/dtenenba) "e19c1283c925b3206685ff522acfe3e6" vs dst(S3 bucket my-bucket) "7618e7c55d86b4382fd31d3db8489781"
2025/05/02 17:52:32 DEBUG : test.txt: Need to transfer - File not found at Destination
2025/05/02 17:52:32 DEBUG : test.txt: md5 = e19c1283c925b3206685ff522acfe3e6 (Local file system at /home/dtenenba)
2025/05/02 17:52:32 DEBUG : test.txt: md5 = 1bfa954c21e8f72467687acbaac4d8be (S3 bucket my-bucket)
2025/05/02 17:52:32 ERROR : test.txt: corrupted on transfer: md5 hashes differ src(Local file system at /home/dtenenba) "e19c1283c925b3206685ff522acfe3e6" vs dst(S3 bucket my-bucket) "1bfa954c21e8f72467687acbaac4d8be"
2025/05/02 17:52:32 INFO  : test.txt: Removing failed copy
2025/05/02 17:52:32 ERROR : Attempt 3/3 failed with 1 errors and: corrupted on transfer: md5 hashes differ src(Local file system at /home/dtenenba) "e19c1283c925b3206685ff522acfe3e6" vs dst(S3 bucket my-bucket) "1bfa954c21e8f72467687acbaac4d8be"
2025/05/02 17:52:32 INFO  :
Transferred:   	         45 B / 45 B, 100%, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         0.3s

2025/05/02 17:52:32 DEBUG : 6 go routines active
2025/05/02 17:52:32 NOTICE: Failed to copy: corrupted on transfer: md5 hashes differ src(Local file system at /home/dtenenba) "e19c1283c925b3206685ff522acfe3e6" vs dst(S3 bucket my-bucket) "1bfa954c21e8f72467687acbaac4d8be"

The problem

I know this can be solved by adding --s3-server-side-encryption aws:kms; however, we are using rclone in an automated context where we do not know in advance whether the bucket we are copying to is encrypted.
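For reference, this is the variant that does work for us when we already know the bucket is KMS-encrypted (same bucket and file as above):

# same command as above, plus the SSE flag we would need to know in advance to add
rclone copy --s3-no-check-bucket --s3-server-side-encryption aws:kms test.txt s3:/my-bucket/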

I also know that adding either of these will cause the transfer to work:

--ignore-checksum
--s3-upload-cutoff=0

Obviously the first is bad, and I am not sure of the implications of the second.
Is there anything I need to be aware of if I use that flag for all transfers to S3, regardless of the size of the file(s) being transferred?

Also, why does this not affect the AWS CLI (which does not need to know whether a target bucket is encrypted), but it does affect rclone?

Our automated program could perhaps query the bucket metadata to find out whether it is encrypted, but the user may not have permission to do that (many users can't create or list buckets either, which is why I have --s3-no-check-bucket in the command).
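(If the user did have permission, I imagine we could check with something like the call below, which I believe needs the s3:GetEncryptionConfiguration permission, but as I said, many of our users would just get an access-denied error.)

# hypothetical check; requires the s3:GetEncryptionConfiguration permission on the bucket
aws s3api get-bucket-encryption --bucket my-bucket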

TIA

really, it should not matter for file transfers. --s3-upload-cutoff=0 will force all file transfers to be multipart.
by default, any file over 200MiB is already transferred as multipart.
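for example, with the defaults, a hypothetical file like this would already go multipart; with the flag, even a tiny file does:

# hypothetical file names, just to illustrate the cutoff
rclone copy 1GiB.file s3:/my-bucket/                        # over 200MiB, multipart by default
rclone copy --s3-upload-cutoff 0 test.txt s3:/my-bucket/    # multipart regardless of size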

The docs are a little confusing. They say:

Any files larger than this will be uploaded in chunks of chunk_size.

I'm not clear what is meant by chunk_size. Does it mean the value of --s3-upload-cutoff? In that case we would be uploading chunks of 0 bytes, which makes no sense. I assume it's referring to the value of --s3-chunk-size?
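In other words, my understanding (correct me if I'm wrong) is that the cutoff only decides whether an upload is multipart at all, and the part size comes from --s3-chunk-size (which I believe defaults to 5Mi), e.g.:

# my assumption: cutoff=0 forces multipart, and the parts are sized by --s3-chunk-size
rclone copy --s3-upload-cutoff 0 --s3-chunk-size 5Mi test.txt s3:/my-bucket/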

Thanks...

yeah, it is very confusing, as s3 is perhaps the most feature-rich and complex backend, and the required reading is spread across multiple webpages and multiple flags:
https://rclone.org/s3/#multipart-uploads
https://rclone.org/s3/#multipart-uploads-1

on the other hand, how much does the exact method used to upload a file really matter?
in most cases, rclone is pretty good at saturating internet connections.


if you really want to know exactly what rclone is doing in any specific case, read the debug log:
rclone copy 1GiB.file remote:bucket -vv --retries=1 ...
and for a deeper look at the api calls, add --dump=headers

Thanks, it seems that --s3-upload-cutoff=0 is a good solution, BUT

There's still (kind of) a problem. Adding that flag causes the upload to ultimately succeed, but not before printing some scary-looking messages to the screen.

Because we have made a UI for rclone that is used by many users who are not versed in its intricacies, they will see these messages and get concerned.

Here is what it looks like to upload a file (I've taken out the -vv to better match what the user will see):

> rclone  copy --s3-upload-cutoff=0   --s3-no-check-bucket  test.txt  s3:/my-bucket/
2025/05/03 12:39:08 ERROR : test.txt: Failed to copy: multipart upload corrupted: Etag differ: expecting 7689e7cb9a872871f466f8d6c999906f-1 but got dc932625dad5921e20fc9dc167df3d9e-1
2025/05/03 12:39:08 ERROR : Attempt 1/3 failed with 1 errors and: multipart upload corrupted: Etag differ: expecting 7689e7cb9a872871f466f8d6c999906f-1 but got dc932625dad5921e20fc9dc167df3d9e-1
2025/05/03 12:39:08 ERROR : Attempt 2/3 succeeded

You and I know that the upload went fine, but there are scary words like ERROR, failed, and corrupted. We are going to get people contacting us wondering if everything is OK. (If it matters, the uploaded file was 15 bytes; maybe results would be different with larger files.)

Also, it seems to indicate that what was uploaded somehow does not match the original source file, but that is not true:

> rclone md5sum test.txt
e19c1283c925b3206685ff522acfe3e6  test.txt
> rclone md5sum s3:/my-bucket/test.txt
e19c1283c925b3206685ff522acfe3e6  test.txt

Again, I could add --ignore-checksum, but that's not great practice. Plus, our UI also wraps the rclone md5sum functionality shown above so the user can make sure their transfer was not corrupted, and when you add --ignore-checksum the uploaded object ends up with no md5sum, breaking that:

> rclone md5sum s3:/my-bucket/test.txt
>

I know that passing the ARN of the KMS key would solve this, but for the moment the client does not have that information. Supporting it would be a substantial change to our UI app, and right now we just need to unblock the users with KMS-encrypted buckets.
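(For completeness, I believe the flag for this would be --s3-sse-kms-key-id with the key's ARN, something like the line below with a made-up ARN, but as I said, the client doesn't have that ARN today.)

# made-up ARN, for illustration only
rclone copy --s3-no-check-bucket --s3-server-side-encryption aws:kms --s3-sse-kms-key-id arn:aws:kms:us-west-2:123456789012:key/00000000-0000-0000-0000-000000000000 test.txt s3:/my-bucket/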

Long story short, can I somehow suppress the scary output that is produced when --s3-upload-cutoff=0 is used?