Corrupted on transfer: MD5 hash differ (AWS S3)

Problem:
When I copy to an S3 remote, it fails with this error:

2020/09/03 17:20:51 ERROR : test2.txt: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "d57f66dee17fef1d018429a223c859c6"

If I drag and drop the file to a mount, Windows says "Copying 1 Item" and completes with no errors, but the file does not show up in S3 (due to the hash error, presumably). Well, it shows up in S3 for a split second, then is removed.

This remote was working before, but now I can't copy anything to it. Even an empty file will not transfer.

The files are not open or being edited! I'm 100% sure.

Rclone Version:

rclone v1.51.0
- os/arch: windows/amd64
- go version: go1.13.7

Remote Storage Type = S3
OS = Windows 10 64 bit

Command:

rclone copy C:\rclone\test2.txt my-remote:my-bucket\code

config:

[my-remote]
type = s3
provider = AWS
env_auth = true
region = us-east-2
access_key_id = 
secret_access_key = 

Logs:

2020/09/03 17:47:47 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "copy" "C:\\rclone\\test2.txt" "my-remote:my-bucket\\code" "-vv"]
2020/09/03 17:47:47 DEBUG : Using RCLONE_CONFIG_PASS password.
2020/09/03 17:47:47 DEBUG : Using config file from "c:\\rclone\\rclone.conf"
2020/09/03 17:47:48 DEBUG : test2.txt: Need to transfer - File not found at Destination
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = f127881627718748c34ec1ce7888d6f9 (Local file system at //?/C:/rclone)
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = a0cf909c96db861f0897d378a6969ea3 (S3 bucket my-bucket path code)
2020/09/03 17:47:48 ERROR : test2.txt: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "a0cf909c96db861f0897d378a6969ea3"
2020/09/03 17:47:48 INFO  : test2.txt: Removing failed copy
2020/09/03 17:47:48 ERROR : Attempt 1/3 failed with 1 errors and: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "a0cf909c96db861f0897d378a6969ea3"
2020/09/03 17:47:48 DEBUG : test2.txt: Need to transfer - File not found at Destination
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = f127881627718748c34ec1ce7888d6f9 (Local file system at //?/C:/rclone)
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = c8e0c720281caea9520a59df13057176 (S3 bucket my-bucket path code)
2020/09/03 17:47:48 ERROR : test2.txt: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "c8e0c720281caea9520a59df13057176"
2020/09/03 17:47:48 INFO  : test2.txt: Removing failed copy
2020/09/03 17:47:48 ERROR : Attempt 2/3 failed with 1 errors and: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "c8e0c720281caea9520a59df13057176"
2020/09/03 17:47:48 DEBUG : test2.txt: Need to transfer - File not found at Destination
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = f127881627718748c34ec1ce7888d6f9 (Local file system at //?/C:/rclone)
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = 507929946992dd22bd7583d7b536d966 (S3 bucket my-bucket path code)
2020/09/03 17:47:49 ERROR : test2.txt: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "507929946992dd22bd7583d7b536d966"
2020/09/03 17:47:49 INFO  : test2.txt: Removing failed copy
2020/09/03 17:47:49 ERROR : Attempt 3/3 failed with 1 errors and: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "507929946992dd22bd7583d7b536d966"
2020/09/03 17:47:49 Failed to copy: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "507929946992dd22bd7583d7b536d966"

hello and welcome to the forum,

the latest stable is v1.53.0, and can be found here: https://rclone.org/downloads/
can you update and test again?

Hi,

Thank you. I am now running the latest version and it still has the issue:

rclone v1.53.0
- os/arch: windows/amd64
- go version: go1.15

Error message and logs are the same.

  • is the s3 bucket using a KMS key or some kind of server-side encryption?

Yes, actually. I think you're on the right track! I thought maybe that was handled automatically, because it was working before. (I believe my other bucket that is working is also encrypted, but I can't confirm because I don't have console access on that one.)

I have added the ARN to the config file.

type = s3
provider = AWS
env_auth = true
region = us-east-2
access_key_id = 
secret_access_key = 
server_side_encryption = aws:kms
sse_kms_key_id = arn:aws:kms:us-east-2:222secretstuff13:key/12303300-somerandomstuffsupersecret

Same error though. Darn, I thought that would fix it!

thoughts?

check this out.
https://github.com/rclone/rclone/issues/1824

ok, thank you. If I do this:

rclone copy C:\rclone\test2.txt my-remote:my-bucket\code --ignore-checksum

It does work. However, how do I get around the issue for a mount? I want to be able to mount this bucket and drag/drop files to it.
Will this work?

rclone mount my-remote: z: --ignore-checksum

Will the setting persist?
I'm using nssm.exe to mount all of my buckets when my system boots, so drag-and-drop into the mount is really what I'm after.
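
If the flag does work on a mount, I assume I'd just add it to the service arguments, something like this (the service name and paths here are just illustrative of my setup, not tested):

nssm install rclone-mount "C:\rclone\rclone.exe" mount my-remote: z: --ignore-checksum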

Thanks!

try it

https://rclone.org/s3/#s3-server-side-encryption
https://rclone.org/s3/#key-management-system-kms
https://rclone.org/s3/#s3-sse-kms-key-id
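
fwiw, those config options also exist as command-line flags, so if you want to test without editing the config, something like this should be equivalent (key ARN elided; not tested):

rclone mount my-remote: z: --s3-server-side-encryption aws:kms --s3-sse-kms-key-id arn:aws:kms:...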

Ok, the --ignore-checksum flag works for mount as well.

So really we're just masking some bug, eh?

It doesn't seem like the missing KMS key was the issue. To prove I wasn't crazy when I said it was working before (with no KMS key configured), I removed the encryption settings from the config and it works fine... with only the --ignore-checksum flag added. The files are intact in the AWS GUI.

this is not a bug, as rclone is working as expected based on the documentation.

however,
perhaps setting --s3-upload-cutoff=0 is a workaround; can you test that?

https://github.com/rclone/rclone/issues/1824
"We noticed that once we enabled default encryption on s3 buckets, small files failed to move with rclone. Once a file is large enough to transfer via multipart uploads, the problem goes away"

https://rclone.org/s3/#multipart-uploads
"rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff . This can be a maximum of 5GB and a minimum of 0 (ie always upload multipart files)."

Thank you, I appreciate the support.

My new config file is like this:

type = s3
provider = AWS
env_auth = true
region = us-east-2
upload_cutoff = 0

I can now copy files without using the --ignore-checksum flag. Yay!

I'm using the default KMS key for the bucket, so there is no need to specify in the config.
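
As an aside, if you ever want to confirm what a bucket's default encryption is, I believe the AWS CLI can report it:

aws s3api get-bucket-encryption --bucket my-bucket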

can you run your original command and post the output?

rclone copy C:\rclone\test2.txt my-remote:my-bucket\code -vv

C:\Users\me>rclone copy C:\rclone\test2.txt my-remote:my-bucket\code -vv
2020/09/03 21:09:38 DEBUG : rclone: Version "v1.53.0" starting with parameters ["rclone" "copy" "C:\\rclone\\test2.txt" "my-remote:my-bucket\\code" "-vv"]
2020/09/03 21:09:38 DEBUG : Creating backend with remote "C:\\rclone\\test2.txt"
2020/09/03 21:09:38 DEBUG : Using config file from "c:\\rclone\\rclone.conf"
2020/09/03 21:09:38 DEBUG : fs cache: adding new entry for parent of "C:\\rclone\\test2.txt", "//?/C:/rclone"
2020/09/03 21:09:38 DEBUG : Creating backend with remote "my-remote:my-bucket\\code"
2020/09/03 21:09:39 DEBUG : fs cache: renaming cache item "my-remote:my-bucket\\code" to be canonical "my-remote:my-bucket/code"
2020/09/03 21:09:39 DEBUG : test2.txt: Need to transfer - File not found at Destination
2020/09/03 21:09:39 DEBUG : test2.txt: multipart upload starting chunk 1 size 58 offset 0/58
2020/09/03 21:09:39 DEBUG : test2.txt: MD5 = f127881627718748c34ec1ce7888d6f9 OK
2020/09/03 21:09:39 INFO  : test2.txt: Copied (new)
2020/09/03 21:09:39 INFO  :
Transferred: 58 / 58 Bytes, 100%, 97 Bytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 1.1s

2020/09/03 21:09:39 DEBUG : 6 go routines active

This really needs a better fix. If the above worked, would that work for you?

Let me make sure I understand your question. Are you asking if upload_cutoff = 0 is an acceptable solution for me? Yes, I think that's fine. I'm more comfortable with that than disabling checksum verification!

can you try @ncw's fix and post the log, thanks

  • remove upload_cutoff = 0
  • add server_side_encryption = aws:kms

I don't think this will work yet; I was just asking whether it would be usable or not.

@YEM are there any idiot's guides on how to enable KMS for a bucket? Or maybe an AWS command line? I'll give it a go myself if I can work out how!

I usually do it in the Management Console / GUI.

  • Click a bucket name
  • Click the Properties tab
  • You'll see a card on the top row called "Default Encryption". Click it.
  • Huge radio button for AWS-KMS
  • There will always be one default KMS key, even if you have never created your own. It's funny: it is an AWS-managed "Customer Managed Key". :)

Using the AWS CLI, see this link: https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-encryption.html
They give an example of building the JSON, then sending it with the CLI.
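
The gist of their example is roughly this (bucket name is a placeholder, and the quoting is the Linux style from their docs; Windows needs different escaping):

aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]}'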

when you ran the commands before:

  • server-side encryption was enabled on the bucket?
    and
  • the rclone config was not using kms?

correct?

My bucket has been configured to use KMS encryption on the AWS side from the beginning, but there was no mention of it in the rclone config.

Even now I still have the same setup, the only change being upload_cutoff.

During our troubleshooting, I did temporarily add server_side_encryption = aws:kms and the key.

That didn't work, so I removed it. S3 applies the bucket's default encryption automatically anyway, taking it from the bucket config. It is the same when I use Python boto3 or the AWS CLI: you only need to specify the KMS key if you want to use one other than the bucket's default.
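
For example, with the AWS CLI both of these uploads end up KMS-encrypted; the first simply inherits the bucket default (names are placeholders, key ARN elided):

aws s3 cp test2.txt s3://my-bucket/code/test2.txt
aws s3 cp test2.txt s3://my-bucket/code/test2.txt --sse aws:kms --sse-kms-key-id arn:aws:kms:...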