YEM
September 3, 2020, 9:54pm
1
Problem:
When I copy to an S3 remote, it fails with this error:
2020/09/03 17:20:51 ERROR : test2.txt: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "d57f66dee17fef1d018429a223c859c6"
If I drag and drop it to a mount, Windows says "Copying 1 Item" and completes with no errors, but the file does not show up in S3 (due to the hash error, obviously). Well, it shows up for a split second in S3, then is removed.
This remote was working before, but now I can't copy anything to this remote. Even an empty file will not transfer.
The files are not open or being edited! I'm 100% sure.
Rclone Version:
rclone v1.51.0
- os/arch: windows/amd64
- go version: go1.13.7
Remote Storage Type = S3
OS = Windows 10 64 bit
Command:
rclone copy C:\rclone\test2.txt my-remote:my-bucket\code
Config:
[my-remote]
type = s3
provider = AWS
env_auth = true
region = us-east-2
access_key_id =
secret_access_key =
Logs:
2020/09/03 17:47:47 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "copy" "C:\\rclone\\test2.txt" "my-remote:my-bucket\\code" "-vv"]
2020/09/03 17:47:47 DEBUG : Using RCLONE_CONFIG_PASS password.
2020/09/03 17:47:47 DEBUG : Using config file from "c:\\rclone\\rclone.conf"
2020/09/03 17:47:48 DEBUG : test2.txt: Need to transfer - File not found at Destination
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = f127881627718748c34ec1ce7888d6f9 (Local file system at //?/C:/rclone)
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = a0cf909c96db861f0897d378a6969ea3 (S3 bucket my-bucket path code)
2020/09/03 17:47:48 ERROR : test2.txt: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "a0cf909c96db861f0897d378a6969ea3"
2020/09/03 17:47:48 INFO : test2.txt: Removing failed copy
2020/09/03 17:47:48 ERROR : Attempt 1/3 failed with 1 errors and: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "a0cf909c96db861f0897d378a6969ea3"
2020/09/03 17:47:48 DEBUG : test2.txt: Need to transfer - File not found at Destination
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = f127881627718748c34ec1ce7888d6f9 (Local file system at //?/C:/rclone)
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = c8e0c720281caea9520a59df13057176 (S3 bucket my-bucket path code)
2020/09/03 17:47:48 ERROR : test2.txt: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "c8e0c720281caea9520a59df13057176"
2020/09/03 17:47:48 INFO : test2.txt: Removing failed copy
2020/09/03 17:47:48 ERROR : Attempt 2/3 failed with 1 errors and: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "c8e0c720281caea9520a59df13057176"
2020/09/03 17:47:48 DEBUG : test2.txt: Need to transfer - File not found at Destination
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = f127881627718748c34ec1ce7888d6f9 (Local file system at //?/C:/rclone)
2020/09/03 17:47:48 DEBUG : test2.txt: MD5 = 507929946992dd22bd7583d7b536d966 (S3 bucket my-bucket path code)
2020/09/03 17:47:49 ERROR : test2.txt: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "507929946992dd22bd7583d7b536d966"
2020/09/03 17:47:49 INFO : test2.txt: Removing failed copy
2020/09/03 17:47:49 ERROR : Attempt 3/3 failed with 1 errors and: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "507929946992dd22bd7583d7b536d966"
2020/09/03 17:47:49 Failed to copy: corrupted on transfer: MD5 hash differ "f127881627718748c34ec1ce7888d6f9" vs "507929946992dd22bd7583d7b536d966"
asdffdsa
(jojothehumanmonkey)
September 3, 2020, 9:58pm
2
hello and welcome to the forum,
the latest stable is v1.53.0, and it can be found here: https://rclone.org/downloads/
can you update and test again?
YEM
September 3, 2020, 10:13pm
3
Hi,
Thank you. I am now running the latest version and it still has the issue:
rclone v1.53.0
- os/arch: windows/amd64
- go version: go1.15
Error message and logs are the same.
YEM
September 3, 2020, 10:42pm
5
Yes, actually. I think you're on the right track! I thought maybe that was handled automatically, because it was working before. (I'm pretty sure my other bucket that is working is also encrypted, but I'm not sure because I don't have console access on that one.)
I have added the ARN to the config file.
type = s3
provider = AWS
env_auth = true
region = us-east-2
access_key_id =
secret_access_key =
server_side_encryption = aws:kms
sse_kms_key_id = arn:aws:kms:us-east-2:222secretstuff13:key/12303300-somerandomstuffsupersecret
Same error, though. Darn, I thought that would fix it!
Thoughts?
asdffdsa
(jojothehumanmonkey)
September 3, 2020, 10:43pm
6
YEM
September 3, 2020, 10:51pm
7
OK, thank you. If I do this:
rclone copy C:\rclone\test2.txt my-remote:my-bucket\code --ignore-checksum
it does work. However, how do I get around the issue for a mount? I want to be able to mount this bucket and drag/drop files to it.
Will this work?
rclone mount my-remote: z: --ignore-checksum
Will the setting persist?
I'm using nssm.exe to mount all of my buckets when my system boots, so dragging and dropping files is really what I'm after.
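For what it's worth, I'd guess the flag just goes into the service's argument list; something like this with nssm (the service name "rclone-z" is made up, paths match my setup):
nssm install rclone-z C:\rclone\rclone.exe mount my-remote: z: --ignore-checksum --config c:\rclone\rclone.conf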
Thanks!
asdffdsa
(jojothehumanmonkey)
September 3, 2020, 10:54pm
8
YEM:
Will this work?
try it
Amazon S3
YEM
September 3, 2020, 11:10pm
9
OK, the --ignore-checksum flag works for mount as well.
So really we're just masking some bug, eh?
It doesn't seem like the missing KMS key was the issue. To prove that I wasn't crazy when I said it was working before (with no KMS key configured), I have removed the encryption settings from the config and it is working fine... with only the addition of the --ignore-checksum flag. The files are intact in the AWS GUI.
asdffdsa
(jojothehumanmonkey)
September 3, 2020, 11:19pm
10
this is not a bug, as rclone is working as expected based on the documentation.
however, setting --s3-upload-cutoff=0 may be a workaround; perhaps you can test that.
rclone fails to move small files to s3 buckets with default encryption enabled · Issue #1824 · rclone/rclone · GitHub
"We noticed that once we enabled default encryption on s3 buckets, small files failed to move with rclone. Once a file is large enough to transfer via multipart uploads, the problem goes away "
"rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff
. This can be a maximum of 5GB and a minimum of 0 (ie always upload multipart files )."
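As I understand it from that issue, small files go up as single-part uploads, where rclone compares the local MD5 against the ETag S3 returns; with SSE-KMS that ETag is no longer the plaintext MD5, so the check fails. Multipart uploads have ETags rclone never treats as MD5s, so forcing multipart sidesteps the comparison. A test along these lines (same file and remote as in your first post) should confirm it:
rclone copy C:\rclone\test2.txt my-remote:my-bucket\code --s3-upload-cutoff=0 -vv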
YEM
September 4, 2020, 12:20am
11
Thank you, I appreciate the support.
My new config file is like this
type = s3
provider = AWS
env_auth = true
region = us-east-2
upload_cutoff = 0
I can now copy files without using the --ignore-checksum flag. Yay!
I'm using the default KMS key for the bucket, so there is no need to specify in the config.
asdffdsa
(jojothehumanmonkey)
September 4, 2020, 12:26am
12
can you run your original command and post the output?
rclone copy C:\rclone\test2.txt my-remote:my-bucket\code -vv
YEM
September 4, 2020, 1:13am
13
C:\Users\me>rclone copy C:\rclone\test2.txt my-remote:my-bucket\code -vv
2020/09/03 21:09:38 DEBUG : rclone: Version "v1.53.0" starting with parameters ["rclone" "copy" "C:\rclone\test2.txt" "my-remote:my-bucket\code" "-vv"]
2020/09/03 21:09:38 DEBUG : Creating backend with remote "C:\rclone\test2.txt"
2020/09/03 21:09:38 DEBUG : Using config file from "c:\rclone\rclone.conf"
2020/09/03 21:09:38 DEBUG : fs cache: adding new entry for parent of "C:\rclone\test2.txt", "//?/C:/rclone"
2020/09/03 21:09:38 DEBUG : Creating backend with remote "my-remote:my-bucket\code"
2020/09/03 21:09:39 DEBUG : fs cache: renaming cache item "my-remote:my-bucket\code" to be canonical "my-remote:my-bucket/code"
2020/09/03 21:09:39 DEBUG : test2.txt: Need to transfer - File not found at Destination
2020/09/03 21:09:39 DEBUG : test2.txt: multipart upload starting chunk 1 size 58 offset 0/58
2020/09/03 21:09:39 DEBUG : test2.txt: MD5 = f127881627718748c34ec1ce7888d6f9 OK
2020/09/03 21:09:39 INFO : test2.txt: Copied (new)
2020/09/03 21:09:39 INFO :
Transferred: 58 / 58 Bytes, 100%, 97 Bytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 1.1s
2020/09/03 21:09:39 DEBUG : 6 go routines active
ncw
(Nick Craig-Wood)
September 4, 2020, 6:10am
14
This really needs a better fix. If the above worked, would that work for you?
YEM
September 4, 2020, 2:53pm
15
Let me make sure I understand your question. Are you asking if upload_cutoff = 0 is an acceptable solution for me? Yes, I think that's fine. I'm more comfortable with that than disabling checksums!
asdffdsa
(jojothehumanmonkey)
September 4, 2020, 3:04pm
16
can you try @ncw's fix and post the log, thanks:
remove upload_cutoff = 0
add server_side_encryption = aws:kms
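i.e. the remote would end up looking something like this (reusing the config posted earlier in the thread, credentials redacted):
[my-remote]
type = s3
provider = AWS
env_auth = true
region = us-east-2
access_key_id =
secret_access_key =
server_side_encryption = aws:kms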
ncw
(Nick Craig-Wood)
September 4, 2020, 5:13pm
17
I don't think this will work yet; I was just querying whether it was usable or not.
@YEM are there any idiot's guides on how to enable KMS for a bucket? Or maybe an AWS command line? I'll give it a go myself if I can work out how!
YEM
September 4, 2020, 5:32pm
18
I usually do it in the Management Console / GUI.
Click a bucket name
Click the Properties tab
You'll see a card on the top row called "Default Encryption". Click it.
Huge radio button for AWS-KMS
There will always be one default KMS key, even if you have never created your own. It's funny: it's an AWS-managed "Customer Managed Key".
Using the AWS CLI, see this link: put-bucket-encryption — AWS CLI 1.36.31 Command Reference
They give an example of building the JSON, then sending the JSON with the CLI.
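Roughly, it looks like this (bucket name is a placeholder; if you omit KMSMasterKeyID, S3 should fall back to the default aws/s3 key). Contents of encryption.json:
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms"
      }
    }
  ]
}
Then send it with:
aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration file://encryption.json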
asdffdsa
(jojothehumanmonkey)
September 4, 2020, 5:35pm
19
when you ran the commands before:
server-side encryption was enabled?
and
not using KMS?
correct?
YEM
September 4, 2020, 5:46pm
20
My bucket has been configured to use KMS encryption on the AWS side from the beginning, but there was no mention of it in the rclone config. Even now I still have the same setup, with only the upload_cutoff modification.
During our troubleshooting, I did temporarily add server_side_encryption = aws:kms and the key. That didn't work, so I removed it. It is used automatically anyway, as S3 takes the default from the bucket config. It is the same when I use Python boto3 or the CLI: you only need to specify the KMS key if you want to use one other than the bucket's default.
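For example, with the AWS CLI (file and bucket names as in this thread; the key ARN is just a placeholder), a plain copy picks up the bucket's default encryption, and the key only has to be named to override that default:
aws s3 cp test2.txt s3://my-bucket/code/
aws s3 cp test2.txt s3://my-bucket/code/ --sse aws:kms --sse-kms-key-id arn:aws:kms:us-east-2:ACCOUNT:key/KEY-ID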
That didn't work so I removed it. It automatically uses that anyway, as it is taking the default from the bucket config. It is the same when I use python boto3 or CLI, you only need to specify the KMS key if you want to use one other than the bucket's default.