Potential rclone bug? Copying files larger than ~1GB to a crypt remote results in the error "remote error: tls: bad record MAC"

This started happening a few weeks ago: copies of large files (larger than ~1GB) often fail with the error in the title. It happens somewhat randomly during the transfer, usually around 50% of the way through the copy. I'm currently trying an older version of rclone (1.39, go1.9.2) to see whether it makes a difference. I'm on Windows x64 and the remote is a Google Drive crypt remote. I've searched the forums but couldn't find any concrete solution or cause for this problem.

command:
rclone copy E:\(local file path) gdrivecrypt:/ -v --transfers=2 --min-size 2G

Can you share copy logs with -vv? I move large files every night, from shows ranging 2-10GB to large movies at 30-50GB, to an encrypted remote.

Here’s the error with -vv; I left the file name out since it’s irrelevant.

“ERROR : (file name): Failed to copy: Post https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2CcreatedTime%2CmimeType&uploadType=resumable&upload_id=AEnB2UrSioLYx-BUjNsbFMqyYPkgfhjX6aMNbJsmhvIFi8O1jvdFynpc_xcDIm-2u5dQ9tR6Xt3xemepGMp7V_ka9fTCQQK_0g: remote error: tls: bad record MAC”

The lines above and below it are just normal debug lines with “sending chunk… length…” information.
This happened around 18% into the copy of a 2.1GB file.

I’d add any details you have to this existing bug report, as it looks like yours:

What do you mean? Should I provide more information? I looked through the thread but didn’t find any solution.

We haven’t found a solution for this yet. Ideally I’d like to be able to reproduce the problem with a simple go program, then I can report it to the go team. It might be a hardware problem, though, causing corrupted Ethernet packets.

If you retry the transfer does it go through?

Yes. After it fails, it skips ahead to copy another file and then retries the failed files; the retried ones don’t seem to fail. I have a mix of files larger and smaller than ~1GB, but overall there is about a 10% error rate, as in rclone will report 10 errors per 100 transfers.

So the problem happens about every 10GB of transfer - something like that?

I suspect this is either a kernel problem or a hardware problem…

Is it only with uploads, or are downloads affected too? Can you reproduce it with any other program (e.g. curl downloading a big file)?

I can try downloading ~50GB with rclone. I’m not sure how to test that with other programs, though; if the problem is something in the encrypt/decrypt step, there wouldn’t be any point in downloading the encrypted files with another tool, right? Also, I noticed that smaller files (~200MB) sometimes fail too; I probably just didn’t notice before since larger files failing is easier to spot.

That specific error is corruption in the network transfer detected by the TLS/https layer, so it happens before any rclone encryption or decryption. Downloading files over https with curl should therefore reproduce it if it is a hardware problem.