Inconsistent cryptcheck EOF error (Google Drive)

What is the problem you are having with rclone?

After a big (~10 TiB) copy of files from my college Google Drive crypt remote to my personal Google Workspace shared-drive crypt remote, I'm running cryptcheck between the two remotes to confirm the transfer went through properly.

There was one folder on which cryptcheck returned 2 errors (although it appears this was only on 1 file, with 2 errors for that file [1 from each remote, presumably?]).
The error was "error computing hash: failed to hash data: unexpected EOF". There was also a message saying "retrying may help" (please see the log excerpt below).

Following the advice, I retried and ran cryptcheck again on the very same subdirectory. This time, it completed without issue.

Questions
1) Should I assume everything is alright?
2) Was this just some network/rate limiting-type issue that caused a hiccup affecting the cryptcheck? Or is it likely something more serious?
3) Is there any way to find out what exactly caused the error?

Run the command 'rclone version' and share the full output of the command.

(This version of rclone is very recent, as the latest version of rclone at the time of posting is v1.58.1)
```
rclone v1.58.0
- os/version: slackware 14.2+ (64 bit)
- os/kernel: 5.10.28-Unraid (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.8
- go/linking: static
- go/tags: none
```

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

```
rclone cryptcheck "source-crypt-remote:directory/subdirectory" "dest-crypt-remote:directory/subdirectory" --one-way --bwlimit 80M -vv
```

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Error on first attempt:

```
2022/05/02 19:00:46 NOTICE: Encrypted drive 'dest-crypt-remote:Directory/Subdirectory': 2 differences found
2022/05/02 19:00:46 NOTICE: Encrypted drive 'dest-crypt-remote:Directory/Subdirectory': 2 errors while checking
2022/05/02 19:00:46 NOTICE: Encrypted drive 'dest-crypt-remote:Directory/Subdirectory': 3 matching files
2022/05/02 19:00:46 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Errors:                 2 (retrying may help)
Checks:                 4 / 4, 100%
Elapsed time:     36m58.7s

2022/05/02 19:00:46 DEBUG : 7 go routines active
2022/05/02 19:00:46 Failed to cryptcheck with 2 errors: last error was: error computing hash: failed to hash data: unexpected EOF
```

Result on second attempt:

```
2022/05/02 19:45:30 NOTICE: Encrypted drive 'dest-crypt-remote:Directory/Subdirectory': 0 differences found
2022/05/02 19:45:30 NOTICE: Encrypted drive 'dest-crypt-remote:Directory/Subdirectory': 4 matching files
```

If rclone check completes with no errors, your data is intact.

unexpected EOF is most likely an SSL stream breaking, so it's a network hiccup of some kind.

Rclone was reading from the remote file at the time. You might get more info with -vv. I think maybe crypt has obscured the underlying error, but I'm not sure.

> If rclone check completes with no errors, your data is intact.

Does this mean that as long as a check/cryptcheck run on a file returned no errors at least once, that file's data is intact?

For example, I just ran cryptcheck twice on a directory with 8 files. The first time, all but File 3 returned no errors. The second time, all but File 5 returned no errors (the errors were the same unexpected EOF error as before).

So, since all files have returned no errors at least once, I can rest assured my data is intact?

> unexpected EOF is most likely an SSL stream breaking, so it's a network hiccup of some kind.

Is there any way to make this less likely to happen? Or is the only option, when getting this error, to rerun and hope it doesn't happen again on the same file?

Cryptcheck requires reading and re-encrypting the source to see if the hash matches the destination.

So this is reading the entire source.

I suspect cryptcheck could do with some retry protection.

Try `rclone check --download` instead, as that does have retry protection.
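In the meantime, rerunning can also be scripted. The sketch below is a generic POSIX-shell retry helper, not an rclone feature; the rclone invocation in the comment reuses the remote paths from the command above, and the final `true` is just a stand-in so the snippet is self-contained:

```shell
# Hypothetical retry helper: run a command up to N times, stopping at
# the first attempt that exits 0; return 1 if all attempts fail.
retry() {
  max=$1; shift
  attempt=1
  while ! "$@"; do
    [ "$attempt" -ge "$max" ] && return 1
    attempt=$((attempt + 1))
  done
}

# Replace `true` with something like:
#   rclone check "source-crypt-remote:directory/subdirectory" \
#                "dest-crypt-remote:directory/subdirectory" --download --one-way -vv
retry 3 true && echo "check passed"
```

Note this only papers over transient failures between full runs; per-file retry inside a single run is what `check --download` already gives you.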

So, I've now gotten a new error: connection reset by peer.
Again, this only happened on some files and not others. And again, when I reran the cryptcheck, I got an 'OK' result on the affected files.

Is this also some sort of rate-limiting error?

And as for my earlier question: at the risk of sounding dumb, could you give me a simple YES or NO? If I (re)run cryptcheck and get an 'OK' result (i.e. no errors), can I be assured that the file(s) in question is/are intact (regardless of whether the file had earlier returned one or both of the errors I've mentioned so far)?

Thanks for your help so far!

That's another networking error.

Yes :slightly_smiling_face:

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.