After a big (roughly 10 TiB) copy of files from my college Google Drive crypt remote to my personal Google Workspace shared drive crypt remote, I'm running cryptcheck between the two remotes to confirm the transfer went through properly.
One folder I ran cryptcheck on returned 2 errors, although it appears this was only on 1 file, with 2 errors for that file (1 from each remote, presumably?).
The error was "error computing hash: failed to hash data: unexpected EOF". There was also a message saying "retrying may help" (please see log excerpt below).
Following the advice, I retried and ran cryptcheck again on the very same subdirectory. This time, it completed without issue.
Questions:
1) Should I assume everything is alright?
2) Was this just some network/rate-limiting-type issue that caused a hiccup affecting the cryptcheck? Or is it likely something more serious?
3) Is there any way to find out what exactly caused the error?
Run the command 'rclone version' and share the full output of the command.
(This version of rclone is very recent, as the latest version of rclone at the time of posting is v1.58.1)
```
rclone v1.58.0
os/version: slackware 14.2+ (64 bit)
os/kernel: 5.10.28-Unraid (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.17.8
go/linking: static
go/tags: none
```
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
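Something along these lines (remote names here are placeholders for my actual crypt remotes):

```sh
# Hypothetical remote names: collegecrypt = my college Drive crypt remote,
# personalcrypt = my personal Workspace shared-drive crypt remote.
# cryptcheck decrypts data from the first remote, re-encrypts it using each
# file's nonce from the second (crypt) remote, and compares checksums.
rclone cryptcheck collegecrypt:some/folder personalcrypt:some/folder
```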
If rclone check completes with no errors, your data is intact.
"unexpected EOF" is most likely an SSL stream breaking, so it's a network hiccup of some kind.
Rclone was reading from the remote file at the time. You might get more info with -vv. I think maybe crypt has obscured the underlying error, but I'm not sure.
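For example, something like this (remote names are placeholders; `--log-file` just saves the debug output for later inspection):

```sh
# -vv turns on debug logging; the debug output may show the underlying
# network error that the summary otherwise hides.
rclone cryptcheck collegecrypt:some/folder personalcrypt:some/folder -vv --log-file cryptcheck.log
```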
If rclone check completes with no errors, your data is intact.
Does this mean that, as long as a check/cryptcheck run on a file has returned no errors at least once, that file's data is intact?
For example, I just ran cryptcheck twice on a directory with 8 files. The first time, every file except File 3 passed; the second time, every file except File 5 passed (the failures were the same unexpected EOF error as before).
So, since all files have returned no errors at least once, I can rest assured my data is intact?
unexpected EOF is most likely an SSL stream breaking, so its a network hiccup of some kind.
Is there any way to make this less likely to happen? Or is the only option, when getting this error, to rerun and hope it doesn't happen again on the same file? (I'm guessing pacing/retry flags might be relevant here; see the sketch below, though I'm not sure which apply.)
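```sh
# --tpslimit caps HTTP transactions per second, which can ease rate limiting;
# --low-level-retries raises the number of retries for individual network
# operations (default 10) before an error is surfaced. Values are arbitrary.
rclone cryptcheck collegecrypt:some/folder personalcrypt:some/folder --tpslimit 8 --low-level-retries 20
```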
So, I've now gotten a new error: connection reset by peer.
Again, this only happened on some files and not others. And again, when I reran the cryptcheck, I got an 'OK' result on the affected files.
Is this also some sort of rate-limiting error?
And as for my earlier question, at the risk of sounding dumb, could you give me a simple YES or NO: if I (re)run cryptcheck and get an 'OK' result (i.e. no errors), can I be assured that the file(s) in question is/are intact, regardless of whether the file had earlier returned one or both of the errors I've mentioned so far?