`rclone md5sum` on a large Encrypted GDrive dirtree is generating multiple errors

That command doesn't work though:

    rclone md5sum gcrypt:
    2023/03/03 17:08:24 ERROR : two/one/hosts: hash unsupported: hash type not supported
    2023/03/03 17:08:24 ERROR : one/one/hosts: hash unsupported: hash type not supported
    2023/03/03 17:08:24 ERROR : hosts: hash unsupported: hash type not supported
    2023/03/03 17:08:24 Failed to md5sum with 6 errors: last error was: hash unsupported: hash type not supported

Are you running something else?

It does if you pass the --download option, as I showed in my original post (I elided it from the 'pseudo-command' I posted last in order to keep things simple).

I had no idea what you had "..."-ed out, as I only saw the original command and was trying to compare; I try not to assume things.

So you are downloading the files and recalculating their md5sums.

Got it.

Good luck!

Not me, rclone :slight_smile:

Actually, just plain "calculating" -- no "re" as there was no previous calculation for most of them. For some I do have separate ".md5" files in the same directory as the files and calculated in the host system before the directory was uploaded to the remote. I will eventually check those ".md5" files, but right now my priority is to do a general check in my new (destination) account ASAP before my old (source) EDU account bites the dust for good -- it's already in "read-only" mode.

Thanks! Looks like I'm going to need it -- I've been seeing a general "shitification" of Google in general, and of the Drive service specifically, over the last few years, accelerating as of late, so I just hope I can get my data out of there before that process completes :wink:

Rclone does what you ask it to do, as it's not sentient yet.

Semantics, but on the remote they do have md5sums, and those were calculated, compared, and stored at upload time -- so you are recalculating. But we're splitting hairs at this point, as I see your point of view :slight_smile:

I'm super insanely happy I moved to Dropbox. I'd pay a bit more for their service if the price goes up, as there are no quotas and it just generally works all the time for me. I figured at some point Google would catch up and enforce things, but judging by the posts I've been reading lately, the quality of the service is just so bad.

1 Like

I'm trying to avoid doing other things with this remote in parallel with this rclone md5sum, so as not to 'rock the boat', but I just did an rclone cat for some of these files and they all aborted with the exact same error -- so these are probably permanent errors.

This is looking more and more to be the case.

And all these files were checked when they were first put into Google Drive, about 6 years ago -- therefore the corruption happened while they were stored in Google Drive.

It seems Google Drive has been quietly 'eating' my data... it hasn't destroyed much so far ('only' 19 files out of the ~6.5M md5sum'ed so far, or about 0.0003%), but this is one more incentive to get my data the F out of there ASAP.

I don't know what that (Bad Request, failedPrecondition) means! Did a file change?

I was pretty sure this particular file had not changed since I stored it many years ago, and I'm even more sure now after checking the file's modTime in the underlying remote:


       8743504 2017-07-09 16:02:03.000000000 REDACTED103

That is, Google Drive itself reports the (base) file as unmodified since 2017... so we can affirm that the file has not changed during the execution of the above rclone md5sum that reported this failedPrecondition error.

The good news is that I just tried calculating that file's md5 again:

    9de005985fda7216b4985d2a37f57f90  REDACTED22.JPG

So the error isn't permanent. Perhaps just some piece of crap that was stuck in one of Google's 'tubes' and has since come loose?

Clock skew is very improbable on my end, as:

  1. the machine runs ntpd
  2. this ntpd has been in sync since December 10th; between then and the time the error occurred (Feb 19th) there were no logs of anything going on with ntpd, so it almost certainly remained in sync (yes, I log and save everything that happens on this machine at syslog level DEBUG).

Therefore the only chance of any clock skew would be on Google's end -- which I don't think is very probable either.

The good news is, just like the other single error above, this one also seems to have been transient:

     0fe0e04974dcf14a65e3e67375a1fa84  REDACTED38.pdf

So, yay! :wink: It seems Google has indeed only eaten 19 of my files so far :frowning:
(all of which I also have on two different local disks -- so they can easily be restored)

When this is over, I will come back and report a final tally (and also post a "PSA: Google Drive is silently corrupting files!" topic to warn the unwary).

First of all, I apologize if I sound like I'm splitting hairs -- I'm only trying to set the record straight for those who will eventually come a-googling.

What I meant is, the MD5 calculations are being done by rclone and not by me -- as they would be if I were downloading the files and calculating them myself, e.g. with rclone cat FILE | md5sum -, as I've done many times in the past -- including before rclone md5sum was implemented (yes, I've been using rclone for that long).
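For anyone landing here from a search, here's a minimal local sketch of why that pipe idiom works (the demo file name is made up; against the remote it would be rclone cat gcrypt:PATH/FILE | md5sum -, or just rclone md5sum --download gcrypt:):

```shell
# Piping data into md5sum yields the same digest as hashing the file directly,
# which is what makes the 'rclone cat FILE | md5sum -' idiom work.
printf 'demo payload' > /tmp/md5demo.bin
md5sum /tmp/md5demo.bin          # hash the file in place
cat /tmp/md5demo.bin | md5sum -  # hash the same bytes from stdin
```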

IMHO "Semantics" is very important, as it concerns the meaning itself of the words -- but I agree with you, let's move any further discussion about this to PMs as I think the record is straight enough by now.

You know, you are my personal hero re: alternatives to Google ever since, a few months back, you pointed me towards Cloudflare email routing -- which has been working just about perfectly for me since then. So your recommending Dropbox carries a ton of weight with me. OTOH, I've been reading all sorts of things about Dropbox -- and some seem to indicate it's not really 'unlimited' anymore. But this is getting too far off-topic -- I will create a new topic for that and tag you there.


There are still some of us who don't have any issues :wink:

1 Like

I'm glad to hear it, and I hope it continues to work well for you. Just be aware that the ox in the slaughterhouse line, when another ox two or three places ahead of it gets felled by the hammer, could be thinking just about the same thing, i.e. "nothing has happened to me so far, so no reason to worry!" :slight_smile:

Hmm, that is very bad.

How big are these files (roughly)?

The crypt format is broken up into 64k chunks and I have a version of rclone which will write zeroes for the chunks with errors, but otherwise carry on, so we could investigate how much of the file is corrupted if you want.

If it is something like a video file then a 64k chunk lost will cause a minor visual artifact. If it is something less resilient then a 64k chunk missing might make the whole file useless.
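To put numbers on those 64k chunks: per the crypt docs, an encrypted file is a 32-byte header (8-byte magic plus 24-byte nonce) followed by 64 KiB plaintext blocks, each carrying a 16-byte Poly1305 authenticator -- so a single corrupted block costs at most 64 KiB of plaintext. A quick sketch of the size arithmetic (the 1-byte case matches the worked example in the docs):

```shell
# Encrypted size for a given plaintext size, per the crypt format:
# 32-byte header + (65536 + 16) per full block + (remainder + 16) for a partial block.
encrypted_size() {
    plain=$1
    full=$((plain / 65536))
    rem=$((plain % 65536))
    echo $((32 + full * 65552 + (rem > 0 ? rem + 16 : 0)))
}
encrypted_size 1        # 49, as in the crypt docs' worked example
encrypted_size 65536    # 65584: header + one full block + one tag
```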

Good news on the retries.

I look forward to the final score!

Here's a sorted list of the sizes for these 'bad password' files so far (21 files now, up from 19 as 2 more have cropped up since my last message): http://durval.com/xfer-only/20230306_rclone_bad_password_file_sizes.txt

So they are sized from ~9.5MB all the way to ~10GB, with about half of them over 1GB.

Yeah, it would be nice to have a look at these files and try to understand what happened. For all of them so far I have local copies, so I can cmp -b the F'ed version against the good one and get an idea of the extent/location of the damage.
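A quick sketch of that comparison on made-up local files (cmp -l lists every differing byte, which makes the damaged region easy to map; -b additionally prints the bytes as characters):

```shell
# Two hypothetical copies of a file, differing in bytes 5 through 8:
printf 'AAAABBBBCCCC' > /tmp/good.bin
printf 'AAAAXXXXCCCC' > /tmp/bad.bin
# Each output line is: byte_offset octal_value_in_good octal_value_in_bad
cmp -l /tmp/good.bin /tmp/bad.bin
```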

If it is something like a video file then a 64k chunk lost will cause a minor visual artifact. If it is something less resilient then a 64k chunk missing might make the whole file useless.

Here's a | sort | uniq -c list of their extensions:

  1 avi
  2 gz
  1 JPG
  3 mkv
  5 MOV
  1 mp4
  1 pdf
  6 slob
  1 vmdk

So, 10 of them are video files which should be mostly viewable (as long as the error areas don't include any critical headers or markers within the files), and the others are less resilient formats which would be completely unusable if I didn't have local copies for them.

I will be sure to keep this topic posted!

And BTW, thanks again for all your great help, and for making rclone available in the first place.

I just ^C'ed stressapptest after a little over 67h running, and the result was:

    ^CLog: User exiting early (1409823927 seconds remaining)
    Stats: Found 0 hardware incidents
    Stats: Completed: 5766200320.00M in 241479.67s 23878.62MB/s, with 0 hardware incidents, 0 errors
    Stats: Memory Copy: 5766200320.00M at 23878.65MB/s
    Stats: File Copy: 0.00M at 0.00MB/s
    Stats: Net Copy: 0.00M at 0.00MB/s
    Stats: Data Check: 0.00M at 0.00MB/s
    Stats: Invert Data: 0.00M at 0.00MB/s
    Stats: Disk: 0.00M at 0.00MB/s

    Status: PASS - please verify no corrected errors

So, I think it's reasonably safe to assume the current machine isn't experiencing any sort of memory corruption.

This has a new flag --crypt-pass-bad-blocks which will output blocks which couldn't be authenticated as 64k of 0s.

It's possible, if bytes have been added to or removed from the file, that it will output an endless stream of errors, but hopefully it will just be a small number of blocks.

v1.62.0-beta.6770.65ab5b70c.fix-crypt-badblocks on branch fix-crypt-badblocks (uploaded in 15-30 mins)
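Once a file has been pulled through a build with that flag, the unreadable regions come out as 64 KiB runs of zero bytes, so they can be located after the fact. A rough sketch (the demo file is synthetic, with its middle block zeroed; note that a genuine 64 KiB run of zeros in the real data would be a false positive):

```shell
# Build a 3-block demo file standing in for a decrypted download, then scan it
# for blocks that are entirely zeros (i.e. blocks the decryption couldn't authenticate).
head -c 65536 /dev/zero > /tmp/zeroblock
{ head -c 65536 /dev/urandom; cat /tmp/zeroblock; head -c 65536 /dev/urandom; } > /tmp/decrypted.bin

blocks=$((($(wc -c < /tmp/decrypted.bin) + 65535) / 65536))
i=0
while [ "$i" -lt "$blocks" ]; do
    # Compare block i against 64 KiB of zeros; cmp -s just sets the exit status.
    if dd if=/tmp/decrypted.bin bs=65536 skip="$i" count=1 2>/dev/null | cmp -s - /tmp/zeroblock; then
        echo "block $i is zeroed"
    fi
    i=$((i + 1))
done
```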

Looks good to me :slight_smile:

1 Like

Thanks, I just downloaded versions for both machine architectures I use, and have already tested the arm64 one here -- it seems to be working.

My 1st run, on the smallest of those files, doesn't look very promising: I got "crypt: ignoring: failed to authenticate decrypted block - bad password?" 49 times in a row -- which IIUC means 49 * 64K = ~3.1MB of the file is lost... and that on a ~9.5GB file :frowning: doesn't look like the "small number of blocks" hypothesis is going to hold :frowning:
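Sanity-checking that figure with quick shell arithmetic (49 unauthenticated blocks at 64 KiB each):

```shell
# Total plaintext lost if 49 consecutive 64 KiB blocks fail to authenticate:
echo $((49 * 64 * 1024))                                   # 3211264 bytes
awk 'BEGIN { printf "%.2f MiB\n", 49 * 64 * 1024 / 1048576 }'  # 3.06 MiB
```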

Will do more tests when able (treading carefully because I'm still waiting for that month-long rclone md5sum to finish running on this same remote) and will post the results here.

Thanks Again!

Ouch, 3 MB of corruption... That could be as little as 1 corrupted bit in each 64k block, but it is still a fair burst of errors.

At least it didn't have corruptions all the way to the end of the file.

:frowning: My thoughts exactly. This doesn't bode well for Google Drive's data integrity... :frowning:

As I have local copies of all the original files (at least so far, fingers crossed), I can do a comparison after all that's over with; my plan is, for each of those files:


  2. use rclone (...) copyto to copy my known-good local file to PATH/FILE.EXT

  3. use rclone cryptdecode --reverse to find the encrypted name for both files;

  4. download both files from the base (unencrypted) remote, and then compare them bit-by-bit to see exactly what the corruption is.

@ncw, what do you think? Would that work? The above depends on the same file being uploaded again to the same encrypted remote and generating the same encrypted content on the base (unencrypted) remote. Is that the case?

This isn't the case normally. Rclone will generate a random nonce (as it is known in cryptography) for each encryption, so each time a file is encrypted it is different. In fact it weakens the crypto if you re-use the nonce.

It is, however, possible to do exactly what you want; it would require a modified rclone, though, to which you could say: read the nonce from this file before encrypting this other file with it. That could be done as a backend command quite easily.

1 Like

Thanks for the explanation. Of course :man_facepalming:, the right thing to do is to use a random nonce.

That could be a backend command quite easily though

Or perhaps some option like --crypt-use-nonce-from CRYPT_REMOTE:PATH/FILE.

But I think it's a very specific case, and I think I've already abused my privileges as an rclone user too much to ask that of any developer, much less of you, who has helped me so greatly in all of this.

But if the itch to dig deeper into this is bad enough, and I have the leisure for it, I might try implementing it and submitting a PR. Not sure this will be the case, though -- I have my plate quite full at the moment, and my knowledge of Golang is still extremely superficial.


I am supposed to be getting the v1.62 release ready so trying not to get sidetracked with interesting crypto and data corruption problems!

It would certainly be easy to bodge it into rclone

You'd alter this bit of code

To enter a fixed nonce. You can read the nonce from the header of the file as detailed here: Crypt - it is bytes 8-31 inclusive in the file.
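For reference, pulling those bytes out doesn't even need a modified rclone; a sketch against a synthetic header (the magic and layout are from the crypt docs, but the file contents here are fake -- 24 'N' bytes stand in for the random nonce):

```shell
# A crypt file starts with the 8-byte magic "RCLONE\0\0", then the 24-byte
# nonce (bytes 8-31 inclusive), then the encrypted blocks.
{ printf 'RCLONE\0\0'; printf 'NNNNNNNNNNNNNNNNNNNNNNNN'; printf 'ciphertext...'; } > /tmp/fake_crypt.bin
# Extract bytes 8-31 -- in a modified rclone these would seed the re-encryption:
dd if=/tmp/fake_crypt.bin bs=1 skip=8 count=24 2>/dev/null
echo
```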

1 Like