So, cryptcheck is insanely slow on Google Drive, not sure exactly why. But I realized: if I have the hard drive space available, instead of rclone copy localfiles: cryptedremote:
I could rclone copy localfiles: localcrypt:
and then rclone move the encrypted parent folder backing localcrypt: to gdremote:crypt.
This would save massively on API requests to Google Drive.
The files would end up being encrypted once and moved to Google Drive once, and I could compare the more readily available hashes of the files in their encrypted form. I just figured I'd write back about this idea, since it took me this long to think of it.
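The idea above might look something like this as commands. A sketch only: "localcrypt:" is assumed to be a crypt remote wrapping a local folder, "gdremote:" the bare (non-crypt) Google Drive remote, and the backing-folder path is a placeholder for wherever localcrypt: actually stores its encrypted files.

```sh
# Step 1: encrypt locally -- pure local I/O and CPU, no Drive API calls.
rclone copy localfiles: localcrypt:

# Step 2: move the already-encrypted files to Drive as plain data,
# comparing locally computed MD5s against what Drive reports.
rclone move /path/to/localcrypt-backing-folder gdremote:crypt --checksum
```

Since step 2 moves the encrypted files through the plain gdremote: remote, the hashes being compared are of the encrypted data, which both sides can supply cheaply.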
I'm not sure though why cryptcheck is so slow; it actually seems slower than a command like
rclone copy localdata: cryptedremote:
Maybe as much as ten times slower, which doesn't really make sense to me: comparing the hashes should be about as fast as creating the encrypted copies in the first place, shouldn't it?
Even if my estimate of ten times slower is off, even two or three times slower seems odd to me. Maybe I don't fully understand how cryptcheck works. Doesn't it just encrypt the local file in memory and then compare that hash to the remote hash? Is Google slow to return its hashes? (No, that can't be it, because an rclone --checksum move to Google Drive is reasonably quick)…
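If I understand the docs right, cryptcheck is roughly what I guessed: it reads the nonce from each remote file's header, re-encrypts the local plaintext with that nonce, and compares the hash of the re-encrypted bytes against the hash the remote reports. Here's a toy Python sketch of that shape; the XOR "cipher" is just a stand-in for illustration, not rclone's real NaCl secretbox encryption, and all the names are mine:

```python
import hashlib
import os

def toy_encrypt(data: bytes, nonce: bytes) -> bytes:
    # Stand-in for real encryption: deterministic given (data, nonce),
    # which is the only property the check below relies on.
    return bytes(b ^ nonce[i % len(nonce)] for i, b in enumerate(data))

def cryptcheck_one(local_data: bytes, remote_nonce: bytes, remote_hash: str) -> bool:
    # Re-encrypt the local plaintext with the nonce taken from the remote
    # object's header, then compare hashes of the *encrypted* bytes.
    reencrypted = toy_encrypt(local_data, remote_nonce)
    return hashlib.md5(reencrypted).hexdigest() == remote_hash

# Simulate the remote side: encrypt once, record the hash "Drive" reports.
nonce = os.urandom(24)
plaintext = b"hello cryptcheck"
remote_hash = hashlib.md5(toy_encrypt(plaintext, nonce)).hexdigest()

print(cryptcheck_one(plaintext, nonce, remote_hash))    # matching file
print(cryptcheck_one(b"corrupted", nonce, remote_hash)) # mismatch
```

If that's accurate, it would explain some of the cost: cryptcheck has to do a per-file read from Drive (to fetch the nonce) plus a full re-encryption of every local file, whereas a --checksum move only compares locally computed MD5s against hashes Drive already has.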
edit1: This is probably normal, but I have noticed that cryptcheck gives -vv feedback like this:
I assume it's supposed to list each filename twice, because it's confirming that both the local and the remote file have an OK hash? Although if it's checking each file twice over, that would explain why it's twice as slow (or slower) than creating the encrypted remote was in the first place.
edit3: never mind, it really wasn't that slow; it was just slow to get started. Maybe it did the files in a different order?