Backup-dir - files changed but not detected - so no hashing done?

Hello,

My question is pretty simple, but needs a short explanation:
I use rclone with the --backup-dir argument.
With it I do a daily sync (plus a backup whenever something gets deleted or changed, for emergencies).
Since I also want protection against, for example, ransomware, bitrot, and silent corruption, I did a few tests:
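For reference, the kind of command I run daily looks roughly like this (the remote name and paths here are placeholders, not my real config):

```shell
# Daily sync from local disk to the encrypted remote; files that would be
# deleted or overwritten on the destination are moved into the archive path
# instead of being lost. "gcrypt:" is a placeholder encrypted remote.
rclone sync D:\Data gcrypt:current --backup-dir gcrypt:archive/2017-08-01
```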

I changed one character/bit in a file but preserved the date, so the DATE and FILESIZE were still the same.
The file content changed, so of course the MD5 hash changed as well.
After that I ran a sync with --backup-dir again, but rclone did NOT detect my test "corruption"/"change".
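To make the test reproducible: I did this on Windows, but the equivalent with standard Unix tools is a sketch like the following (file name and timestamp are arbitrary):

```shell
# Flip one byte in a file while keeping its size and mtime unchanged,
# so only the content (and therefore the hash) differs.
printf 'hello world' > testfile            # sample file
touch -t 202001010000 testfile             # give it a known timestamp
before=$(md5sum testfile | cut -d' ' -f1)
printf 'X' | dd of=testfile bs=1 conv=notrunc 2>/dev/null  # overwrite first byte
touch -t 202001010000 testfile             # restore the old mtime
after=$(md5sum testfile | cut -d' ' -f1)
# size and mtime are identical, but the MD5 hash differs:
[ "$before" != "$after" ] && echo "hash changed, metadata identical"
```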

So I must assume that NO hash checking/comparing is done when working with sync and --backup-dir, right?

Most important question: is there a way to also do hash checking?
And which part is responsible for that? Is it the sync itself that does not check hashes (when DATE and FILESIZE are the same), or does it have something to do with the --backup-dir argument?

It would be very cool if someone could explain that to me and confirm my "findings".

What did I use? Windows Server 2012 R2 x64, rclone v1.37, sync with --backup-dir from LOCAL to GDrive (encrypted)

And by the way: an ABSOLUTELY GREAT piece of software! Thanks.

That is the critical clue! Unfortunately crypted remotes don't support checksums, which I think explains your results.

If you were doing the backup not via the crypt remote then the MD5SUM would be checked. If you look at https://rclone.org/overview/ you can see which remotes support which hashes.
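For a remote that does support hashes, there are two ways to bring checksums into play; a sketch with a placeholder non-crypt remote:

```shell
# Verify sizes and MD5 hashes of local files against a non-crypt remote.
rclone check /path/to/local gdrive:backup

# Or make sync itself compare checksums instead of modtime+size
# (only effective where the remote actually supports hashes):
rclone sync /path/to/local gdrive:backup --checksum
```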

However, all is not lost, as you can run a special integrity check called rclone cryptcheck, which checks local files against the remote checksums. It has to read the start of each remote file, then encrypt the local file and hash it, so it is quite intensive, but it can give you a proper double check.
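The invocation itself is simple (remote name and path are placeholders):

```shell
# Verify local files against an encrypted remote: rclone reads the nonce from
# the start of each remote file, encrypts the local file the same way, and
# compares the resulting checksums.
rclone cryptcheck /path/to/local gcrypt:backup
```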

Ah okay, I understand - so it really is the case that sync does not compare checksums by design.

But do you agree that this is a real problem in this scenario?
I mean, yes, I could run a separate cryptcheck - but then my only benefit would be knowing that there is/was a corruption/change in some files; I can't detect those changes while syncing and have them taken care of...
So it is not doable to do a sync (to a crypted remote) that also takes care of hashes.

Of course I absolutely agree with you that this makes no sense as a default, since it needs a lot of time.
But it should be possible for users who really want to be sure that the whole system takes care of file integrity, all the time.
An optional "force cryptcheck while syncing" switch would be very useful for sync with a crypted remote...
Then really ALL changes could be detected, handled correctly, and also backed up by --backup-dir - so I get peace of mind that I have every version of a file, which is the reason some people use --backup-dir.
What do you think?

Not a bad idea. Please make a new issue on GitHub with that idea in it and I'll have a think about it.

Another alternative is for crypt to save its own hashes, which has already been proposed in various issues.