I am syncing my local NAS data to OneDrive with rclone, which is a really decent solution.
But I am worried about a ransomware attack on my Windows NAS. Once the sync job starts, I would end up destroying my backup by overwriting it with encrypted data as well. So it would be useless as protection against ransomware.
rclone already checks (I guess with the help of hashes) whether files have changed in order to decide what to sync and what not to sync. Furthermore, you can already simulate what the result would be before you execute the command. Unfortunately, those results cannot easily be used for scripting, because they are just text on the console.
My feature request would be to run a check with a dry run first. If the sync would copy more than a configurable percentage of the files, something is probably seriously wrong. In this case I would love to have a dedicated exit value, like "10 - amount of changed files exceeds threshold".
If more than 2% of the files would be copied, exit code 10 is raised. That way you could easily check inside batch files or shell scripts whether something is about to go terribly wrong, and skip the real sync command in that case.
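A wrapper script can approximate this today. Here is a minimal POSIX-shell sketch of the proposed check; the threshold logic, the exit code 10, and the commented-out dry-run parsing are my assumptions, not existing rclone behaviour:

```shell
#!/bin/sh
# Sketch of the requested pre-flight check (this is NOT an existing rclone
# feature; the threshold, exit code, and example wiring are assumptions).

THRESHOLD_PCT=2   # abort if more than 2% of files would change

# check_threshold CHANGED TOTAL PCT
# returns 0 if CHANGED is within PCT percent of TOTAL, 10 otherwise
check_threshold() {
    changed=$1; total=$2; pct=$3
    if [ "$total" -eq 0 ]; then
        return 0
    fi
    if [ $((changed * 100)) -gt $((total * pct)) ]; then
        return 10
    fi
    return 0
}

# Example wiring (commented out; the grep pattern for dry-run output is an
# assumption and may differ between rclone versions):
# changed=$(rclone sync /data remote:backup --dry-run 2>&1 | grep -c 'Skipped copy')
# total=$(find /data -type f | wc -l)
# check_threshold "$changed" "$total" "$THRESHOLD_PCT" || exit 10
```

The real sync would then only run if the wrapper exits with 0.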
-> It would also be nice if rclone could send an email in such cases (just dreaming).
Possible rclone solutions are --max-transfer, or you could build historical backups with --backup-dir, which is probably the best solution. I do that for backing up media from my phone, so when Google Photos deletes everything again I'll have a backup!
thanks for your suggestions, I appreciate that very much. I found the --backup-dir option useful, but I am still unsure about one issue. What happens in the case of a ransomware attack, when rclone would overwrite all the data on my cloud drive and move all the data I want to protect into the backup-dir folder? I would run out of cloud space quite quickly for sure. Is the sync process stopped if there is insufficient space in the backup-dir folder?
Wonderful. I think I should stick with the default behaviour, --delete-after, so the error would occur for sure before rclone starts deleting the original data. The old data is moved to another folder with the help of the --backup-dir flag. Great.
Furthermore, I added a check to the batch file using the find command (Windows), which searches for a keyword in a text file called "secret.txt" that I placed in the root folder of all my data folders. If find cannot read the expected content from this file, the sync process is not started and I am informed by mail with the help of the blat tool (a great tool for command-line mail sending).
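For readers on other platforms, the same canary-file guard might look like this in POSIX shell; the file path and keyword below are placeholders, and the original setup uses Windows find plus blat instead:

```shell
#!/bin/sh
# Sketch of the "canary file" guard described above, translated to POSIX
# shell (the original uses the Windows find command and the blat mailer).
# The path and keyword are placeholders, not values from the post.
CANARY=/data/secret.txt
KEYWORD="all-clear"

# canary_ok FILE - succeeds only if FILE still contains the keyword
canary_ok() {
    grep -q "$KEYWORD" "$1" 2>/dev/null
}

if canary_ok "$CANARY"; then
    echo "canary intact, starting sync"
    # rclone sync /data remote:backup ...
else
    echo "canary unreadable or encrypted, NOT syncing" >&2
    # send an alert mail here (the post uses blat on Windows)
fi
```

The idea is that ransomware encrypting the data tree would also mangle secret.txt, so a failed keyword match means the sync must not run.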
This all looks very promising. Let's see how it works after a couple of weeks.
the solution is to use forever-forward incremental backups with a date/time stamp for the --backup-dir folder.
here is what my script looks like:

rclone sync c:\data\u\keepass gdrive-jojo:en07\keepass\rclone\backup\ --backup-dir=gdrive-jojo:en07\keepass\rclone\archive\20200901.175607\
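If you want the timestamp generated at run time rather than hardcoded, a POSIX-shell sketch could look like this (the original post uses a Windows batch file, and the local path here is an assumed POSIX equivalent of the one above):

```shell
#!/bin/sh
# Sketch: build the dated --backup-dir automatically instead of hardcoding
# the timestamp. Run under POSIX shell; paths are taken from the post, with
# the local path translated from Windows as an assumption.
stamp=$(date +%Y%m%d.%H%M%S)

# Echoed rather than executed, so the sketch is safe to run as-is:
echo rclone sync /data/u/keepass "gdrive-jojo:en07/keepass/rclone/backup/" \
    --backup-dir="gdrive-jojo:en07/keepass/rclone/archive/$stamp/"
```

Each run then lands its displaced files in a fresh archive folder, which is what makes the scheme forever-forward incremental.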
also, rclone would have to run twice for the effect you mentioned, i.e. the ransomware overwriting both the sync folder and the backup-dir folder, to occur.
this is because of the way rclone moves files to the backup-dir:

1. rclone decides it needs to copy the local file 01.kdbx to the sync folder.
2. rclone notices a 01.kdbx is already in that sync folder.
3. rclone does a server-side move of 01.kdbx from the sync folder to the backup-dir folder.
4. rclone copies the local file 01.kdbx to the sync folder.
here is a log snippet:
2020/08/27 11:11:06 INFO : database/01.kdbx: Moved (server side)
2020/08/27 11:11:06 DEBUG : Google drive root 'en07/keepass/rclone/backup': Waiting for transfers to finish
2020/08/27 11:11:07 DEBUG : database/01.kdbx: MD5 = 04bf9b6b05510b2a3bd875dd21a244e8 OK
2020/08/27 11:11:07 INFO : database/01.kdbx: Copied (new)