What happens if drive fails before/during sync?

As I’m pondering all the what-ifs, I realize one scenario I hadn’t considered: what if the drive fails before a sync takes place, or even during one?

Or, if for some reason the drive isn’t mounted and a sync begins from the mount point (now an empty directory) to cloud storage…boom, gone.

I suppose it could be said that’s what snapshots are for, or lifecycle settings on the cloud provider. I have seen a post here postulating a check of a JPG to ensure it wasn’t encrypted by ransomware, and I think that’s along these same lines.

Has anyone come up with any good solutions for these on Linux?

  1. Use `rclone copy` automatically; run `rclone sync` manually every so often to prune deleted files.

  2. In your script, test for the existence of a certain file; if it is not there, abort.
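Suggestion 2 can be sketched as a small wrapper. Everything here is hypothetical and for illustration only: the canary file name `.rclone-canary`, the mount point, and the remote name are all made up, so adjust them to your setup.

```shell
# Abort unless a canary file exists at the mount point, i.e. the drive is
# really mounted and not just an empty directory.
ensure_mounted() {
    if [ ! -f "$1/.rclone-canary" ]; then
        echo "canary missing under $1 -- drive not mounted, skipping sync" >&2
        return 1
    fi
}

# Usage sketch (remote name and paths are hypothetical):
# ensure_mounted /mnt/backup && \
#     rclone sync /mnt/backup remote:backup --exclude ".rclone-canary"
```

The `--exclude` keeps the canary itself out of cloud storage, so its presence on the remote can never mask an unmounted drive locally.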

If used for backups, you should use the great --backup-dir: https://github.com/ncw/rclone/issues/98 ! Preferably with a new backup dir generated from the current time on each run.

I check for the existence of a file and exclude that file from the sync, which prevents this from happening before a transfer. If the drive fails mid-transfer, I’m not sure what would happen, but that’s a risk. On Google I turn on --drive-use-trash to help just in case.

Please check the --backup-dir options.
For Windows, use --backup-dir=remote:/somewhere/%datetime%, with datetime set from
for /f %%i in ('c:\cygwin64\bin\date.exe +"%%Y%%m%%d%%H%%M%%S"') do set datetime=%%i

For Linux, of course, something like --backup-dir=remote:/somewhere/$(date "+%Y-%m-%d-%H-%M-%S")

Used like that, rclone becomes a fully fledged incremental backup program. It protects against partial or total removal, corruption, and ransomware.
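The per-run timestamp idea above can be sketched like this. The remote name and paths are hypothetical; only the timestamp generation actually runs here, the rclone lines are commented as a usage sketch.

```shell
# Generate one timestamp per run (same format as the Linux example above).
STAMP=$(date "+%Y-%m-%d-%H-%M-%S")
echo "$STAMP"

# Hypothetical remote and paths -- adjust to your own setup:
# rclone sync /data remote:current --backup-dir "remote:backups/$STAMP"
#
# Pruning old runs later is then just a prefix match on the dir names, e.g.:
# rclone lsd remote:backups                         # inspect the dated dirs first
# rclone purge "remote:backups/2016-01-31-00-00-00" # then purge run by run
```

One dated dir per sync keeps the main tree 1:1 with the source while every replaced or deleted file survives under its run’s timestamp.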

I should look at backup-dir instead of trash. Thanks

I initially wanted to use backup-dir also, but noticed that one of the two clouds I use, B2, doesn’t support move or copy. Boo. So that means rclone would have to re-upload anything it wanted to “delete”.

Sounds awesome, but B2 doesn’t do copy or move. Maybe that’s why they’re so cheap, but this doesn’t seem complicated for them to implement.

Thing is, you don’t really need it with B2, since they do versioning if you want. Just delete and keep the old versions.

B2 also charges by the GB, so it probably wouldn’t be a good idea to keep all the versions from each sync. Also, if you just rename or move files/folders around, they will be re-uploaded (as they would with copy or sync anyway) and the copies existing on the remote will be moved to that sync’s backup-dir (instead of the trash). This method is really good when you have lots of disk space (or “unlimited”…) and a decent broadband connection.

I was loving this with rsync as well: it scales as far as your filesystem scales, it doesn’t take much CPU/RAM on either side, you always have a folder that is 1:1 with the original (and therefore easy to check by any method, from total size to checksums), it is clear what changed each time, it is easy to prune old versions (rm -rf backupsomewhere/2016-*), etc.
I’m really gutted that rclone doesn’t work with ACD anymore, it was all so good…

I’m OK with more stuff stored on B2 since I’m paying under $1/month as it is. What I don’t like is how deleted items would have to be re-uploaded. For a few hundred MB this is OK, but if it gets into the GBs that kind of sucks.

I may just stick with my 30 day lifecycle expiration. The question is how to retrieve large numbers of those; I haven’t looked into that yet.

You can use rclone to pull down versions. I’d also suggest periodically creating a snapshot in the GUI.

I was considering snapshots. Quick, cheap insurance.

How are versions pulled down? I haven’t experimented with that yet; are they visible with “rclone ls”?

Snapshots become their own “bucket” within rclone, but in the GUI they appear as a snapshot under an existing bucket. So an rclone lsl on the remote: will show a new bucket, and in that bucket is a zip file with all your data from that point in time.

I wish they were a virtual directory of a point-in-time snapshot. I’ve only just begun looking at them, TBH. I’m going to guess that they increase your storage usage by the size of the zip file, but I don’t know that for sure yet. I’ll let you know when my full snapshot finishes getting created. :slight_smile:

Meh. They do come out of your storage costs, effectively doubling them. Eh, I’ll stick with individual file versions.

I guess by file versions I was referring to the lifecycle settings in the GUI. So if I delete or change a file locally, sync it, then the remote will show the previous version because I have it set to keep all previous versions for 30 days.

Those work okay. If you enable version listings on the rclone command line (--b2-versions), you’ll see them like any other file, postfixed with a date/time stamp.
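A quick sketch of working with those version names. The suffix format (`-vYYYY-MM-DD-HHMMSS-mmm`) is my reading of rclone’s B2 docs, and the bucket/file names here are made up, so double-check against your own listings.

```shell
# Strip rclone's B2 version suffix (e.g. "-v2017-06-01-120000-000") from a
# listed file name to recover the original name; format assumed from the docs.
strip_version() {
    printf '%s\n' "$1" | sed -E 's/-v[0-9]{4}-[0-9]{2}-[0-9]{2}-[0-9]{6}-[0-9]{3}//'
}

strip_version "report-v2017-06-01-120000-000.txt"   # -> report.txt

# Retrieving a specific version is then a normal copy with the flag on, e.g.:
# rclone copy --b2-versions b2remote:bucket/report-v2017-06-01-120000-000.txt /tmp
```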