Any thoughts on the following for local backups? It's not particularly useful for cloud solutions, as they'll have their own version of this already baked into their architecture.
It could be integrated into the compress remote...
As far as I can tell, RAR (Windows only; edit: apparently not! I may have to weigh time spent messing around with the legacy code against just buying the solution....) is the only software aside from this that bubble-wraps data without creating additional files the way PAR2 does. I really like the fact that it has beefy error correction.
Outside of rclone you could quite easily set it up to protect key files, and rclone would just do its thing with the protected files (a sketch of that workflow is below).
It would be nice if every copy tool could handle PAR2.
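A minimal sketch of the protect-then-sync workflow, assuming the par2 CLI (par2cmdline) and rclone are on the PATH; the paths, remote name, and 10% redundancy figure are all illustrative:

```python
import glob
import subprocess

src = "/data/important"  # hypothetical directory of key files
files = [f for f in glob.glob(f"{src}/*") if not f.endswith(".par2")]

# 1. bubble-wrap: create PAR2 recovery files (~10% redundancy) next to the data
subprocess.run(["par2", "create", "-r10", f"{src}/protect.par2", *files], check=True)

# 2. rclone just does its thing with the protected tree (data plus .par2 files)
subprocess.run(["rclone", "copy", src, "remote:backup/important"], check=True)

# later, after restoring from the backup, verify and repair in place:
# subprocess.run(["par2", "repair", f"{src}/protect.par2"], check=True)
```

The trade-off versus RAR recovery records is visible right away: PAR2 protects the files in place but does create the extra .par2 files.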
Over the years, that has been discussed many times, both in the forum and in GitHub issues.
You are most welcome to implement it or to sponsor the development.
Well, it depends on what you mean by backup?
Speaking for myself, I do not use rclone locally.
rclone is not a backup program, nor does it pretend to be one.
I do not think it would be really useful beyond being an interesting experiment. For a local backup, the most likely corruption is one or more unreadable sectors, in multiples of 512 B or 4 KB. That is a very different error pattern from a scratched DVD, where such codes do an amazing job.
It would not be practical to have a code protecting against the loss of that much data. Yes, data can be interleaved in a clever way (see the sketch below), but that requires creating an entirely new filesystem. Otherwise protection can only be partially achieved, for large enough files, with small files and metadata left unprotected.
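To make the interleaving point concrete, here is a rough sketch of the trick rsbep-style tools use, assuming the Python reedsolo package (the codeword geometry and interleave depth are illustrative): striping codewords byte-by-byte across the medium turns one contiguous 4 KiB loss into a few correctable errors per codeword.

```python
from reedsolo import RSCodec

DATA_PER_CW = 223   # data bytes per RS(255, 223) codeword
PARITY = 32         # parity bytes -> up to 16 corrupted bytes correctable per codeword
DEPTH = 512         # interleave depth: 512 codewords striped together

rsc = RSCodec(PARITY)
rows = [rsc.encode(bytes([i % 256] * DATA_PER_CW)) for i in range(DEPTH)]

# write column-by-column: byte 0 of every codeword, then byte 1, and so on
stream = bytearray()
for col in range(DATA_PER_CW + PARITY):
    for row in rows:
        stream.append(row[col])

# simulate one unreadable 4 KiB region by flipping a contiguous run of bytes
for p in range(20480, 20480 + 4096):
    stream[p] ^= 0xFF

# de-interleave and decode: the burst lands on 4096 / DEPTH = 8 bytes of each
# codeword, well inside the 16-byte correction budget
for i in range(DEPTH):
    codeword = bytes(stream[i::DEPTH])
    msg, _, _ = rsc.decode(codeword)  # reedsolo >= 1.0 returns a 3-tuple
    assert msg == bytes([i % 256] * DATA_PER_CW)
```

Without the interleave, the same 4 KiB burst would hit about 17 consecutive codewords head-on and overwhelm them.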
Having a 3-2-1 backup strategy is still the best protection against data loss. If the data is important, it is worth investing in a BTRFS- or ZFS-based NAS for local storage. They protect you from data corruption and physical disk failure.
Thanks. It would only be an additional layer of defence (on top of other good backup practices); it shouldn't be the only thing done to protect data. It's a good point that unreadable sectors (blocks of contiguous errors) are more likely to be the issue than a single bit flip; I didn't think about that.
More reading shows that it's not always effective, in case someone else is thinking of the same thing:
I assume then that the rsbep code might be less effective; I don't know the details of how the headers work in rsbep.
Compared to a local HDD, a BTRFS or ZFS NAS can provide error correction only at the cost of an additional disk (otherwise it is only error detection for data on a single disk), additional hardware, additional failure modes, and additional maintenance effort, and it is still vulnerable to a common-cause failure: the NAS itself going down with its disks and the backup with them. It does completely mitigate the common-cause failure mode of the PC going down and taking both the primary copy and the local backup with it.
Personal opinion: a NAS is great as a backup if you were already using a NAS for something else, since the backup capability then comes almost for free. I'm not sure about all the trade-offs if a NAS were introduced solely as a backup solution, without any other need.
I do think that rsync with a local --backup-dir is a (lightweight/partial) backup solution (a sketch is below); you just need to complement it with something else to deal with complete PC loss. My strategy is closer to (2.5)-(2.5)-2, trading off cost against complexity of restoration in case of complete PC loss.
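For reference, here is the pattern I mean, wrapped in Python purely for illustration (the paths are made up): each run updates a mirror while keeping anything it would overwrite or delete in a dated side directory.

```python
import datetime
import subprocess

stamp = datetime.date.today().isoformat()
subprocess.run(
    [
        "rsync", "-a", "--delete",
        "--backup", f"--backup-dir=/backup/changed-{stamp}",
        "/home/me/", "/backup/current/",
    ],
    check=True,
)
```

The dated directories give cheap point-in-time recovery of changed files, but everything still lives on the same machine, hence the need for the extra off-PC leg.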
BTW, you can use one disk. At least with BTRFS and ZFS you can store data multiple times on the same disk (it is the default for metadata), bringing correction and self-healing to a one-disk setup.
Erasure coding is a fascinating subject, used in real-life storage solutions like Ceph or HDFS. If somebody brings an EC overlay to rclone then for sure it will have some use.
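As a toy illustration of the idea, here is the simplest possible erasure code, single XOR parity as in RAID-5 (real systems like Ceph use Reed-Solomon codes that tolerate several missing shards at once):

```python
from functools import reduce

def xor_parity(shards: list[bytes]) -> bytes:
    # XOR equal-length shards together byte by byte
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shards)

data_shards = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data_shards)

# lose any one shard and rebuild it from the survivors plus the parity
rebuilt = xor_parity([data_shards[0], data_shards[2], parity])
assert rebuilt == data_shards[1]
```

An EC overlay for rclone would do the same kind of thing at the file level, spreading k data pieces and m parity pieces across remotes.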