Have you read `rclone check`?

The `check` command performs an MD5 or SHA-1 check together with modified time and file size comparisons. See https://rclone.org/overview/ for which hashes Backblaze and S3 support. It does NOT modify the source or the destination.
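For example, a minimal sketch (the remote name `s3:` and the paths are placeholders for your own configuration):

```bash
# Compare local files against the remote; reports differences, modifies nothing
rclone check /home/user/photos s3:my-bucket/photos

# If the backend does not support a common hash, fall back to size comparison
rclone check /home/user/photos s3:my-bucket/photos --size-only
```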
Secondly, to perform automated syncing, you can use `rclone sync` together with some permutation of the `--size-only`, `--checksum`, `--ignore-checksum`, `--ignore-existing`, `--ignore-size`, `--ignore-times`, `--update`, and `--no-update-modtime` flags. You will need to read the forum, the GitHub issues, and the docs while performing some experiments to understand the exact behaviour of these flags; see the sketch below.
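As a starting point (remote and paths are again placeholders):

```bash
# Preview what would change without touching anything - always safe to run first
rclone sync /home/user/photos s3:my-bucket/photos --checksum --dry-run

# Real run: compare by checksum instead of modtime + size
rclone sync /home/user/photos s3:my-bucket/photos --checksum --verbose
```

`--dry-run` is your friend while experimenting, since `sync` makes the destination identical to the source and will delete extra files on the destination.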
As for the topic of bit-rot: by "guarantee", you are speaking about monetary and commercial insurance, which is covered in https://aws.amazon.com/s3/sla/ . Backblaze has an equivalent SLA. However, unless you have large amounts of data (10 TB, 50 TB, 1 PB, etc.) with mission-critical applications, or you have to provide SLAs to your own customers, you probably do not need to build policies and systems synchronized with the S3 SLA. Even a very serious server admin would only rely on the SLA to a certain degree and then build other forms of redundancy (e.g. replication to a different bucket in the same region, a different bucket in a different region, duplication to glacial storage, etc.). For personal photos, small business applications, and the like, you can just double or triple backup as you have done now (S3, Backblaze, GDrive, local NAS). It would be far too complicated to mirror your system to the intricate details of what the standard SLA really means, and it really depends on what kind of loss you suffer and what recourse you want.
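If you want to go one step further, the double/triple backup idea is easy to script. A minimal sketch (the remote names `s3:`, `b2:`, and `gdrive:` are placeholders for whatever you have configured):

```bash
#!/usr/bin/env bash
# Push one source tree to several independent remotes.
set -euo pipefail

SRC="/home/user/photos"  # placeholder path

for REMOTE in "s3:my-bucket/photos" "b2:my-bucket/photos" "gdrive:photos"; do
    rclone sync "$SRC" "$REMOTE" --checksum --verbose
done
```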
S3 and Google "may" be better, considering they have larger datacenters and more engineers serving enterprise-class customers who demand custom SLAs, so their infrastructure quality spills over to normal/standard users. But this still doesn't mean you should agonize over who is better: redundancy and holistic thinking are your best line of defence. You probably face a higher chance of user-level accidents than machine-level issues (e.g. an accidental `rm -rf`, which has happened to the best of us).
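One concrete guard against such accidents is rclone's `--backup-dir` flag, which moves files that would be deleted or overwritten on the destination into a separate directory instead of destroying them. A sketch (the attic path is an assumption; it must live on the same remote as the destination):

```bash
# Files deleted or overwritten on the destination are moved into a dated
# "attic" directory rather than being removed outright.
rclone sync /home/user/photos s3:my-bucket/photos \
    --backup-dir "s3:my-bucket/attic/$(date +%Y-%m-%d)" --verbose
```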
Hope this helps. You will have to ask more specific questions about `rclone sync` issues after you have experimented with the flags and options on a small set of files. You may need to write some Python or bash scripts or cron jobs to help - rclone alone may not be enough; see the cron sketch below.
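For instance, a minimal crontab entry (paths, remote, and schedule are placeholders):

```bash
# m  h  dom mon dow  command
# Checksum-verified sync every night at 02:30, with output logged to a file.
30 2 * * * /usr/bin/rclone sync /home/user/photos s3:my-bucket/photos --checksum --log-file /var/log/rclone-photos.log
```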