I have a TrueNAS SCALE file server with a storage pool offering up to 103 TB of space, 38 TB of which are currently in use. I am looking for the cheapest way to keep my data safe in the event of a catastrophic failure, and from the research I have done so far, the cheapest option seems to be the AWS S3 Glacier Deep Archive storage class. My pool runs RAIDZ2 (the ZFS equivalent of RAID 6) across 8 disks. That redundancy seems solid enough that I can get away with backing up to a cheaper tier, accepting that restores will be slower, more involved, and more expensive. From what I have read, objects in Deep Archive must be kept for at least 180 days to avoid early-deletion fees.
With the above in mind, I am wondering if there is an rclone configuration, perhaps working in conjunction with an S3 bucket lifecycle policy, that can do the following; a rough sketch of what I have in mind follows the list.
- Upload new files to S3 Deep Archive.
- Upload changed files as new versions rather than overwriting the copies already in the archive.
- Delete non-current versions from the archive only once the 180-day minimum storage period has passed.
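From what I can tell, the second and third points map onto S3 object versioning rather than anything rclone has to manage itself: with versioning enabled on the bucket, an overwrite or delete just demotes the old copy to a non-current version, and a lifecycle rule can expire those versions after 180 days. A minimal sketch using the AWS CLI; the bucket name is a placeholder:

```sh
# Hypothetical bucket name; substitute your own.
BUCKET=my-truenas-backup

# Enable object versioning so overwrites and deletes become
# non-current versions instead of destroying data.
aws s3api put-bucket-versioning \
    --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled

# Lifecycle rule: permanently remove versions only once they have
# been non-current for 180 days, which also clears the Deep Archive
# minimum storage duration (so no early-deletion fees).
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-noncurrent-after-180-days",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 180 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
    --bucket "$BUCKET" \
    --lifecycle-configuration file://lifecycle.json
```

Since a version can only become non-current at or after the time it was uploaded, waiting 180 days from the moment it becomes non-current always clears the Deep Archive minimum storage duration as well.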
I am not worried about per-request charges, as I plan on using the --size-only flag to minimize the number of API calls rclone makes to S3.
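For the rclone side, something like the following is what I have in mind: the storage class is set on the remote so every upload lands directly in Deep Archive, and the sync itself stays a plain `rclone sync` with --size-only. The remote name, region, source path, and bucket are all placeholders:

```sh
# Create the remote once; storage_class makes every upload go
# straight to Deep Archive. Remote name and region are placeholders.
rclone config create s3deep s3 \
    provider=AWS env_auth=true region=us-east-1 \
    storage_class=DEEP_ARCHIVE

# The sync itself: --size-only compares by size alone, avoiding the
# per-object modtime HEAD requests, and --fast-list reduces LIST
# calls on large directory trees. Source path and bucket are
# placeholders.
rclone sync /mnt/tank/data s3deep:my-truenas-backup \
    --size-only \
    --fast-list
```

With versioning on, any file the sync would overwrite or delete just becomes a non-current version, so the lifecycle rule above would be the only deletion path and nothing should leave the archive before 180 days.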
Any insight into this is much appreciated.