Best configuration for AWS S3 Deep Archive backups

I have a TrueNAS Scale file server with a storage pool offering up to 103 TB of space, 38 TB of which is currently in use. I am looking for the cheapest way to keep my data safe in the event of a catastrophic failure. From the research I have done so far, the cheapest option appears to be AWS S3 Deep Archive. My storage pool runs RAIDZ2 (the ZFS equivalent of RAID 6) across 8 disks. That redundancy seems secure enough that I can get away with backing up to a cheaper location, sacrificing both the cost and the ease of restoration. From what I have read about S3 Deep Archive, objects must be stored for at least 180 days to avoid an early-deletion fee.

With the above in mind, I am wondering if there is an rclone configuration, perhaps working in conjunction with an S3 bucket lifecycle policy, that can do the following:

  1. Upload new files to S3 Deep Archive.
  2. Upload new versions of files as different files to avoid overwriting files already in the archive.
  3. Only delete non-current versions of files from the archive once the 180-day period has passed.

I am not worried about per-request charges, as I plan on using the --size-only flag to minimize the number of requests made to S3.
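As a rough sketch of what I have in mind (the remote name `s3-archive`, bucket `my-backup`, and local path are placeholders):

```shell
# Create an S3 remote that writes objects directly into the
# Deep Archive storage class, using credentials from the environment.
rclone config create s3-archive s3 \
    provider AWS \
    region us-east-1 \
    storage_class DEEP_ARCHIVE \
    env_auth true

# Copy new and changed files; --size-only compares sizes alone,
# avoiding extra per-object checks and keeping request counts down.
rclone copy /mnt/pool/data s3-archive:my-backup --size-only --progress
```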

Any insight into this is much appreciated.

hello and welcome to the forum,

  1. that should be a simple rclone copy, perhaps with --immutable
  2. aws s3 supports bucket versioning.
  3. lifecycle should be able to do that.
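a rough sketch of points 1 and 2, assuming a remote named `s3-archive` and a bucket named `my-backup`:

```shell
# enable versioning so an upload of a changed file becomes a new
# version of the object rather than destroying the old copy
# (run once per bucket)
aws s3api put-bucket-versioning \
    --bucket my-backup \
    --versioning-configuration Status=Enabled

# a plain copy uploads new and changed files; with versioning on,
# the previous copies survive as noncurrent versions. add --immutable
# only if you want rclone to fail on changed files instead of
# uploading them as new versions.
rclone copy /mnt/pool/data s3-archive:my-backup
```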

Do you know what kind of lifecycle configuration I would need to accomplish this? If I understand correctly, rclone copy with --immutable will not touch anything that already exists in the cloud. If I create a lifecycle rule that marks every file as noncurrent after X days, can rclone mark those files as current again if they still exist locally?

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html
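following that page, a rule that expires only noncurrent versions after 180 days might look like this (bucket name is a placeholder):

```shell
# NoncurrentVersionExpiration acts only on superseded versions,
# never on the current object, so files that still exist locally
# and have not changed are never touched by this rule.
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "expire-noncurrent-after-180-days",
        "Status": "Enabled",
        "Filter": {},
        "NoncurrentVersionExpiration": { "NoncurrentDays": 180 }
      }]
    }'
```

note that a version only becomes noncurrent when a newer version is uploaded (or the object is deleted), so the 180-day clock starts from replacement, not from upload.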

So, I understand S3 lifecycle configurations, but I don't want a lifecycle rule to expire objects that still exist locally. Is that at all possible? Does rclone make sure that S3 knows which version is current?

s3 is always aware of whatever files are stored inside it.

aws s3 knows nothing about your local file system.
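since the comparison has to happen on the rclone side, one way to list objects that exist in the bucket but no longer on disk (remote name and paths are placeholders):

```shell
# writes remote-only paths to remote-only.txt without changing
# anything; --size-only matches the comparison used for the backup
rclone check /mnt/pool/data s3-archive:my-backup --size-only \
    --missing-on-src remote-only.txt
```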

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.