Block-level file sync or chunking with crypt backend

Does the crypt backend have support for chunking files?
All the affordable (and some expensive) cloud backup solutions do not offer block-level chunking of files. If you change one letter in a 200 GB text file (a ridiculous example, I know), the whole file must be uploaded again.

I want to back up close to 8 TB of data (and growing), so S3 is NOT cheap. Also, every affordable solution has something that keeps it from being useful. Backblaze unlimited SERIOUSLY caps your upload after a couple of terabytes, and other providers only work with their own poorly performing, feature-locked tools.

If the crypt backend does not chunk large files for easier syncing, what would you guys suggest as a good solution? Are there any plans to implement chunking? Is there an rclone-compatible provider I missed that doesn't cost $400 a year or more?

Thanks, all.

there are many good s3 providers, each with a different take on the s3 concept.
and that rclone works with most/all of them gives a lot of freedom to control performance/pricing.
in my case,
--- veeam to create block-based snapshots.
--- rclone copy --immutable to upload those snapshots (see the sketch below).

recent backups - wasabi - no charge for api calls or downloads.
older backups - aws s3 glacier deep archive, $0.99/TB/month.
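
a minimal sketch of that upload step, assuming a wasabi remote named wasabi-backups: and a local snapshot folder (both names are made up):

```
# copy new snapshot files to the bucket.
# --immutable refuses to modify a file that already exists at the
# destination, so a finished snapshot can never be silently changed.
rclone copy /backups/veeam wasabi-backups:veeam --immutable --progress
```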

as for encryption, for backups i do not use rclone crypt.
tho rclone crypt is rock solid and i do use it for streaming,
for backups we use aws SSE-C encryption, which also works on wasabi.
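
a sketch of what that looks like in the rclone config file. the remote name, keys, and endpoint are placeholders:

```
# ~/.config/rclone/rclone.conf
[wasabi-backups]
type = s3
provider = Wasabi
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = s3.wasabisys.com
# SSE-C: the server encrypts each object with a key you supply
sse_customer_algorithm = AES256
sse_customer_key = YOUR_SECRET_ENCRYPTION_KEY
```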

I also was not too sure about the pricing on S3 Glacier. How do I find out how much it would cost in the first month to upload 8 TB? I find the storage pricing pretty darn good; it is their other fees that worry me.

Lots to unpack here.

No, but you can use the fixed-size chunker backend. Even that doesn't deduplicate, though. What you really want is a tool like restic (which can interface with rclone), Kopia, Duplicacy, or the like.
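
For reference, a minimal chunker config sketch. It wraps an existing remote (here the hypothetical s3remote:) and transparently splits large files into fixed-size parts on upload; it does not deduplicate between versions of a file:

```
[chunked]
type = chunker
remote = s3remote:bucket/backups
chunk_size = 100M
```

You then point rclone at chunked: instead of s3remote:, and large files get stored as numbered part files.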

I have no affiliation with Backblaze, but their executives have stated many, many, many times, in many ways, that there are no caps. Period!

Whether their tools and processes can handle that well, let alone restore, may be a different question, but unless representatives (of a public company, and in official filings) are lying, it isn't a cap.

Most cloud providers offer dumb storage. That is it. It is the client side that does chunking, etc.

I think you greatly underestimate the cost (and benefits) of cloud storage. B2 at $5/TB/month is probably the closest to reasonable cost, all things considered.

Honestly, have you considered just buying an external hard drive?

8 TB × $0.99/TB/month = $7.92 per month; let's call it $8.00.

I will look into this; I assume it is in the docs? Actually, only a few things are large files that are updated often: my encrypted Time Machine backups from two Macs, and a large encrypted drive image. Neither needs deduplication.

Point taken. It is their tool.

I was imprecise. I meant that most of them do not offer the functionality in their API, which has been stated as one of the reasons rclone does not support block-level file syncing.

Yes, actually I intend to do so, but I would like cloud backup as well, just in case.

Come to think of it, the bulk of my 8 TB, other than what I mentioned above, is media that will not change or be needed unless something drastic happens. I will have to work on segmenting and organizing my files better so that I can spread them across multiple services to save money.

Thank you for your response. I have some thinking to do.

It sounds like what you want is restic. This will allow you to do a backup (via rclone if you wish, but it supports S3 directly). It does encryption and block deduplication, so you can change that one byte in the 200 GB file and it will just upload one new block (a few MB).
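
A minimal sketch of that workflow, assuming an existing rclone remote named cloud: (hypothetical):

```
# create an encrypted repository behind the rclone remote
restic -r rclone:cloud:restic-repo init

# first run uploads everything; later runs upload only changed blocks
restic -r rclone:cloud:restic-repo backup /data
```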

I use restic with the rclone backend for backing up my laptop. I use rclone alone for backing up media (pictures, videos etc).
