This post exists to outline my current backup setup in case anyone wants to adapt it to their own needs, and to highlight the rclone features that made such a solution possible.
So, I like zpaq (well, zpaqfranz now) for my backups. It serves my purposes very well, and I trust it quite a bit (especially since I generate .par2 files to go along with each archive version). I use rclone to upload the files, but it can't resume uploads, and the first archive version was over 200 GB. Given my Internet connection, that's about a four-day upload, and it kept failing for various reasons. This is the solution I came up with:
Inside my local backup directory, I have two subdirectories named "e" and "m". I use the "chunker" backend layered on top of the "crypt" backend layered on top of the "local" backend pointed at the "e" directory, which ends up containing the (encrypted) chunks. (I also have the chunker's hash type set to sha1all for paranoia's sake.) I absolutely love how I can create a stack that does exactly what I need for a particular situation like this.
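For anyone who wants to replicate that stack, a minimal sketch of the setup might look like this. The remote names, path, and chunk size are just my placeholders, and the crypt passwords still have to be set afterwards (interactively via "rclone config"):

```bash
# Hypothetical recreation of the local stack: crypt over local, chunker over crypt.
rclone config create e-crypt crypt remote /mnt/backup/e
rclone config create e-chunk chunker remote e-crypt: chunk_size 100M hash_type sha1all
```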
The backup script uses "rclone mount" with vfs-cache-mode set to "writes" to mount that chunker onto the directory "m". Two subdirectories have been created inside: "d" and "i".
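The mount itself is nothing exotic; something along these lines, with the remote name and mount point as stand-ins:

```bash
# Mount the chunker stack onto "m"; the write cache is what lets zpaq work on it like a normal disk.
rclone mount e-chunk: /mnt/backup/m --vfs-cache-mode writes --daemon
mkdir -p /mnt/backup/m/d /mnt/backup/m/i   # harmless if the directories already exist
```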
zpaq is then run with an external index in "m/i" and the actual zpaq archive in "m/d", using a wildcard in the archive name so that it generates a new file for each version rather than altering any of the existing archives. Keeping the index and the data in separate directories is useful for exploiting an rclone feature later.
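Concretely, the invocation looks roughly like the following; the source path and naming scheme are illustrative, and the parts that matter are the wildcard in the archive name and the -index option:

```bash
# Each run appends a new numbered part under m/d/ and only updates the detached index in m/i/.
zpaq add "/mnt/backup/m/d/backup_????.zpaq" /srv/data \
    -index /mnt/backup/m/i/backup_0000.zpaq
```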
After this, the script removes all par2 files from "m/i" (because they are invalid for the updated index) before walking through "m" generating par2 files for any zpaq file that does not already have them.
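That housekeeping is a few lines of shell; the paths (and whatever par2 options you prefer) are placeholders here:

```bash
# The index was just rewritten, so its old parity files no longer match anything.
rm -f /mnt/backup/m/i/*.par2
# Create parity only for zpaq files (new archive parts and the fresh index) that lack it.
find /mnt/backup/m -name '*.zpaq' -print0 | while IFS= read -r -d '' f; do
    [ -e "$f.par2" ] || par2 create "$f.par2" "$f"
done
```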
The script then calls for a sync and waits 60 seconds to let the backends finish before unmounting "m". (FEATURE REQUEST: It would be better here if I could just pass rclone an argument that tells rclone mount to finish all operations waiting in the cache before actually unmounting.)
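The teardown in the script is nothing more than:

```bash
sync                           # flush filesystem buffers
sleep 60                       # empirical safety margin for the vfs cache to drain (see the caveat below)
fusermount -u /mnt/backup/m    # unmount the chunker stack
```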
At this point, I have my backup files chunked into reasonable sizes and encrypted in "e". The script then uses rclone copy to put the zpaq archives (formerly "m/d/") into (a similarly encrypted location in) the remote storage before using rclone sync to copy the index (formerly "m/i/") into the remote storage.
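In command form, the two stages are roughly as follows. The remote name and destination paths are stand-ins for my encrypted cloud location, and the exact local source paths depend on how the crypt layer names the directories under "e", so treat them as illustrative:

```bash
# Data stage: archive parts only ever get added, so "copy" never deletes anything remotely.
rclone copy /mnt/backup/e/d remote:backup/d
# Index stage: "sync" replaces old index versions and clears out their now-stale .par2 files.
rclone sync /mnt/backup/e/i remote:backup/i
```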
The cool thing about the two-stage upload is that I can delete some or all of the local data files anytime I start running low on space. So long as the detached index is there, I can still add new versions whenever I want and continue to benefit from the global deduplication. Since the new versions upload through copy, those local deletions are never propagated, so all the data stays on the remote. Since the index uploads through sync, each upload replaces the old versions of the index and gets rid of their now-invalid par2 files.
The ideal addition would be a storage system where files can be made temporarily read-only as they are uploaded, so an active attacker couldn't delete your backups. Still, this is adequate for my purposes, and I wouldn't be too surprised if it were enough for many others as well.
My biggest concern, of course, is that 60 seconds might not be long enough for the backends to finish. In my testing, it has been more than enough every time, but the machine my (Rockstor) NAS runs on isn't exactly old, so I can't say that applies to everyone.
Admittedly, this is quite a bit of indirection just to get partial uploads working (the local chunker would not be needed if interrupted uploads could be resumed, but even putting the chunker on the remote backend does not make partial uploads work: a retry simply starts uploading the file again from the first part...), but I'm honestly pretty impressed with how well it works as a workaround. I get a local copy I can trim as needed and a stable remote copy that can finish uploading even when the Internet (or power, unfortunately) is unreliable.
This is almost certainly not optimal, and I would welcome constructive criticism, but I searched this forum for a good while for a way to resume the upload of a very large file and never found one. Once I found a workaround that consistently functioned (testing backups is important), I figured I should share.