If I am understanding the question correctly, then yes. You can rclone serve webdav mycrypt: where mycrypt wraps another webdav remote.
But there are some issues to be aware of if you do not want the data to ever sit locally. Notably, you have to run with --vfs-cache-mode off, which means that if an upload fails, you are out of luck: there is no way for rclone to retry it. And for reading there is no caching to smooth things over, so if there is a bottleneck with your home WebDAV, it is going to be painful.
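For reference, a minimal sketch of that setup (the remote names and port here are placeholders, not from the thread):

```shell
# homedav: = your existing WebDAV remote; mycrypt: = a crypt remote wrapping it.
# Serve the decrypted view over WebDAV with no local caching:
rclone serve webdav mycrypt: --addr :8080 --vfs-cache-mode off
```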
Also note that this is not really "end-to-end" encryption, if that is your goal: even though nothing is cached, the keys exist in memory on the server, and the data is decrypted there as it moves from the (presumed) HTTPS layer of the upstream WebDAV, through rclone crypt, to (presumably) another HTTPS layer on the way out.
I do something like this myself, with two differences. (a) I run rclone serve webdav <home> so I can access everything (basically just a simple WebDAV server), and separately I have an rclone mount of my crypt space. I could serve the crypt directly, but this keeps things simpler and also gives me filesystem access when I am SSHed in. And (b) I do have full caching on. This possibly exposes my content, but I am already mounting it, so it is visible there anyway. I think it is worth the small risk.
If I were to use a cache, I would allow at most 10 GB, but then I want to be sure the cache actually stays within 10 GB and not a bit more. I also want to be able to upload an 800 GB file, for example: I upload it through WebDAV, rclone encrypts it, and rclone then moves the encrypted file to the WebDAV remote. The important part is that it doesn't wait to encrypt/upload until the whole file has arrived locally; it needs to encrypt and upload live while the rest of the file is still uploading.
I am not sure I can give you a definitive answer, but my understanding, confirmed in this thread by the much more knowledgeable jojothehumanmonkey, is that it is a soft limit. rclone keeps the cached files under that size, but if a single file is larger than the cache (and/or multiple files are open?) it can grow beyond it.
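As a hedged sketch, the relevant flags look something like this (the remote name and mount point are examples; check the rclone docs for exact semantics):

```shell
rclone mount mycrypt: /mnt/crypt \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --vfs-cache-max-age 1h
# --vfs-cache-max-size is a soft limit: a single open file larger than
# 10G (such as an in-flight 800 GB upload) can still push the cache past it.
```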
Honestly, 800 GB just sounds like a bad idea over WebDAV. You will need a rock-solid connection with no weak links, and you must be certain you have set proper timeouts, as a hiccup can easily break the transfer before it finishes uploading.
You may be able to put rclone chunker in front of it, though I am not sure that will solve the problem; I don't recall the retry logic.
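If you want to try it, a chunker remote is just layered on in rclone.conf. A sketch, with placeholder names and an example 2 GB chunk size:

```
[mywebdav]
type = webdav
url = https://example.com/dav

[mycrypt]
type = crypt
remote = mywebdav:encrypted
# password set via `rclone config`

[mychunker]
type = chunker
remote = mycrypt:
chunk_size = 2G
```

You would then upload to mychunker:, and each 2 GB chunk would pass through crypt and land as its own encrypted object.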
The client I am using is FileZilla Pro. Most of the time only one client is connected, but sometimes more. As for the WebDAV server: it is created by a tool called TGFS (GitHub: TheodoreKrypton/tgfs, "Telegram becomes a WebDAV server"). I can't really change much about that server, so it is mostly a regular WebDAV server that you can't manage; you just get admin credentials. If putting an rclone chunker in front of the rclone crypt would work, that's fine for me.
The crypt encrypts data on the fly (chunk-by-chunk).
The S3 backend uses multipart upload for anything above --s3-upload-cutoff (default 200 MiB).
Overall it looks something like this:
local disk → read chunk → encrypt chunk → upload part → repeat
Chunks are still buffered in RAM, but memory usage should stay reasonable (a few chunks' worth) even for very large files.
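A hedged sketch of what that looks like in practice (the remote name and sizes are examples, not recommendations):

```shell
# s3crypt: = a crypt remote wrapping an S3 remote
rclone copy /data/bigfile.bin s3crypt: \
  --s3-upload-cutoff 200M \
  --s3-chunk-size 64M \
  --s3-upload-concurrency 4
# rclone reads, encrypts, and uploads part by part; roughly
# chunk-size x concurrency worth of data is buffered in RAM at a time.
```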
I have never paid attention to the no-buffering behaviour myself, as I prefer to use caches (storage is cheap), so, the proof being in the pudding, I am curious about the results of your tests too.
Thx, I will let you know. But first I need to find out how I can convert the WebDAV server to an S3 server. It's not that I don't want to use a cache, but it should be capped at 50 GB or so, and hard capped so it doesn't go over. If I want to upload files of 800 GB, for example, I am OK with waiting a few days or weeks for the upload to finish.
If you absolutely must not cache the file locally, you can manually use dd to create smaller slices and upload them one by one, for example:
dd if=/path/to/file bs=1M status=progress count=x skip=z | rclone rcat remote:file.ext.part1
Then for the next part, increment z by x and change file.ext.part1 to file.ext.part2.
Make a human error with x, z, or the filename and you have a broken file, so this method has its cost, even if it's not in the form of temporary disk space.
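Those steps can be scripted to cut down the room for human error. A minimal sketch, using small sizes and local part files for illustration; in practice you would make PART_MB far larger and pipe each slice to rclone rcat instead of writing it to disk:

```shell
FILE=bigfile.bin
PART_MB=4                       # example part size in MiB
# stand-in for the real big file:
dd if=/dev/urandom of="$FILE" bs=1M count=10 2>/dev/null

part=1
skip=0
while :; do
  out="$FILE.part$part"
  dd if="$FILE" of="$out" bs=1M count="$PART_MB" skip="$skip" 2>/dev/null
  # an empty part means we are past the end of the file
  [ -s "$out" ] || { rm -f "$out"; break; }
  # the real upload would be: dd ... | rclone rcat remote:"$out"
  part=$((part + 1))
  skip=$((skip + PART_MB))
done
```

Reassembly is just `cat file.part1 file.part2 ... > file`, so keeping a checksum of the original is worth it.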
You can get a free 20 GB Mega account to start. This cloud provider has end-to-end encryption, and it also provides a client called megacmd, which can share the encrypted files over a WebDAV server that it creates.
Point everybody at the WebDAV server created by megacmd.
For S3 testing, IDrive offers a 10 GB 30-day trial.
Perhaps one of these ideas will assist you.
Also, there is a web server called Caddy. It is cross-platform, and it will put an HTTPS (SSL) connection in front of the files you serve with rclone. I have demonstrated this as a proof of concept for another app: rclone serves the WebDAV over plain HTTP, and Caddy handles the TLS encryption.
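A hedged sketch of that pairing (the domain and ports are examples):

```shell
# rclone serves plain-HTTP WebDAV on localhost only:
rclone serve webdav mycrypt: --addr 127.0.0.1:8080 &
# Caddy terminates TLS in front of it (and fetches a certificate automatically):
caddy reverse-proxy --from files.example.com --to 127.0.0.1:8080
```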