Can rclone act as an encrypted WebDAV proxy (WebDAV → live encrypt/decrypt → WebDAV)?

Hi,

I’m wondering if the following setup is possible with rclone.

I have an existing WebDAV storage backend and I would like to add transparent encryption in front of it, but without storing files locally.

The idea would be something like this:

Client
→ WebDAV server (provided by rclone)
→ live encryption/decryption
→ existing WebDAV server (backend storage)

So effectively:

WebDAV client → rclone → encrypted → remote WebDAV

What I want:

  • rclone exposes a WebDAV server for users

  • files uploaded to this WebDAV server are encrypted on-the-fly

  • the encrypted data is streamed directly to the backend WebDAV server

  • no full files are stored locally (only streaming / minimal buffering)

  • downloads are decrypted on-the-fly

So conceptually something like:

incoming WebDAV
→ encrypt stream
→ upload to remote WebDAV

and the reverse for downloads.

Is this possible using the rclone crypt backend combined with rclone serve webdav?

Or would rclone still need to buffer the whole file locally during uploads?

My use case is using a remote WebDAV storage provider while ensuring that all files are stored encrypted on the remote server.

Thanks!

If I am understanding the question correctly, then yes. You can run rclone serve webdav mycrypt:, where mycrypt is a crypt remote wrapping your existing webdav remote.
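A minimal sketch of that layering, assuming hypothetical remote names (backend, mycrypt), a placeholder URL, and placeholder credentials; adjust everything to your own setup:

```shell
# Sketch only: remote names, URL, and credentials are placeholders.
# --obscure tells rclone to obscure the plain-text passwords itself.

# A webdav remote pointing at the existing backend storage:
rclone config create backend webdav --obscure \
    url=https://storage.example.com/dav vendor=other user=admin pass=changeme

# A crypt remote that encrypts everything it writes to backend:encrypted:
rclone config create mycrypt crypt --obscure \
    remote=backend:encrypted password=my-passphrase

# Expose the decrypted view to clients over WebDAV:
rclone serve webdav mycrypt: --addr :8080 --user dav --pass davpass
```

Clients then talk plain WebDAV to port 8080, while everything that reaches the backend remote is encrypted by crypt.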

But there are some issues to be aware of if you do not want data to ever sit locally. Notably, you have to run with the VFS cache disabled, which means that if an upload fails, rclone has no way to retry it: you are out of luck. And for reading, there is no buffering happening, so if there is a bottleneck with your home WebDAV server, it is going to be painful.
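The no-cache mode mentioned above looks something like this (the address is an assumption; the flag is rclone's standard VFS cache switch):

```shell
# Serve the crypt remote with the VFS cache disabled: uploads stream
# straight through to the backend, but failed uploads cannot be retried.
rclone serve webdav mycrypt: --addr :8080 --vfs-cache-mode off
```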

Also note that this is not really "end-to-end" encryption, if that is your goal: even though nothing is cached, the keys exist in memory on the rclone server, and data is decrypted there, passing from the (presumed) HTTPS layer of the backend WebDAV, through rclone crypt, to (presumably) another HTTPS layer toward the client.

I do something like this, but (a) I run rclone serve webdav on my home directory so I can access everything (basically just a simple WebDAV server), and separately I have an rclone mount of my crypt space. I could serve the crypt directly, but I want to keep it simple and also have filesystem access when I am SSHed in. And (b) I do have full caching on. This possibly exposes my content, but I am already mounting it so it is visible there anyway. I think it is worth the small risk.


If I were to use a cache, I would use at most 10 GB, but I want to be sure the cache is then hard-capped at 10 GB and not a bit more. I also want to be able to upload an 800 GB file, for example: upload it through WebDAV, have rclone encrypt it, and have rclone move the encrypted file to the WebDAV remote. The important part is that it must not wait until the whole file has been uploaded locally before encrypting. It needs to encrypt and upload live while the rest of the file is still arriving.

If you have the cache, even if you set it at 10 GB, it will grow for that file. So it won't work.

But uploading a single 800 GB file via WebDAV is perilous, and even more so with your plan. WebDAV has no chunked upload with retry ability.

It could work but I wouldn’t trust it.

Why does the cache keep growing then? Is there a way to prevent that?

I am not sure I can give you a definitive answer, but my understanding, confirmed in this thread by the much more knowledgeable jojothehumanmonkey, is that it is a soft limit. It keeps the set of cached files under that size, but if a single file is larger than the cache (and/or multiple files are open?), it can grow beyond it.

Honestly, 800 GB just sounds like a bad idea via WebDAV. You will need a rock-solid connection with no weak links, and you must be certain you have set proper timeouts, as a hiccup can quickly break the transfer before it finishes.

You may be able to put rclone chunker in front of it though I am not sure if that will solve the problem. I don't recall the retry logic.

really not sure about a lot of details about your setup

about the client:

  • is there only one client, many clients?

about the server:

  • can you ssh into it?
  • can you run sftp server on it?
  • can you run rclone, wireguard, tailscale, vpn on that webdav server?

The client I am using is FileZilla Pro. Most of the time only one client is connected, but sometimes multiple. As for the WebDAV server: it is created by a tool called TGFS ( GitHub - TheodoreKrypton/tgfs: Telegram becomes a WebDAV server · GitHub ). I can't really change much about that server, so it is mostly a regular WebDAV server that you can't manage; you just get admin credentials. If it would work to put rclone chunker in front of rclone crypt, then that's fine for me.

ok, but not sure you answered any of my questions.

I answered as much of your question as possible. If you have more questions, tell me. But I think that what I answered is sufficient.

Does someone know an answer?

Transparent encryption is exactly what rclone crypt remote provides.

What won’t work is your requirement:

The rclone webdav remote does not support StreamUpload, which means the whole file has to be buffered locally. See the remote features overview here:
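One way to check what a given remote advertises is rclone's backend features command, which dumps the remote's Features struct as JSON (the remote name mywebdav below is an assumption):

```shell
# Look for "StreamUpload": true/false in the JSON output.
rclone backend features mywebdav: | grep StreamUpload
```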

What you need is rclone crypt setup mounted locally using rclone mount.

Ah, so if I am able to translate my WebDAV server to an S3 server somehow, then it will support StreamUpload, and then it will work?

This is how it should work with S3 backend:

  • The crypt encrypts data on the fly (chunk-by-chunk).
  • The S3 backend uses multipart upload for anything above --s3-upload-cutoff (default 200 MiB).

Something like that overall:

local disk → read chunk → encrypt chunk → upload part → repeat

Chunks are still buffered in RAM, but memory usage should stay reasonable (a few chunks' worth) even for very large files.
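As a sketch, assuming a hypothetical crypt remote named s3crypt wrapping an S3-compatible remote, the multipart behaviour is tunable with rclone's s3 flags:

```shell
# Stream a large file through crypt into an S3 multipart upload.
# Peak memory is roughly chunk size x upload concurrency.
rclone copy /path/to/bigfile s3crypt:backup \
    --s3-upload-cutoff 200M \
    --s3-chunk-size 64M \
    --s3-upload-concurrency 4
```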

I have never paid attention to the no-buffering behaviour myself, as I prefer to use caches (storage is cheap), so since the proof is in the pudding, I am curious about your test results myself.

Thx, I will let you know. But first I need to find out how I can convert the WebDAV server to an S3 server. It's not that I don't want to use a cache, but it should be at most 50 GB or so, and hard-capped so that it doesn't go over. If I want to upload an 800 GB file, for example, I am OK with waiting a few days or weeks for it to finish uploading.


Does someone know how I can achieve this?

External tools beside rclone are fine too

If you absolutely must not cache that file locally, you can manually use dd to create smaller slices and upload them by hand, or even use something like

dd if=/path/to/file bs=1M status=progress count=x skip=z | rclone rcat remote:file.ext.part1
Then for the next part, increment z by x and replace file.ext.part1 with file.ext.part2.

Make a human error with x, z or the filename and you have a broken file, so this method has its cost, even if it's not in the form of temporary disk space.
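The manual bookkeeping above could also be scripted; here is a sketch (the file path, remote name, and part size are assumptions, and it inherits the same no-retry fragility as the manual version):

```shell
#!/bin/sh
# Stream a file to a remote in fixed-size parts: read one slice with dd,
# pipe it into rclone rcat, then advance the offset by one slice.
stream_parts() {
  file=$1; dest=$2; slice_mib=$3
  # total file size in MiB, rounded up
  size_mib=$(( ( $(wc -c < "$file") + 1048575 ) / 1048576 ))
  part=1; skip=0
  while [ "$skip" -lt "$size_mib" ]; do
    dd if="$file" bs=1M count="$slice_mib" skip="$skip" status=none \
      | rclone rcat "$dest.part$part"
    skip=$(( skip + slice_mib ))
    part=$(( part + 1 ))
  done
}

# Example: upload in 10 GiB parts
# stream_parts /path/to/file remote:file.ext 10240
```

This only automates the slicing arithmetic; if one part fails, you still have to re-run that part yourself.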

And what if I want it completely automatic? Where I can just access my files using SFTP? Like a set and forget with an unlimited storage backend.

Hmm,

You can get a free 20 GB Mega account to start. This cloud provider has end-to-end encryption, and they also provide a client called megacmd. This client can share the encrypted files over a WebDAV server that it creates.

Point everybody at the webdav server created by Mega.

For s3 testing, idrive offers a 10 GB 30-day trial.

Perhaps one of these ideas will assist you.

Also, there is an app called Caddy. It is a cross-platform web server that will put an HTTPS (TLS) connection in front of the files you serve with rclone. I have demonstrated this as a proof of concept for another app: rclone served plain WebDAV, and Caddy handled the encryption in transit.