WebDAV (TGFS) upload of 700 GB file hits 0% free disk space on 215 GB SSD / 4 GB RAM machine


I asked Google Gemini for help and tried different mount commands. I tried uploading the file four times, and it failed all four times.

What is the problem you are having with rclone?

It's eating my whole disk: the VFS cache keeps growing until the root partition is full.

Run the command 'rclone version' and share the full output of the command.

root@oude-laptop:~# rclone version
rclone v1.61.1
- os/version: debian 13.3 (64 bit)
- os/kernel: 6.12.69+deb13-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.4
- go/linking: static
- go/tags: none
root@oude-laptop:~#

Which cloud storage system are you using? (eg Google Drive)

WebDAV

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount tgfs_crypt: /mnt/telegram \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 10G \
  --vfs-write-back 2s \
  --vfs-cache-max-age 1m \
  --buffer-size 32M \
  --low-level-retries 1000 \
  --retries 999 \
  --allow-other -v -P

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

root@oude-laptop:~# rclone config dump
{
    "tgfs_crypt": {
        "filename_encryption": "obfuscate",
        "password": "redacted",
        "password2": "redacted",
        "remote": "tgfs_raw:/default",
        "type": "crypt"
    },
    "tgfs_raw": {
        "pass": "redacted",
        "type": "webdav",
        "url": "http://localhost:1900/webdav",
        "user": "vulcanocraft",
        "vendor": "other"
    }
}
root@oude-laptop:~#

Hi everyone,

I am struggling to upload a 700GB .7z file to a Telegram-based backend (TGFS). The upload keeps failing because my local system disk hits 0% free space, causing the mount and the SFTP server to crash.

My Stack:

Filezilla (Remote Client) → Tailscale → SFTPGo (SFTP Server) → Rclone Mount → Rclone Crypt → WebDAV (TGFS Backend) → Telegram

Hardware Constraints:

Host: Laptop with a 215GB SSD (Root partition is small).

RAM: Only 4GB DDR3 (Cannot use large RAM-disks/tmpfs).

OS: Debian 13.

The Problem:

Since the file (700GB) is significantly larger than my SSD (215GB), I need a way to "pass-through" the data without filling up the drive. However, when I try --vfs-cache-mode off, Rclone returns:

"NOTICE: Encrypted drive 'tgfs_crypt:': --vfs-cache-mode writes or full is recommended for this remote as it can't stream"

It appears the WebDAV implementation for TGFS requires caching to function. Even when I set --vfs-cache-max-size 10G, the disk eventually hits 0% free, likely because chunks aren't being deleted fast enough or the VFS overhead is too heavy for this specific backend.
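To see where the space goes while the mount is running, I check the cache directory and the root filesystem. This assumes rclone's default cache location under ~/.cache/rclone, since I don't set --cache-dir (rclone config paths will confirm the actual location):

```shell
# Show how much the VFS cache currently holds and how full the root
# filesystem is. ~/.cache/rclone/vfs is rclone's default cache path
# when --cache-dir is not set (an assumption worth verifying).
du -sh "${HOME}/.cache/rclone/vfs" 2>/dev/null || echo "no cache yet"
df -h /
```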

My current mount command:

rclone mount tgfs_crypt: /mnt/telegram \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 10G \
  --vfs-write-back 2s \
  --vfs-cache-max-age 1m \
  --buffer-size 32M \
  --low-level-retries 1000 \
  --retries 999 \
  --allow-other -v -P

Questions:

- Is there any way to make Rclone's VFS cache extremely aggressive in deleting chunks the millisecond they are uploaded?

- Can I optimize the WebDAV settings to handle such a large file on a small disk?

- Are there specific flags to prevent the "can't stream" error while keeping the disk footprint near zero?

- Any insights from people running Rclone on low-resource hardware would be greatly appreciated.

You have an ancient version of rclone. You’d want to update it.

To see what’s going on, you want a log file.

You want to include rclone config redacted as well.

First of all, I sent the rclone config and redacted the sensitive info myself.
Where can I find the log file?
Will the old version of rclone be the problem?

^^^^^^^^^^^^^

Unsure but I know we aren’t spending time debugging a multiple year old version.

You create it by appending -vv --log-file /some/directory/rclone.log to the command you are running.

root@oude-laptop:~# sudo rclone selfupdate
2026/02/28 12:58:42 NOTICE: Successfully updated rclone from version v1.61.1 to version v1.73.1
root@oude-laptop:~# rclone version
rclone v1.73.1
- os/version: debian 13.3 (64 bit)
- os/kernel: 6.12.69+deb13-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.25.7
- go/linking: static
- go/tags: none
root@oude-laptop:~# rclone config redacted
[tgfs_crypt]
type = crypt
remote = tgfs_raw:/default
filename_encryption = obfuscate
password = XXX
password2 = XXX

[tgfs_raw]
type = webdav
url = http://localhost:1900/webdav
vendor = other
user = XXX
pass = XXX

Double check the config for sensitive info before posting publicly

root@oude-laptop:~#

rclone mount tgfs_crypt: /mnt/telegram --vfs-cache-mode writes --vfs-cache-max-size 10G --vfs-write-back 2s --vfs-cache-max-age 1m --buffer-size 32M --low-level-retries 1000 --retries 999 --allow-other --log-file "/media/devmon/New Volume/rclone_logs/rclone.log" -vv

I am currently running this command. I will send the log here when it’s ready

rclone does not remove chunks from the cache.

Only after the entire file has been uploaded successfully will rclone remove it from the VFS file cache:

  1. copy the entire file from local into the cache.
  2. upload the entire file from the cache to the cloud.
  3. delete the entire file from the cache.
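A practical consequence of those three steps: the partition holding the VFS cache needs free space at least equal to the file size before the upload starts. A rough pre-flight check (FILE and CACHE_DIR are illustrative placeholders; df --output assumes GNU coreutils):

```shell
# Compare the size of the file to be uploaded against the free space
# on the partition holding the VFS cache.
FILE="/mnt/external/backup.7z"
CACHE_DIR="${HOME}/.cache/rclone"
need=$(stat -c %s "$FILE")
free=$(df --output=avail -B1 "$CACHE_DIR" | tail -n 1)
if [ "$free" -lt "$need" ]; then
    echo "not enough free space: need ${need} bytes, have ${free}"
else
    echo "ok: ${free} bytes free for a ${need}-byte file"
fi
```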

some options

  1. purchase a new drive to replace that tiny internal drive.
  2. purchase a cheap external usb drive and put the vfs cache on it.
  3. point --cache-dir to a network share, perhaps from a NAS or another machine.

"Telegram Premium subscribers can send media and files each up to 4 GB in size"
Is that correct, the max file size is just 4 GB?

Is there no way to make rclone stop waiting until the whole file is uploaded, and instead upload 10 GB to the WebDAV, delete that from the cache, and then upload the next 10 GB?

Also, you are correct: people with Telegram Premium can send files up to 4 GB, and free users can send files up to 2 GB (I am a free user). The program that bridges WebDAV to Telegram (TGFS) chunks files into pieces so that you can upload arbitrarily large files to Telegram. I just want to use rclone for two things:

  • crypt, because I want to encrypt my files before uploading and decrypt them again when I download them
  • sftp, because accessing my files over SFTP at the end is more convenient for me with FileZilla

Do you know any solutions for this?
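Given those two goals, one way to collapse the SFTPGo and FUSE-mount layers might be to let rclone serve SFTP directly on top of the crypt remote. A sketch only, untested against TGFS; the user, password, and port are placeholders, and the VFS cache behaves the same as with a mount:

```shell
# Serve the crypt remote over SFTP directly, replacing SFTPGo and the
# FUSE mount. --user/--pass/--addr values are illustrative placeholders.
rclone serve sftp tgfs_crypt: \
    --addr :2022 \
    --user sftpuser --pass 'change-me' \
    --vfs-cache-mode writes
```

FileZilla would then connect to port 2022 on this machine, and rclone handles the crypt layer in the same process.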

I am broke; that's why I don't have external hard drives and use this potato laptop. If I could afford external hard drives, what would be the point of using Telegram as unlimited cloud storage?

well could try --magic


where is the file located?


Really, this is overly complicated and very fragile. Not sure it could ever be reliable or trustworthy, and there is no ability to use checksums to verify file transfers.

I would find ways to simplify. For example, when creating the .7z, split the volume into 2 GB parts and enable encryption. Then there is no need for rclone crypt and no need for rclone mount.
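On the split-at-creation idea: 7z can produce 2 GB volumes natively (the -v2g switch, with -p and -mhe=on to encrypt data and file names). The same chunk-and-reassemble pattern can also be done after the fact with coreutils, for any file (file names here are placeholders):

```shell
# Split a large archive into 2 GB pieces, each of which fits under
# Telegram's 2 GB free-tier limit.
split -b 2G backup.7z backup.7z.part_

# Reassembly later: concatenate the parts back in order.
cat backup.7z.part_* > restored.7z
```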


lol, it would be magic if it would work then.

That file is located on an external HDD which I borrowed from a friend.

You are right; if you know good ways to simplify, that would be awesome.

Keep in mind that my goal is just to encrypt and decrypt files on an external WebDAV server, and then serve them over SFTP. The setup can handle big chunks of data: when I tried with a 200 GB .7z file, it worked perfectly without any problems.

Does someone have an answer?

Use that to store the VFS file cache.

But I need to return that drive when I am done. It's not like he gifted it to me or something, so that's not an option.

Might try rclone copy 650G.7z tgfs_crypt: with a debug log and see what happens.

The only problem I need to solve here is the caching. I suspect rclone's caching system is bugged, because if I set a limit of 100 GB, it just ignores that and does its own thing.

Did you try that? Did it not work?


I will start the process tomorrow. Uploading 650 GB at an upload speed of 15 Mbps takes several days (roughly four).

The issue is whether rclone uses local storage as temp storage when running rclone copy.
So there is no need to upload the entire file; just start it and check the cache dir and the temp dir.

rclone config paths

Can you give me the exact full command that I need to execute?
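A hedged guess at the full command being asked for, combining the earlier rclone copy suggestion with the debug-log advice from above. The source path is an illustrative placeholder; the log path reuses the one from the mount command:

```shell
# Copy the file straight to the crypt remote, bypassing the mount,
# with a debug log to inspect afterwards. The source path is a
# placeholder to adjust.
rclone copy "/mnt/external/650G.7z" tgfs_crypt: \
    -vv --log-file "/media/devmon/New Volume/rclone_logs/rclone.log" \
    --progress
```

While it runs, watching the cache and temp dirs reported by rclone config paths will show whether copy stages data locally.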