How to download large files that exceed the server's disk storage?

What is the problem you are having with rclone?

My server has only 20 GB of hard-drive storage, but my WebDAV storage capacity is 5 TB. So I mounted it with rclone at /mnt/externalStorage/.

How do I configure rclone correctly so that I can download large files (around 100 GB each) directly to /mnt/externalStorage? Whenever I attempt to download a large file, the transfer stops and fails because the server's disk space is limited to 20 GB.

I believe it has to do with the caching, but I don't know which settings I should use.

What is your rclone version (output from rclone version)

rclone 1.55.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu Server 21.04, 64-bit

Which cloud storage system are you using? (eg Google Drive)

Private WebDAV server (5 TB)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount externalStorage:/ /mnt/externalStorage/ --allow-other --vfs-cache-mode full --daemon

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Paste log here

hello and welcome to the forum,

there are a few workarounds

  • use rclone copy to download directly to the external storage; no cache is needed.
  • if you must use rclone mount, try not to use the cache, by removing --vfs-cache-mode full
  • if you find that the cache is needed for the rclone mount, then move the cache to that external storage using something like --cache-dir=/mnt/externalStorage/cache
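the last suggestion, combined with the mount command from the first post, might look like this sketch (only the --cache-dir flag is added; note that the cache directory here sits inside the mount point, exactly as suggested, and whether that works will depend on the setup):

```shell
# Same mount as in the original post, with the VFS cache relocated to the
# suggested directory on the large external storage.
rclone mount externalStorage:/ /mnt/externalStorage/ \
    --allow-other \
    --vfs-cache-mode full \
    --cache-dir=/mnt/externalStorage/cache \
    --daemon
```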

The external WebDAV storage needs to stay mounted, because I use a download manager to fetch the files and save them directly to /mnt/externalStorage.

The problem is that the downloaded file exceeds the server's internal disk capacity. The command rclone copy is for copying local files to a remote, afaik. And using --cache-dir=/mnt/externalStorage gives me an error about non-empty storage, and frankly, I believe it's not the appropriate flag.

In the documentation I read about the cache and chunker remotes, but I don't grasp how to use them correctly. If someone could drop me a line or briefly explain what I'm overlooking, I'd highly appreciate it.

the cache remote has been phased out; it never left beta status and has known bugs that will never be fixed.
the chunker remote will not help in this situation.

i suggested
--cache-dir=/mnt/externalStorage/cache
not
--cache-dir=/mnt/externalStorage

not correct; as with most rclone commands, rclone copy works in any direction:

  • local to remote
  • remote to local
  • remote to remote
  • local to local
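for example, all four directions, with hypothetical paths and remote names:

```shell
rclone copy /home/user/file.iso externalStorage:backup   # local to remote
rclone copy externalStorage:backup/file.iso /tmp         # remote to local
rclone copy externalStorage:backup otherRemote:backup    # remote to remote
rclone copy /home/user/docs /mnt/backup/docs             # local to local
```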

why do you think that?
if you want to use rclone mount with the vfs cache and your internal disk does not have space for the cache, then you must move the cache to wherever there is free space.

I've done this before from B2, using mount to read a giant zip file. The key is that you cannot use any cache. I don't know the details of the limitations, or whether they are platform/filesystem specific, but I was able to read and extract from a 50 GB zip file with 15 GB of free space left.

yes, i gave that as an option

This does not work. I use JDownloader2 to download from direct HTTP links; unfortunately, these files are downloaded with an unknown file size.

rclone mount externalStorage:/ /mnt/externalStorage/ --allow-other --daemon

It downloads and saves to /mnt/externalStorage, and with watch ls -lh I verify that the size of the .part file increases. However, at roughly the size of the remaining free local disk space, the download abruptly stops. Yet /mnt/externalStorage would still have plenty of TB of storage available; only the server's local disk is a tiny 20 GB. And the two files I am trying to download are 50 and 180 GB.

Are there dedicated flags for the rclone mount command for transferring huge files that drastically exceed the local disk storage? Like an option to download 100 MB parts of the 50 GB file, store them in the cache, upload them to the remote, and then continue with the next 100 MB of the 50 GB file? That way the cache would be limited to 100 MB, so the local disk could handle it.

I'm sorry, but I'm new to this. I have read through the documentation and am still trying to figure out the terms "chunk", "parts", "cache", ...

Server:
HDD: 20 GB
Ram: 1 GB

Private WebDAV:
/mnt/externalStorage
HDD: 5 TB

Files to download:
1x 50 GB
2x 180 GB
10x 90 GB

why not just use jdownloader2 to download to /mnt/externalStorage/?
that software must be able to run a batch file after the download completes.
so run a batch file using rclone move to move the file to externalStorage:

that is what i do with torrents.
i download the torrent using torrent software to local storage.
after the download is complete, the torrent software runs a script that uses rclone move.
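a minimal sketch of such a completion script, assuming the download client passes the finished file's path as the first argument (the script name and the destination folder are hypothetical):

```shell
#!/usr/bin/env bash
# post-download.sh -- hypothetical completion hook run by the download
# client; the finished file's path is expected as the first argument.
set -euo pipefail

FILE="$1"                           # path handed over by the client
REMOTE="externalStorage:incoming"   # remote from this thread; folder name assumed

# rclone move uploads the file and deletes the local copy on success,
# freeing the small local disk for the next download.
rclone move "$FILE" "$REMOTE" -v
```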

JDownloader2 already downloads to /mnt/externalStorage! But it stops the download at approximately the point where the local disk is full, which makes no sense to me, as I thought it would write directly to the remote without storing anything locally.

I think JDownloader2 has problems with the uncached /mnt/externalStorage directory, since rclone recommends using a cache. So my question is: how do you mount it in a stable way and still download files that are much bigger than your local disk storage?

what i meant was not to use rclone mount.
have jdownloader2 download direct to /mnt/externalStorage as a plain local folder,
then use rclone move to upload the file from /mnt/externalStorage to the webdav server.
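put together, the two steps might look like this sketch (it assumes /mnt/externalStorage is now a plain local folder, not a mount; the --min-age filter is one way to avoid touching downloads still in progress):

```shell
# 1. JDownloader2 saves finished downloads into the plain local folder
#    /mnt/externalStorage (no rclone mount involved).

# 2. Afterwards, move everything to the WebDAV remote; --min-age 1m skips
#    files modified in the last minute, i.e. downloads still being written.
rclone move /mnt/externalStorage externalStorage:/ --min-age 1m -v
```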