Unknown filesize - rclone cache/tmp issues with very large files?

What is the problem you are having with rclone?

I want to download a large file (100 GB) directly to the mounted WebDAV storage, without caching and without saving to /tmp/, because the file is too big for the server's internal storage capacity. Unfortunately the filesize is unknown when downloading, so wget just keeps downloading until it's finished.

I think the unknown filesize causes issues with rclone's cache and tmp handling.
rclone keeps filling /tmp/ with data and eventually crashes once the server's internal storage is full.

Server internal storage: 20 GB
External WebDAV storage (/mnt/webdav): 5 TB
File size: 100 GB

What is your rclone version (output from rclone version)

rclone v1.56.0 (latest)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04 64bit

Which cloud storage system are you using? (eg Google Drive)

WebDAV Server

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --buffer-size 64M --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 128M webdav:/data /mnt/webdav --allow-other --cache-tmp-upload-path /mnt/webdav
cd /mnt/webdav
# download file.zip with unknown filesize. Linking directly to file.zip does not work.
wget "https://www.domain.com/path/to/file"

hi,

--cache-tmp-upload-path
as of v1.56.0, the cache remote has been deprecated, as documented at
https://forum.rclone.org/t/rclone-1-56-0-release/25446#v1560-2021-07-20-1

and the config file and debug log are missing?

Using watch df -h I can see that /dev/sda1 is filling up, while nothing happens on /mnt/webdav.

Edit:
Same result when downloading files with a known filesize: rclone keeps writing data to /tmp/rclone-spool...

perhaps remove all the cache/vfs flags from your command

or move the cache to the drive with enough free space.
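A minimal sketch of the second option, assuming a larger data disk is available at a hypothetical path /mnt/bigdisk: --cache-dir is the standard flag for relocating the VFS cache, and --temp-dir controls where rclone puts temporary files. Whether the spool file for non-streaming uploads honours --temp-dir on your rclone version is an assumption worth testing with a small file first.

```shell
# Sketch: keep the mount minimal and point rclone's cache and
# temporary directories at a disk with enough free space.
# /mnt/bigdisk is a placeholder path for this example.
rclone mount webdav:/data /mnt/webdav \
  --allow-other \
  --cache-dir /mnt/bigdisk/rclone-cache \
  --temp-dir /mnt/bigdisk/rclone-tmp
```

Note that pointing these at the WebDAV mount itself (as --cache-tmp-upload-path /mnt/webdav did in the original command) would write the temporary data through the same remote, which defeats the purpose.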

The reason we always ask for a debug log is that the answer is almost always there.

WebDAV is not a streaming remote, so rclone makes a local copy.

2021/07/21 11:03:35 DEBUG : webdav root '': Target remote doesn't support streaming uploads, creating temporary local FS to spool file
2021/07/21 11:03:35 DEBUG : Creating backend with remote "/tmp/rclone-spool640945635"

@Animosity022 True, WebDAV can't stream, but I thought rclone downloads the file in chunks and uploads it in chunks: the first 100M, the next 100M, and so on.

So, I also tried:

rclone copyurl "http://domain.tld/path/to/file" webdav:/data --progress

  • For known filesize: it works.

  • :tired_face: For unknown filesize: it creates and fills up /tmp/rclone-spool7876876/

I use watch df -h to check whether something is being written to the local filesystem, /dev/sda1.

If you post the mount debug log, you'll see the answer...

Unfortunately, WebDAV needs to know the size of the file at the start of the upload.

If the source isn't telling rclone the size, then rclone has no choice but to spool it to disk in a temporary file.

You could possibly work around this with a bit of scripting...

Use rclone cat | wc -c to find the size of the file, then use rclone cat | rclone rcat --size XXX, where the size is the value you measured first. This will download the file twice, but it won't store it on disk.

You'll need 1.56.0 for rclone rcat --size.
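The two-pass workaround above can be sketched as a short script. This is a hedged sketch, not a tested solution: since this thread's source is an HTTP URL rather than an rclone remote, it uses curl in place of rclone cat, the URL and destination filename are placeholders, and it assumes rclone v1.56.0+ for rcat --size.

```shell
#!/bin/sh
# Two-pass upload of an HTTP source whose size isn't advertised.
# Pass 1: stream the download once, only counting the bytes.
# Pass 2: stream it again into rclone rcat, declaring the measured
# size so the WebDAV backend doesn't need a local spool file.
# URL and destination below are placeholders for this sketch.
URL="https://www.domain.com/path/to/file"

SIZE=$(curl -fsSL "$URL" | wc -c)
curl -fsSL "$URL" | rclone rcat --size "$SIZE" webdav:/data/file.zip
```

This only works if the server sends byte-identical content on both passes; if the size changes between the two downloads, the upload will fail or be truncated.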


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.