VFS: File bigger than free space

Hi guys,

I am using --vfs-cache-mode writes and am wondering what happens if a file being copied to the rclone mount is bigger than the free space left.

Example:
Let’s assume I am extracting a file that is 10 GB, while the free space on the VPS is only 8 GB.

Would rclone chunk the file and upload it in chunks or would it simply fail?

I’m like 99% sure it would fail on a full file system.

Yes, you are correct, it would fail :frowning:

If you use --vfs-cache-mode off it will be streamed as you desire.
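For reference, the two mount invocations being compared might look like this (the remote name `gdrive:` and the mount point are placeholders, not from this thread):

```shell
# Buffer writes in a local cache directory first — fails if the cache disk fills up:
rclone mount gdrive: /mnt/rclone --vfs-cache-mode writes

# Stream writes straight to the remote — no local copy, but only sequential writes work:
rclone mount gdrive: /mnt/rclone --vfs-cache-mode off
```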

Would it be possible to add this to the VFS layer? For those who are limited storage-wise, this would be a blessing.

Nevertheless, when I find a couple of minutes I will remove the vfs-cache flag and give it a try without it.

I think that would defeat the purpose of the cache. As I understand it, the setting exists precisely because you need the full file locally for writes.

Okay, I didn’t know that.

I gave it a try without the vfs-cache and now recall why I am using the cache: writing to the mount without it leaves some files with a size of 0 bytes, something that doesn’t happen with the vfs-cache.

Some ideas…

I guess it would be possible for rclone to change mode depending on the file name.

Or maybe rclone could switch modes if a file is written sequentially above a certain size.

Maybe the vfs cache should have a mode where all writes go to remote storage first; if the filesystem then does something not supported by that, rclone could download the file into the cache. Might be a bit inefficient…

Maybe Fabian (@B4dM4n) can share his thoughts on this as well.
Let’s see what he says…

To be honest, I’m not a big fan of the vfs-cache. I know it has to work around some limitations of FUSE, which makes it seem slow and buggy under some workloads.
For my uploads, I’m using triggered uploads when possible (when taskX is done, call rclone move /download/taskX gdrive:upload) or something like mergerfs or overlayfs with a regular upload job.
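The triggered-upload approach could be sketched as a completion hook like this (the paths, the remote name, and the hook mechanism are illustrative assumptions, not a description of any specific setup):

```shell
# Hypothetical post-task hook: when taskX finishes, move its downloads
# to the remote directly, instead of writing through a mount.
rclone move /download/taskX gdrive:upload --delete-empty-src-dirs
```

This sidesteps the VFS cache entirely: rclone uploads the finished files and frees the local disk space, which is why it works even when the cache disk is small.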

Choosing the right settings for vfs-cache is heavily dependent on the programs used and device constraints like disk space.

The last time I used vfs-cache, most extraction tools needed --vfs-cache-mode >= writes, which has the drawback of writing the whole file to a local folder before uploading.
If you can use --vfs-cache-mode off without errors, disk space would no longer be an issue.

Implementing a cache mode between off and writes is an option, but it would combine the weaknesses of both modes into a new one.
A program that is unable to operate in mode off would probably fail in the new mode as well: the flexibility that mode writes provides would vanish at some point while the file is open, and the operation could fail just as before.

I don’t see the benefit of implementing a new mode and it would increase the complexity of the VFS layer even more.
We should instead improve the usability of mode off for failing programs, if possible.

@neik If you have a specific tool that fails with cache-mode off, you can post a debug log of your rclone mount and, when possible, two strace logs.
One log of the failing tool (strace -f -e file,read,write -o strace_rclone.log unrar x file.rar) and one log for the same command on a local filesystem (cd /tmp; strace -f -e file,read,write -o strace_local.log unrar x /mnt/rclone/file.rar).
Maybe we can then find a solution.

The issue occurs with JDownloader while extracting from a local folder to the mount. JD downloads the file and extracts it on its own after download.

Unfortunately, I didn’t know how to use the strace command in combination with JD, so I ended up with only a debug log of rclone. The WinSCP.ini file ended up with 0 bytes a couple of times.

Rclone_Log: https://1drv.ms/u/s!AoPn9ceb766mgaIAEWB67OJI-WheVg

Extracting the same file multiple times from the same local folder to the mount with your strace command worked flawlessly.

Rclone_log: https://1drv.ms/u/s!AoPn9ceb766mgaF_QLyq3Mkc34Y3lQ
Strace_log: https://1drv.ms/u/s!AoPn9ceb766mgaIBWu2tRrjxN213uQ

I find it odd that it only happened with JD…

Do the logs tell you anything?

Yes, the VFS cache exists to fix the mismatch between what a FUSE filing system needs and what an object storage backend can deliver. (Mostly the ability to do random writes.)

It would be possible to fix that mismatch in different ways, e.g. by chunking the files. I made a prototype of this some time ago; maybe I should resurrect it. The idea was that you could create files which were held as a directory full of chunks, allowing random read/write. This loses the 1 file == 1 object mapping though.
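As a rough illustration of the chunk-directory idea (not the actual prototype — file names and chunk size are made up), a file can be stored as a directory of fixed-size chunks and reassembled losslessly; individual chunks can then be rewritten without touching the rest:

```shell
# Split a file into 1 MiB chunks in a directory, then reassemble it.
dd if=/dev/urandom of=big.bin bs=1M count=4 2>/dev/null
mkdir -p big.bin.chunks
split -b 1M big.bin big.bin.chunks/chunk_
# Reassembly: concatenate the chunks in lexicographic order.
cat big.bin.chunks/chunk_* > reassembled.bin
cmp -s big.bin reassembled.bin && echo "chunks reassemble losslessly"
```

The trade-off the post mentions is visible here: the remote would store four objects under big.bin.chunks/ rather than one object named big.bin.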

If we can, then we should definitely do this.

I can try to get some logs, but the errors I see tend to be related to Sonarr/Radarr and partial files.

They write to a .partial~ file and, once done, move the partial file to the final file name. I wouldn’t get a failure per se, but I’d get both a copy of the final file and the partial hanging around. If I use --vfs-cache-mode writes, that goes away, and the file disappears from the temp area within 1h anyway, so no problem for me.
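The write-then-rename pattern described above is essentially this (file names are illustrative):

```shell
# Download into a temporary name, then rename to the final name.
# On a mount with cache-mode off, the rename may map to a server-side
# move of a freshly uploaded object; with cache-mode writes it is a
# cheap local rename inside the cache before upload.
printf 'episode data' > show.mkv.partial~
mv show.mkv.partial~ show.mkv
```

On a local filesystem the rename is atomic, which is why both the partial and the final file lingering on the mount looks wrong to the downloading program.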

Logs never hurt, I guess! :slight_smile:

If I can contribute some more logs as well, just let me know guys.