I gave it a try without the vfs-cache and now recall why I am using the cache: writing to the mount without the vfs-cache leaves some files with a size of 0 bytes, something that doesn’t happen with the vfs-cache.
I guess it would be possible for rclone to change mode depending on the file name.
Or maybe rclone could switch modes if the file was written sequentially above a certain size.
Maybe the vfs cache should have a mode where all writes go to remote storage first; if the filesystem then does something not supported by that, rclone could download the file into the cache. Might be a bit inefficient…
To be honest, I’m not a big fan of the vfs-cache. I know it has to work around some limitations of FUSE, which makes it seem slow and buggy under some workloads.
For my uploads, I’m using triggered uploads when possible (when taskX is done, call rclone move /download/taskX gdrive:upload), or something like mergerfs or overlayfs with a regular upload job.
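A triggered upload like that can be a tiny post-task hook; here’s a sketch, where the paths and the gdrive:upload remote are placeholders for whatever your setup uses:

```shell
#!/bin/sh
# Hypothetical post-task hook: once taskX has finished, move its output
# directly to the remote, bypassing the mount (and thus the vfs-cache).
# /download/taskX and gdrive:upload are example names, not a fixed layout.
set -e
TASK_DIR="${1:-/download/taskX}"
REMOTE="${2:-gdrive:upload}"

# --delete-empty-src-dirs cleans up the per-task folder after the move.
rclone move "$TASK_DIR" "$REMOTE" --delete-empty-src-dirs
```

Since the files never pass through the mount, no vfs-cache mode is involved at all for these uploads.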
Choosing the right settings for vfs-cache is heavily dependent on the programs used and device constraints like disk space.
The last time I used the vfs-cache, most extraction tools needed --vfs-cache-mode >= writes, which has the drawback of writing the whole file to a local folder before uploading.
If you can use --vfs-cache-mode off without errors, disk space is no longer an issue.
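For reference, the disk-space knobs sit next to the mode flag; a sketch of both variants (remote name, mount point, and limits are examples to tune for your own constraints):

```shell
# Cached variant: whole files land in the local cache dir before upload,
# so cap the cache by size and age to protect the disk.
rclone mount gdrive: /mnt/rclone \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 20G \
  --vfs-cache-max-age 1h &

# Cache-less variant, if your tools tolerate it: no local copy at all.
rclone mount gdrive: /mnt/rclone --vfs-cache-mode off &
```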
Implementing a cache mode between off and writes is an option, but it would combine the weaknesses of both modes into a new one.
A program that is unable to operate in mode off will probably fail in the new mode as well: the flexibility that mode writes provides would vanish at some point while the file is open, and the operation could fail just as before.
I don’t see the benefit of implementing a new mode and it would increase the complexity of the VFS layer even more.
We should instead improve the usability of mode off for failing programs where possible.
@neik If you have a specific tool that fails with cache mode off, you can post a debug log of your rclone mount and, if possible, two strace logs.
One log of the failing tool (strace -f -e file,read,write -o strace_rclone.log unrar x file.rar) and one log for the same command on a local filesystem (cd /tmp; strace -f -e file,read,write -o strace_local.log unrar x /mnt/rclone/file.rar).
Maybe we can then find a solution.
The issue occurs with JDownloader while extracting from a local folder to the mount. JD downloads the file and extracts it on its own after download.
Unfortunately, I didn’t know how to use the strace command in combination with JD, so I ended up with only a debug log of rclone. The WinSCP.ini file ended up with 0 bytes a couple of times.
Yes, the VFS cache exists to fix the mismatch between what a FUSE filing system needs and what an object storage backend can deliver (mostly the ability to do random writes).
It would be possible to fix that mismatch in different ways, e.g. by chunking the files. I made a prototype of this some time ago - maybe I should resurrect it. The idea was that you could create files which were held as a directory full of chunks, allowing random read/write. This loses the 1 file == 1 object mapping though.
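For comparison, rclone’s chunker overlay backend already breaks the 1 file == 1 object mapping by splitting large files into chunk objects on the wrapped remote, although it does not add the random read/write the prototype was after. A config sketch, with example names:

```shell
# Hypothetical chunked remote layered over gdrive:. Files larger than
# chunk_size are stored as several chunk objects on the wrapped remote.
rclone config create gdrive-chunked chunker \
  remote gdrive:chunked \
  chunk_size 100M

# Uploads through the overlay are transparently split into chunks.
rclone copy big.iso gdrive-chunked:
```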
I can try to get some logs, but the errors I see tend to be related to Sonarr/Radarr and partial files.
They write to a .partial~ file and, once done, move the partial file to the final file name. I don’t get a failure per se, but I end up with both a copy under the final name and the partial hanging around. If I use --vfs-cache-mode writes, that goes away, and the file disappears from the temp area in 1h anyway, so no problem for me.
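The pattern itself is easy to reproduce outside the mount; here’s a minimal local sketch (file names are made up) of what Sonarr/Radarr do, which is exactly the step that misbehaves on mode off:

```shell
#!/bin/sh
# Write to a temporary name, then rename to the final name when complete.
# On a local filesystem the rename is atomic; on a cache-less rclone
# mount the file may still be uploading when the rename arrives, which
# is where the stray .partial~ copies come from.
set -e
dir="${1:-/tmp/partial-demo}"
mkdir -p "$dir"
printf 'episode data' > "$dir/Episode.mkv.partial~"  # in-progress download
mv "$dir/Episode.mkv.partial~" "$dir/Episode.mkv"    # finalize by rename
ls "$dir"
```

After the rename only Episode.mkv should exist; seeing both names on the mount is the symptom described above.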