First of all, I know the scenario I'm about to describe is an edge case, but I don't think I'm the only one who could benefit from this proposed change:
I run rclone on an Unraid NAS. Unraid has a very nice feature: anything written to /dev/shm actually goes to a RAM disk, which is much faster and doesn't wear out the way SSDs do.
One of the best features of --vfs-cache-mode full is --vfs-read-ahead, especially for very large files (60GB+).
One of the main reasons I use --vfs-cache-mode full with a cache is that, when serving a mount over the LAN, different devices behave differently and open and close files several times during playback, which means more API hits to the cloud and a worse experience.
You can read one example here:
https://forum.rclone.org/t/constant-access-to-the-same-file-over-and-over-again/
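For context, this is roughly the kind of mount I'm talking about (the remote name, mount point, cache path and sizes below are just examples, not a recommendation):

```
rclone mount remote:media /mnt/media \
  --vfs-cache-mode full \
  --vfs-read-ahead 2G \
  --vfs-cache-max-size 50G \
  --cache-dir /dev/shm/rclone-vfs \
  --allow-other
```

With --cache-dir pointing at /dev/shm, the whole VFS cache lives in RAM.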
There are also the cases where you just want to seek back 10 seconds in a video file, and without any cache this kills the entire buffer and starts over.
The problem with the VFS cache is that when --vfs-cache-max-size hits its limit during playback (tested with a Windows 10 PC on the same LAN playing a file in VLC), it removes the entire cache and starts building it again from scratch. That causes artifacts and stuttering during playback and does exactly what you don't want from a cache: more API hits and more bandwidth. By the end of playback it has actually downloaded more data than the size of the video file you are playing.
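For what it's worth, this is easy to see by watching the size of the cache directory while a file is playing (the path is just my example --cache-dir from above):

```
# cache size climbs to the limit, then drops sharply and starts climbing again
watch -n 1 du -sh /dev/shm/rclone-vfs
```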
This is where --cache-chunk-total-size behaves differently. Its documentation states: "If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value."
This is exactly the behavior the VFS cache could benefit from: instead of erasing the entire cache, it could remove the oldest chunks and keep replacing them with new data while staying at the same size.
Of course, this is only needed because of RAM limitations; my server's motherboard maxes out at 64GB of RAM. With 256GB or 512GB this would hardly be an issue, but I don't think many people have that kind of setup.
And why not just use the cache backend? Because it's slower than the VFS cache, even when running from RAM. It has a fixed number of workers, doesn't have features like doubling the chunk size as the read progresses, etc. I think the cache backend lacks the performance improvements that the VFS layer has, and I believe the cache backend documentation says as much.
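For reference, a cache backend mount along these lines is what I'm comparing against (cache-remote: would be a cache remote wrapping the cloud remote, configured via rclone config; the names and sizes are just examples):

```
rclone mount cache-remote: /mnt/media \
  --cache-chunk-size 32M \
  --cache-workers 8 \
  --cache-chunk-total-size 50G \
  --cache-chunk-path /dev/shm/rclone-cache
```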
If I'm missing something, please let me know; maybe there's a workaround I'm not seeing. But I believe this is how those options behave, and this change would be a welcome one.