Feature: make --vfs-cache-max-size behave like --cache-chunk-total-size

First of all, I know the scenario I'm about to describe is an edge case, but I don't think I'm the only one who could benefit from this proposed change:

I run rclone on an Unraid NAS. Unraid has a great feature: writes to /dev/shm actually go to a RAM disk, which is much faster and doesn't wear out like an SSD.

One of the best features of --vfs-cache-mode full is --vfs-read-ahead, especially for very large files (60GB+).

One of the main reasons I use --vfs-cache-mode full with a cache is that, when serving a mount over the LAN, different devices behave differently and open and close files several times during playback, which means more API hits to the cloud and a worse experience.

You can read one example here:
https://forum.rclone.org/t/constant-access-to-the-same-file-over-and-over-again/

There are also cases where you just want to seek 10 seconds back in a video file, and without any cache that kills the entire buffer and starts over.

The problem with the VFS cache is that when --vfs-cache-max-size reaches its limit during playback (tested from a Windows 10 PC on the same LAN playing a file with VLC), it removes the entire cache and starts building it again from scratch. That causes artifacts and stuttering during playback and does exactly what you don't want from a cache: more API hits and more bandwidth. By the end of playback it has actually consumed more data than the video file you are trying to play.

This is where --cache-chunk-total-size behaves differently; the documentation states:

If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.

This is exactly the behavior the VFS cache could benefit from: instead of erasing the entire cache, it could remove the oldest chunks and keep replacing them with new data while staying at the same size.
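
For reference, a minimal sketch of the two flags being compared, assuming "cachedremote:" is a remote configured with the deprecated cache backend and that the mount points and sizes are only placeholders:

# deprecated cache backend: prunes the oldest chunks once the chunk store passes the limit
rclone mount cachedremote: /mnt/media --cache-chunk-total-size 30G

# VFS full mode: the file cache is capped by --vfs-cache-max-size instead
rclone mount remote: /mnt/media --vfs-cache-mode full --vfs-cache-max-size 30G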

Of course, this is only needed because of RAM limitations. My server motherboard can only handle 64GB of RAM.
With 256GB or 512GB this would hardly be an issue, but I don't think many people have that kind of setup.

And why not just use the cache mount? Because it's slower than the VFS cache, even when using RAM. It has a fixed number of workers, doesn't have features like doubling the chunk size, etc. I think the cache mount lacks the performance improvements the VFS has, and I believe the documentation on the cache mount says the same.

If I'm missing something, please let me know; maybe there's a workaround I don't see. But I believe this is how those options behave, and this change would be a welcome one.

They can't behave the same though, as they are built very differently.

The new VFS mode was built around files, using sparse files.

The deprecated cache mode was built on chunks and not files.

They won't be able to behave the same since their designs are very different.

I bought a cheap 1TB SSD (you can get them these days for $55-75 USD), just use that, and have never looked back. You could probably also use a spinning disk over USB for even less money and more storage. Once that SSD wears out in 4-5 years, I may have to replace it with something I'm sure will be much larger and even cheaper.

There are a number of requests/enhancements for full mode that have been sitting for some time:

VFS Cache: control caching of files by size · Issue #4110 · rclone/rclone (github.com)

Oh, too bad this can't be done. Thanks for taking the time to reply either way.

I have a 240GB SATA SSD, but the difference compared to the RAM disk is quite noticeable.
I've ordered a 1TB NVMe SSD that should improve things further.

This thread can be closed, it was worth asking.

Comparing memory vs disk speed will never come out even, as they are an order of magnitude apart.

It’s not that it can’t be done, but the design choices were file based rather than chunk based.

I think plexdrive still does chunks if you use Google Drive, so that might be an option.

So your goal is to deliver all of this from memory because any disk access is too slow?

I wouldn't say too slow. I would say it's fast but there's a faster method.

A 32GB server isn't too expensive, and dedicating half of that to streaming from the cloud is doable while everything else keeps running without any problem. From my testing, the only problem is limited space because of big media files; the rest works flawlessly.

On my setup, the fastest access I can get to a mount point is with chunked reading disabled. I'll share my mount script:

rclone mount remote: /mnt/user/mount_rclone/remote \
    --log-level DEBUG \
    --log-file /mnt/user/rclone/logs/remote/"${now}".txt \
    --allow-other \
    --no-modtime \
    --buffer-size 32M \
    --transfers 8 \
    --vfs-read-chunk-size 0 \
    --no-checksum \
    --vfs-read-ahead 10G \
    --vfs-cache-mode full \
    --vfs-cache-max-age 24h \
    --vfs-cache-max-size 30G \
    --cache-dir /mnt/user/rclone_cache/remote/

(--transfers 8: the documentation says this only affects writes, but I leave it at that. --vfs-read-ahead 10G: enough to read most content ahead while watching on gigabit internet. --cache-dir points at the SSD cache drive.)

${now} is just a variable for today's date, used for the log file names. I'm using debug logging to check whether I get any indication of too many API hits.
I'm using a service account to mount, as I find it easier compared to creating a client ID.
Also, I'm limiting the cache to 30GB because I only have a small 240GB SSD for now (the 1TB NVMe is on the way), and I use the same drive for partial downloads, so I can't dedicate too much of it to rclone.
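
For reference, a minimal way that variable might be set before the mount command; the date format here is just an assumption, not taken from the original script:

# hypothetical definition of the ${now} used in the --log-file path above
now=$(date +%Y-%m-%d)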

All of this has faster access times if run from RAM.
The only change needed to use RAM is to point --cache-dir at /dev/shm on Unraid.
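
For example, keeping everything else in the mount command above the same, that change is just this one flag (the subdirectory layout under /dev/shm is only an illustration):

--cache-dir /dev/shm/rclone_cache/remote/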

All I need to figure out, in case you already know, is where exactly in the Google Workspace admin panel I can see how many API hits a service account is using. If it's within reason, this mount, paired with an NVMe SSD, might be as good as a RAM disk, but RAM would still be way nicer to use :slight_smile:

Generally, even 4K video is only anywhere from 40-80ish Mb/s (megabits), and even slower spinning disks tend to deliver ~75 MB/s (megabytes), so even a slow disk will keep up well.
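
To make the bits-vs-bytes difference explicit, a quick back-of-the-envelope check using the numbers above:

# 80 Mb/s (megabits) of video only needs 10 MB/s (megabytes) of disk throughput
echo $(( 80 / 8 ))   # 10 MB/s needed, vs the ~75 MB/s even a slow spinning disk delivers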

The slow point of any of this tends to be getting the file from cloud.

Once it's local, everything is somewhat instant at that point, as you'd be comparing milliseconds for a disk read vs microseconds pulling from memory, and there's no way you could possibly tell the difference.

If resources were unlimited, I'd want memory as well, but I can't imagine even noticing a difference in terms of streaming: when I click play, things start in 1-2 seconds for a 'fresh' pull from the cloud, and I can't say I've ever noticed anything else.

Your settings also make things really slow to start and very heavy on API usage. There is also some confusion about what vfs-read-ahead does; making it very low cranks up the API usage, and with read-ahead at 10G, it has to slowly ramp up reading a file.

A 24-hour max age gimps anything you keep in the cache a bit, as items will age out rather than the oldest and unused bits being evicted first.

You have no --dir-cache-time either, but I'm not sure if that is an issue since I don't know what remote you are using. Generally, if you are using a polling remote, you want this high as well, as it speeds up walking the file system tremendously.
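
A minimal sketch of what that could look like, with a placeholder mount point and example values rather than anything recommended in this thread:

# long directory cache, kept consistent by polling on a polling remote (values are examples)
rclone mount remote: /mnt/media --vfs-cache-mode full --dir-cache-time 1000h --poll-interval 1m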

Why --no-modtime? Plex gets wonky without modtimes, but I'm not sure what you are using to stream.

It's not just sequential reading we are talking about here. There's the access time to parts of the file: when streaming from the cloud (I'm using Google Drive Business) you have the time to write the file, access what has been written, and read it. Looking at just sequential reads is misleading and one of the marketing strategies of NVMe SSDs, which have horrible 4K random access times but fast sequential read speeds.

Spinning hard drives also have to deal with the time it takes for the platter to spin and the magnetic head to find the exact spot of the desired data; that's why SSDs have effectively 0ms access time to any file and any part of a file, but I believe you already know that.

--vfs-read-ahead keeps reading the file while it's open, doesn't it?
So for 60 to 80GB of 4K content, while watching the first 5 minutes you would already have 1/5 of the file ready for access on your drive; how is that a bad thing?

The 24-hour max age is mostly there just because it's hard for me to watch more than one movie per day.

--dir-cache-time escaped me completely. I believe a good value would be 30 minutes to 1 hour between checks, with --poll-interval at 15 minutes?

I have two ways of streaming content here at home: Emby to the NVIDIA Shield or the LG TV, with transcoding disabled, direct play only; and direct access from a PC or the OPPO BDP, which can access and stream ISOs.
So, basically Emby and SMB.

I'm adding --dir-cache-time and --poll-interval, but I can honestly say that disabling chunked reading speeds things up drastically.
Maybe a small value of 16MB or 32MB might be more API friendly?
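
For illustration, the flags for that experiment might look like this; the mount point and the 2G upper limit are assumptions on my part, not tested settings:

# start with 32M range requests; rclone doubles the chunk size on sequential reads
# until --vfs-read-chunk-size-limit is reached
rclone mount remote: /mnt/user/mount_rclone/remote --vfs-cache-mode full \
    --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 2G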

This line is obviously debatable if your cache dir is /dev/shm:

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

Test that out: if you head a file and close it, it'll stop reading ahead:
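
For example, a quick way to reproduce that kind of test against the mount (the path here is just an illustration):

# read the first 16K through the mount, then the handle is closed straight away
head -c 16384 "/mnt/user/mount_rclone/remote/jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv" > /dev/null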

2022/10/16 13:32:05 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv(0xc000d7c4c0): openPending:
2022/10/16 13:32:05 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: vfs cache: checking remote fingerprint "1504953150,2016-12-21 10:00:43 +0000 UTC,b4c95cf116a73804e5edbf37975ffb8759ab4942cf362c2859a60bf6fdac94b4" against cached fingerprint ""
2022/10/16 13:32:05 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: vfs cache: truncate to size=1504953150
2022/10/16 13:32:05 DEBUG : : Added virtual directory entry vAddFile: "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv"
2022/10/16 13:32:05 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv(0xc000d7c4c0): >openPending: err=<nil>
2022/10/16 13:32:05 DEBUG : vfs cache: looking for range={Pos:0 Size:16384} in [] - present false
2022/10/16 13:32:05 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: ChunkedReader.RangeSeek from -1 to 0 length -1
2022/10/16 13:32:05 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: ChunkedReader.Read at -1 length 4096 chunkOffset 0 chunkSize 134217728
2022/10/16 13:32:05 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: ChunkedReader.openRange at 0 length 134217728
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: ChunkedReader.Read at 4096 length 8192 chunkOffset 0 chunkSize 134217728
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: ChunkedReader.Read at 12288 length 16384 chunkOffset 0 chunkSize 134217728
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: ChunkedReader.Read at 28672 length 32768 chunkOffset 0 chunkSize 134217728
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv(0xc000d7c4c0): >_readAt: n=16384, err=<nil>
2022/10/16 13:32:06 DEBUG : &{jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv (rw)}: >Read: read=16384, err=<nil>
2022/10/16 13:32:06 DEBUG : &{jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv (rw)}: Read: len=32768, offset=16384
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv(0xc000d7c4c0): _readAt: size=32768, off=16384
2022/10/16 13:32:06 DEBUG : vfs cache: looking for range={Pos:16384 Size:32768} in [{Pos:0 Size:28672}] - present false
2022/10/16 13:32:06 DEBUG : &{jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv (rw)}: Flush:
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv(0xc000d7c4c0): RWFileHandle.Flush
2022/10/16 13:32:06 DEBUG : &{jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv (rw)}: >Flush: err=<nil>
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: ChunkedReader.Read at 61440 length 65536 chunkOffset 0 chunkSize 134217728
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv(0xc000d7c4c0): >_readAt: n=32768, err=<nil>
2022/10/16 13:32:06 DEBUG : &{jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv (rw)}: >Read: read=32768, err=<nil>
2022/10/16 13:32:06 DEBUG : &{jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv (rw)}: Release:
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv(0xc000d7c4c0): RWFileHandle.Release
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv(0xc000d7c4c0): close:
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: ChunkedReader.Read at 126976 length 131072 chunkOffset 0 chunkSize 134217728
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: vfs cache: setting modification time to 2016-12-21 10:00:43 +0000 UTC
2022/10/16 13:32:06 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv(0xc000d7c4c0): >close: err=<nil>
2022/10/16 13:32:06 DEBUG : &{jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv (rw)}: >Release: err=<nil>
2022/10/16 13:32:53 DEBUG : Dropbox root '': Checking for changes on remote
2022/10/16 13:32:53 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv not removed, freed 0 bytes
2022/10/16 13:32:53 INFO  : vfs cache: cleaned: objects 1 (was 1) in use 0, to upload 0, uploading 0, total size 252Ki (was 252Ki)

The memory buffer definitely drops once the file is closed.

The majority of folks tend to use no cache. While I use it, since your scenario works for me as well, I like having a fallback copy of the file on disk for quicker access. If you are using Plex as an example and transcoding, it really doesn't matter, as Plex already buffers ahead via the transcoding settings.

If you are direct playing, that's really where you get the bang for the buck from using full mode and keeping it on disk, as the local disk read is fine. I keep a large/long cache because Dropbox/Google Drive are both polling remotes, so nice large values, and I cap the cache by size.

You can't disable chunked reading, as that's how rclone works. You are specifying a tiny first range request which, as you sequentially read ahead, grows and doubles the range requests. Faster lines will do better at the start, since you are spamming small requests to build up.

So say you have a 4M chunk size and you want 1G. You have to make ~250 HTTP range requests for that data, whereas with the defaults you make 9ish.
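
Rough arithmetic behind those numbers, assuming fixed chunk sizes (doubling reduces the count further):

# 1 GiB read in 4 MiB chunks vs 128 MiB chunks
echo $(( 1024 / 4 ))     # 256 range requests at a fixed 4M chunk size
echo $(( 1024 / 128 ))   # 8 requests at a fixed 128M chunk size; fewer once chunks double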

I've found what was going on here with Unraid so I'll leave it here in case it helps other people:

root@Tower:/tmp# dd if=/dev/zero of=/mnt/user/INCOMPLETE/test1.img bs=3G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 4.36276 s, 492 MB/s

This is copying to the cache drive, but through the user share mount.

root@Tower:/tmp# dd if=/dev/zero of=/mnt/cache/INCOMPLETE/test1.img bs=3G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 1.68525 s, 1.3 GB/s

This is copying directly to the cache drive.

Also, with a large read-ahead, there seems to be a bottleneck when going over 10GB.

I've changed a few things on my mount, and this is by far the best I could get using a 1TB NVMe M.2 drive:

rclone mount remote: /mnt/cache/mount_rclone/remote \
    --log-level DEBUG \
    --log-file /mnt/user/rclone/logs/remote/"${now}".txt \
    --allow-other \
    --dir-cache-time 30m \
    --poll-interval 15m \
    --buffer-size 0M \
    --transfers 12 \
    --no-checksum \
    --vfs-read-ahead 150G \
    --vfs-read-chunk-size-limit 256M \
    --vfs-cache-mode full \
    --vfs-cache-max-age 6h \
    --vfs-cache-max-size 300G \
    --cache-dir /mnt/cache/rclone_cache/remote/

As with Usenet downloads, my internet speed really depends on having as many connections as possible.
To max out 1Gbit I have to open almost 100 connections in SABnzbd.
With rclone, what I did was set the max chunk size to 256M; when the chunks get too big, like 2GB each, the speed drops to about 1/10, around 100Mbit/s.
Still fast, but not 1Gbit.

Also, for some reason on Unraid, setting the buffer size to 0 and writing directly to the NVMe drive gives me max speed all the time during playback.

And from my testing, it takes 5 seconds to open a 60GB media file.
