Confusion about buffer/cache arguments (vfs mount)

Hi

I am currently running an encrypted gdrive mount on a seedbox. The mount gets fed with movies and TV shows via simple move commands and is read by Plex for streaming.
My goal is to get the most out of my mount command for this purpose, but I am fairly confused by all the buffer and cache arguments. Most importantly, I cannot tell the read arguments from the write arguments. Which argument does what?

First of all, here is my mount:
rclone mount gdrive_media_vfs: ~/gdrive \
  --rc \
  --allow-other \
  --allow-non-empty \
  --buffer-size 512M \
  --dir-cache-time 72h \
  --drive-chunk-size 32M \
  --fast-list \
  --log-level INFO \
  --log-file ~/.config/rclone/logs/rclone.log \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off &

It works fine for me as a single user, but I haven't had the chance to test multiple users and streams yet.
Ideally, I would like to have the following:
Read: generous preloading/caching of big files when an application requests them.
Write: reduced bottlenecking by limiting uploads to a number of sequential copies rather than parallel ones.

For read, I think I got most of it right by using a generous buffer size and read chunk size. I get single-stream read speeds of anywhere between 20 MB/s and 80 MB/s.
For write, I am not sure what the best approach is, since I have read about the VFS write cache. Write speed varies drastically between 5 MB/s and 50 MB/s.

So my first problem is the highly fluctuating read and write speeds. This just seems off. Is there anything I can change to make this more reliable? I am especially concerned about read speeds once I have multiple clients. On the other hand, I really don't know whether my speed tests represent a global speed limit or whether the limit is per connection, because if another streamer also gets 20-80 MB/s, I am just fine…

Second problem is more of a question:
I really don't get the VFS caching, so let me make a proposal instead. As long as my read speeds, latency and availability (from the actual gdrive and a potential cache) are okay, I would really like to try caching relatively large amounts of data locally. I think VFS caching can do this, but I am not sure it makes any sense.
My idea was to let it cache up to 100 GB locally for up to a week or so, then delete it (limited by both time and disk space). This would give users a better experience for recently added content, which is most of what gets watched anyway (weekly shows, or even movie requests).
Does this make any sense?
More important question: what happens to Sonarr, Radarr and Plex when files are cached with VFS for a long time? Will those applications still find the files under their usual path (like /Breaking Bad/Season1)? I think it would work, but I cannot really get my head around the technicalities of it.
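To put rough numbers on the caching idea above, this is the kind of thing I imagine adding to my mount command, although I am only guessing that these VFS flags actually behave the way I hope:

--vfs-cache-mode full
--vfs-cache-max-size 100G
--vfs-cache-max-age 168h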

Thank you all


Let me see what I can answer as you are a little all over the place :slight_smile:

The default VFS backend does no caching of anything locally other than in memory.

Are you using your own API key or the default?

--buffer-size defines how much memory is used to buffer ahead when a file is opened. Setting it to something larger gives an ample read-ahead buffer, assuming the application/player doesn't close the file and re-open it; if the file is closed, the buffer is lost.

--dir-cache-time is how long the file/directory structure is kept in memory. The longer the better, in my opinion, as it reduces API hits. If a new file comes in, the polling interval invalidates the cache and rclone re-fetches the directories/files being asked for.
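For the read/metadata side only, a mount built around those two flags looks roughly like this (just a sketch reusing your own values; --poll-interval is the polling knob I mentioned, and 1m is, I believe, the default if you don't set it):

rclone mount gdrive_media_vfs: ~/gdrive \
  --buffer-size 512M \
  --dir-cache-time 72h \
  --poll-interval 1m \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off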

If the goal is to keep items around for a period of time, the cache backend does this by keeping chunks on local disk. I find it a little slower in general, and it doesn't really fit my use case, since my users usually aren't watching the same things. You configure the cache backend with a local size and it stores chunks there, so repeated reads don't have to go back out to the cloud.

https://rclone.org/cache/
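As a rough sketch (remote and path names are made up, and the docs list more options), a cache remote that keeps up to 100G of chunks for a week would look something like this in rclone.conf:

[gdrive_cache]
type = cache
remote = gdrive:media
chunk_size = 32M
info_age = 168h
chunk_total_size = 100G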

As for writing, I don't find writing to the cloud directly from Sonarr/Radarr to be very clean, so I simply don't do it. I use mergerfs to combine a local staging area with my rclone mount and always write to the local storage first. I run an upload script each night to move everything to the cloud.

With that setup, I keep Plex/Sonarr/Radarr all pointing at the same path, so from their perspective nothing changes.
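Roughly, that setup boils down to something like this (paths are only an example of the idea, not a drop-in config):

# local staging area listed first so new files land on local disk
mergerfs -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff /home/user/local:/home/user/gdrive /home/user/media

# nightly cron job that pushes the staging area to the cloud
rclone move /home/user/local gdrive_media_vfs: --delete-empty-src-dirs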


Thanks for getting back to me!

I am not using my own API key, as I was not able to get it to work, but I have had no problems with API limits so far.

Buffer size and dir-cache-time seem to be okay then, if I understand that correctly.

Regarding rclone.org/cache, I am confused. I thought VFS was more or less replacing the cache backend for exactly these Plex use cases, and that it worked better because VFS is meant for this.

I haven't had trouble writing so far, but on the other hand I have had no opportunity to try mergerfs (no root access).

You really need to use your own API key, as otherwise you are going to see many retries/errors: the rclone default key is heavily overused.
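Creating one only takes a few minutes in the Google developer console, and then the ID/secret just go into your drive remote in rclone.conf (placeholder values below, and I'm assuming your underlying drive remote is simply called gdrive):

[gdrive]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
scope = drive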

The default VFS and the cache backend are two different things. Why do you think one would replace the other? It depends on your use case.

OK, I will try to get the custom API key working. So far things are running pretty solidly.

What would the sequence be when using VFS and cache? gdrive -> cache -> crypt?
It's just that I have often read on the internet that using the cache backend with VFS is useless, though I don't know why.

VFS and cache are two different backends.

You either use one or the other.

You'd use the cache backend if your use case frequently hits the same files, as they would be kept locally on disk in the cache area.

If that really isn’t your use case, you’d probably just want to use the VFS backend.

The link I posted above has all the information on how to configure cache:

https://rclone.org/cache/

The order is listed here -> https://rclone.org/cache/#risk-of-throttling
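In rclone.conf terms, that order just means the crypt remote wraps the cache remote, which wraps the drive remote (names below are placeholders):

[gdrive]
type = drive

[gdrive_cache]
type = cache
remote = gdrive:media

[gdrive_media_vfs]
type = crypt
remote = gdrive_cache: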