VFS cache granularity = Windows "always available offline"

First off: I'm new here, I've just tried rclone, and I'm excited. Great tool.

I work in the film industry, and the company I work for may have us continue working remotely for a very extended period. I've been educating myself on all the cloud has to offer, so I have some answers when we eventually discuss cutting out (or minimizing) our on-prem IT.

Plain cloud storage (AWS, Azure, Wasabi...) is not immediately useful for us on its own, because it's our workstation applications that need to access the many, many files (frame sequences). Rclone and the VFS cache let us use cloud storage in the way that makes the most sense for us.

But since we are dependent on large file sets, I figure we'll likely dedicate an SSD (500 GB-1 TB) to the file cache. I admit I'm guessing; I haven't run a test scenario because I don't have a large cloud storage account to test with. Remember, this is mostly theoretical for now, though I hope to realize it soon.

I would like to find a way to pre-cache large folders the night before the user needs them (initiated by the end user).
I am also looking to achieve something like the Windows "always available offline" behaviour: a cache that never expires for that same large folder. Because what's the most devastating problem every remote worker will eventually experience? Loss of internet. Or you know you're going to be traveling, so you cache your project files before you go.

So maybe I'm thinking of something like a vfs-cache-mode "sync"... but it can't be for everything on the cloud, of course, only select folders.

Anyway, this feature (or the ability to configure rclone to work this way) would be the key to a solution I can bring to our remote pipeline. I have checked out LucidLink, and they don't appear to offer as granular control over file caching as you do. I would love to hear your thoughts.

Thank you.

You'd want to use the latest beta with the newly revamped --vfs-cache-mode full feature.
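For a sense of what that mount could look like (a sketch only; remote:projects, the paths, and the size/age values are placeholders, not tested recommendations), you might dedicate the SSD to a large, long-lived cache:

rclone mount remote:projects /path/to/your/mountpoint \
    --vfs-cache-mode full \
    --cache-dir /mnt/ssd/rclone-cache \
    --vfs-cache-max-size 500G \
    --vfs-cache-max-age 9999h

Setting --vfs-cache-max-age very high stops cached files expiring on a timer, which is the closest current equivalent to "always available offline"; files can still be evicted if the cache grows past --vfs-cache-max-size.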

I guess the easiest way of doing this would be to run a little script to get the files into the cache.

The best way to do this might be with rclone itself, so you do:

rclone cat --discard /path/to/your/mountpoint/path/to/dir

You can use rclone's include and exclude filters here.

This would pull the files into the cache by reading them and discarding the contents.
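Put together as a small overnight script (a sketch: the directory and the *.exr include pattern are assumptions for a frame-sequence workflow), it might look like this:

#!/bin/sh
# Warm the VFS cache: read every matching file through the mount
# and throw the bytes away, leaving copies behind in the cache.
rclone cat --discard \
    --include "*.exr" \
    /path/to/your/mountpoint/path/to/dir

The end user could run this before traveling, or schedule it with cron the night before the files are needed.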

A more sophisticated way would be to implement a call in the API to say "pull this file into the cache, or return if it is already up to date". That would be possible too with a bit more work.
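To be clear, no such call exists yet, so the vfs/fetch name below is purely hypothetical syntax for what it might look like once implemented (the mount would need to be started with --rc):

rclone rc vfs/fetch dir=path/to/dir   # hypothetical - not implemented

For comparison, the existing vfs/refresh call only re-reads directory listings into the metadata cache; it doesn't download file contents:

rclone rc vfs/refresh dir=path/to/dir recursive=true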

Can you explain that more clearly?

Which part? Can you elaborate on what you'd like me to clarify?
