How to mount WebDAV in macOS and customize how and where files get cached locally

I have read through the docs, and feel a bit lost with the deluge of options available. I tried googling for some guides but they are either trying to do more advanced setups, or they are so basic that they don't answer my question.

I want to mount shares from my NAS using WebDAV. I have different shares for different purposes, so I want to set them up as separate mounts in macOS. What I am trying to accomplish seems like it should be possible...

  1. Either mount the share to a local directory, or have it show up as a disk in macOS. Preferably both: having it show up as a disk is really convenient, but some programs complain about "network drives", so having it mounted as a local folder bypasses that issue.
  2. Have real-time, or as close to real-time as possible, file updates. This has been a massive issue with many other programs I have tried.
  3. Be able to have some shares that will automatically cache any files opened. I'd like to specify where the files get cached on my local system, for how long (preferably resetting the timer each time the file is opened), and a maximum cache size, with the ability to automatically un-cache old files if the cache gets full.
  4. Be able to manually set files or folders to be cached.
  5. If the file on the server gets changed, then the same changes should also update the file cached locally.

I do a lot of photography and I store everything on my NAS; nothing other than the programs resides on my MacBook Pro. I want to be able to set up 2 shares for this:

  1. My catalog files; this is basically the database for the program. I would want the files on the server to update as close to immediately as possible, but I'd also like to keep them cached locally with a cache lifetime of around 90 days. This way, when I am actively working on something, the catalog file is cached locally, but my NAS is kept up to date with all changes as they happen. If I go more than 3 months without opening the catalog it will be un-cached to save space, and if I open it again later it will re-cache locally for another 90 days.

  2. The actual RAW files. This directory is massive, so what I would like to happen with this share is the same as above, except I want to set the lifetime of the cached files to 14 days and a maximum cache size of 500GB. If possible, it would be great if I could set an option to not cache any files older than X date even if they are opened. This way I can access my entire library without having to do anything other than open the program like normal. My latest shoots will always be cached locally, with any changes sent back to the server, and I could still access my old shoots the same way, just not cached, so they'll load a bit slower.

There are several other things I am hoping to accomplish, but this seemed the most straightforward to start with.

I have tried and paid for numerous different programs to try to accomplish this. In order to achieve a somewhat decent workflow I am currently running WebDrive and MountainDuck, depending on what the purpose of the mounted share is. Both programs are lacking in many ways, and both have stability issues. I am really hoping rclone is the answer.

Hello and welcome to the forum,

Are you planning to run rclone on the NAS or the Mac?

Most NAS units can run a WebDAV server without the need for additional software, though rclone can also act as a WebDAV server: rclone serve webdav remote:
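For reference, a minimal sketch of serving a local folder that way; the folder path and port here are just placeholders:

  rclone serve webdav /srv/photos --addr :8080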

I'm wanting to run it on my Mac. The NAS already has a WebDAV server set up.

Edit: Sorry it took me so long to reply. I was trying, but I kept getting "Error 403" when trying to submit my reply, and I couldn't log out. Had to reset my browser cache.

To clarify my post a little bit, since I got so long-winded.

I have a WebDAV server already running on my NAS. I want to use rclone as a WebDAV client on my Mac, but I want to be able to control how files get cached locally.

Are you planning to run an rclone mount pointing at your WebDAV server? If so, the VFS caching options are documented on the rclone mount page: rclone mount
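As a rough sketch of the basic shape, assuming you first create a WebDAV remote with rclone config (the remote name nas and the paths below are just examples):

  # One-time setup: run "rclone config", add a new remote, choose the
  # "webdav" type, and enter your NAS URL, vendor and credentials.

  # Then mount a share into an empty local folder:
  mkdir -p ~/NAS/catalogs
  rclone mount nas:catalogs ~/NAS/catalogs

With no extra flags that is a plain mount with no local file caching; the caching behaviour is controlled by the VFS flags.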

I just read over the entire page for the 2nd time. It's an overwhelming amount of options, many of which I think I understand the use for, but I'm not 100% sure. I'm worried I'm going to test it out, think it's working, and then screw up a bunch of my data not realizing I misunderstood something.

I have been googling for examples of a similar use case to mine and have found nothing.

I agree with you there!

What we normally recommend is people start with no options, and only add them for specific reasons.

You will want --vfs-cache-mode full to enable local caching.

You then want to think about these options for controlling the size of the cache:

  --vfs-cache-max-age duration             Max age of objects in the cache (default 1h0m0s)
  --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)

Use this option to say where you want the cache. Without it the cache will go somewhere sensible, so you don't have to set it:

  --cache-dir string                       Directory rclone will use for caching (default "/home/USER/.cache/rclone")

To control the time that objects can be stale you want to use:

  --dir-cache-time duration                Time to cache directory entries for (default 5m0s)

So by default directory listings can be up to 5 minutes old. Since this is coming from a local server, setting the time low (to 5s, say) won't be a problem.
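Putting that together for your two shares, a sketch might look like this; the remote name (nas), share paths, mount points and cache directory are just examples, and the max-age values are your 90-day and 14-day lifetimes expressed in hours:

  # Catalog share: keep cached files for up to ~90 days (2160h)
  rclone mount nas:catalogs ~/NAS/catalogs \
    --vfs-cache-mode full \
    --vfs-cache-max-age 2160h \
    --dir-cache-time 5s

  # RAW share: ~14 days (336h), cap the cache at 500 GB,
  # and put the cache on a drive of your choosing
  rclone mount nas:raw ~/NAS/raw \
    --vfs-cache-mode full \
    --vfs-cache-max-age 336h \
    --vfs-cache-max-size 500G \
    --dir-cache-time 5s \
    --cache-dir /Volumes/FastSSD/rclone-cache

As far as I know the max age is measured from when a cached file was last accessed, so opening a file again effectively resets its timer, which matches what you described for the catalog share.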

I think that covers most of your requirements in your original post.

You are awesome, thank you! It seems I was mostly on the right track.

I do have a few questions about the cache.

  1. If I keep some files cached for a long time, like several weeks, will changes made to those files locally sync back to the server automatically, or do I need to run a command? And what happens if the remote file changes? Will my locally cached version be updated? And on the off chance a conflict arises, what happens?

  2. Is there a command I can use to manually cache a file or directory for a specific amount of time?

Sorry if these questions seem noobish.

Yes, they will be synced back to the server after they are closed, once the VFS write-back delay (normally 5s) has passed.
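If you want to change how long rclone waits after a file is closed before starting the upload, that delay has its own flag:

  --vfs-write-back duration                Time to writeback files after last file close (default 5s)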

Rclone will check the fingerprint of the file and fetch the new one as necessary.

It will be updated when it is used, yes.

There is a window for a conflict to arise if the file is being edited locally and remotely at the same time. What will happen in that case is that rclone will save your local changes over the remote changes, I think.

No, you are at the mercy of the cache here!

Fantastic, thank you again for explaining all that. I'm looking forward to getting this set up and giving it a try.

No, you are at the mercy of the cache here!

I am genuinely surprised by this... I wonder if there is a simple shell command that will just read the entire contents of a file in the background and then immediately close, basically forcing the file to be cached... I'm going to have to do some searching on this.

My first thought was to use touch, but because of the way the "full mode" caching works:

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

That doesn't sound like it would work; I would need something that would read every bit of the file or folder to cache it... hmmm...

You can cat the file to /dev/null, or use something like rclone md5sum to read a lot of files.
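For example, a rough way to warm the cache for a whole folder through the mount (the mount path below is just a placeholder):

  # Read every file under the folder via the mount and discard the data;
  # this makes rclone download the full contents into the VFS cache.
  find ~/NAS/raw/2024-06-shoot -type f -exec cat {} + > /dev/null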
