First of all, thanks for making and maintaining Rclone - it's amazing, and I use it daily in my homelab and other projects. I haven't seen this asked in the forum or on GitHub yet, so I wanted to ask here first whether this is an existing feature that I'm failing to set up correctly, or whether it could be a possible enhancement.
I was trying to mount a subdirectory of an NFS share (exported with rclone serve nfs) into a pod in my Kubernetes cluster, and was surprised when the pod mounted the root directory of the exported share instead (which otherwise worked perfectly, btw). After failing to find any obvious errors in my config, I tried to mount the same subdirectory directly on my laptop, which also resulted in the root directory being mounted.
The command I used was:
```
sudo mount -t nfs -o port=2049,mountport=2049,tcp <IP>:/my-subdirectory /home/myuser/foo
```
Mounting worked without issues, but instead of /my-subdirectory, / was mounted.
Is it possible to achieve this with Rclone already, or is this expected? I wasn't able to find any options for tweaking the NFS export, so I'm guessing it's not possible. Maybe the share needs to be exported with the subtree_check option, but I'm not 100% sure, as I'm not very well versed in NFS export or mount options, and I couldn't try it for the reasons mentioned above.
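For reference, on a standard kernel NFS server that option would go in /etc/exports - something like this (the path here is just an example):

```
# Example /etc/exports entry with subtree_check enabled (example path)
/srv/export  *(rw,sync,subtree_check)
```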
There's not a ton of information about rclone serve nfs in general, because it's still experimental, but I'm curious to hear if anyone knows more about this!
Before looking into rclone serve nfs's inner workings, I would test with a share served by a standard NFS server. Can you mount a subdirectory in that case? Or maybe it is general NFS behaviour.
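For example, something like this (paths are just an example):

```
# Export a directory with the kernel NFS server, then try mounting a subdirectory
echo '/srv/export *(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
sudo mount -t nfs <IP>:/srv/export/my-subdirectory /mnt/test
```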
And obviously there is a very simple workaround here - run rclone serve nfs with the directory you plan to mount. You can run multiple instances of rclone serve if needed.
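For example, using the remote and path from your posts (adjust as needed):

```
# Serve only the subdirectory you actually want to mount
rclone serve nfs mymount:my-subdirectory --addr :2049
```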
Thanks for the reply, @kapitainsky! Before this feature was available in Rclone itself, I used a simple Docker container that did something similar. It was based on an Alpine image with Rclone and NFS, where I could mount an Rclone config into the container and it would export the rclone mount as an NFS share. This worked perfectly (with subdirectories as well), and I'm still using it while experimenting with Rclone's rclone serve nfs functionality. I used NFS v4, though, instead of Rclone's NFS v3, but that shouldn't make much of a difference for this use case.
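Conceptually, the container did something like this (from memory - the paths and export options here are illustrative, not the exact image contents):

```
# Rough sketch of what the container did (from memory; paths illustrative)
rclone mount mymount: /export --allow-other &    # mount the rclone remote
echo '/export *(rw,fsid=0,no_subtree_check)' >> /etc/exports
exportfs -ra                                     # export it via the kernel NFS server
rpc.nfsd 8 && rpc.mountd
```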
Thanks for your workaround idea, but unfortunately, I need to be able to mount subdirectories dynamically for this to work well with Kubernetes' csi-driver-nfs.
What I'm doing for the time being is mounting the NFS share's root directory in each pod and then only accessing the specific subdirectory that the given pod needs. However, this is not optimal, because technically each pod still has access to all the data on the entire NFS share.
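Outside of Kubernetes, a bind mount would at least narrow down what's visible (just an illustration - this doesn't help me with csi-driver-nfs):

```
# Mount the export root, then bind-mount only the needed subdirectory
sudo mount -t nfs -o port=2049,mountport=2049,tcp <IP>:/ /mnt/nfs-root
sudo mount --bind /mnt/nfs-root/my-subdirectory /home/myuser/foo
```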
Short update: I tried this with different CSI providers (democratic-csi and csi-driver-nfs), and I also mounted it manually on my laptop using the command from the docs (but with a subdirectory path instead of /), as described in my previous post. The behavior was the same in all cases: any path I put after <IP>:, even an invalid or nonexistent one, resulted in the root directory (the one I exported using rclone serve nfs mymount:) being mounted.
To add some additional info: the NFS share I'm serving with Rclone is a mounted SMB share chained through a crypt remote (SMB -> Crypt -> NFS). I wasn't sure whether this setup would actually work, but I decided to just try it, and it seems to work OK, even though it's slightly unstable in my pods (I occasionally get stale NFS file handle errors, and the mounted volumes become unavailable in the pods). And I was pretty amazed that this entire setup is possible with just one command:
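It was roughly this (from memory - "mycrypt:" here is just a stand-in for my actual crypt remote wrapping the SMB share):

```
# Roughly what I ran (from memory; "mycrypt:" stands in for my crypt remote)
rclone serve nfs mycrypt: --addr :2049 --vfs-cache-mode full
```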
I used the exact same command (except with a dir-cache-time of 9999) in the past with a Dropbox mount (Dropbox -> Crypt -> NFS), and if I recall correctly, that worked fine when I mounted subdirectories via NFS (though unfortunately, I'm not 100% sure that was actually the case).