macOS - nfsmount not releasing file handles | not leveraging existing cache contents on re-mount

What is the problem you are having with rclone?

On macOS (Sonoma 14.4) using rclone 1.66.0:

  1. File handles don't appear to be released until I re-mount by running kill <PID_OF_RCLONE_NFSMOUNT> and then rclone nfsmount again on macOS (the cycle is sketched after this list).
  2. When re-mounting, the existing VFS cache is disregarded, with rclone logging 2024/03/17 14:47:38 INFO : vfs cache: cleaned: objects 1168 (was 1168) in use 0, to upload 0, uploading 0, total size 0 (was 0), and newly cached files can then push the --cache-dir contents over --vfs-cache-max-size when I review the directory manually.
    The logs below hadn't yet hit --vfs-cache-max-size, so they don't demonstrate that specific overrun, but I was able to validate it in a previous session yesterday.
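
For context, the remount cycle I'm using looks roughly like this (a minimal sketch; the script name and the pgrep pattern are illustrative rather than my exact setup):

# Sketch of the kill-and-remount cycle from point 1 above
PID=$(pgrep -f 'rclone nfsmount')    # PID of the running nfsmount process
kill "$PID"                          # SIGTERM; rclone unmounts and exits cleanly
./rclone_mount.sh                    # re-run the mount script shown further down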

Run the command 'rclone version' and share the full output of the command.

~ % rclone version
rclone v1.66.0
- os/version: darwin 14.4 (64 bit)
- os/kernel: 23.4.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.22.1
- go/linking: dynamic
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

A union remote composed of SMB and Dropbox backends.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

BASE_PATH="/Users/<USERNAME>/mounts/rclone"
LOG_FILE="${BASE_PATH}"/logs/rclone-union.log

rclone nfsmount \
--user-agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36' \
--allow-other \
--async-read=true \
--dir-cache-time=1m \
--buffer-size=32M \
--timeout=10m \
--gid $(id -u) \
--uid $(id -g) \
--umask=002 \
--vfs-cache-mode=full \
--vfs-cache-max-size=300G \
--vfs-cache-max-age=168h \
--cache-dir="${BASE_PATH}"/cache/rclone-union \
--log-file="${LOG_FILE}" \
--daemon \
-v \
rclone-union: "${BASE_PATH}"/rclone-union

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

~ % rclone config redacted
[db]
type = dropbox
token = XXX

[rclone-db]
type = crypt
remote = db:SI
password = XXX
password2 = XXX
filename_encoding = base32768

[rclone-union]
type = union
upstreams = truenas-smb:atgc/local rclone-db:
create_policy = epff

[truenas-smb]
type = smb
host = XXX
pass = XXX

A log from the command that you were trying to run with the -vv flag

File handle usage snippet after the mount has been running for 8 hours, followed by a remount to show what happens to the cache directory after kill <PID_OF_RCLONE_NFSMOUNT>

2024/03/17 13:34:31 INFO  : vfs cache: cleaned: objects 1103 (was 1103) in use 1103, to upload 0, uploading 0, total size 73.252Gi (was 73.252Gi)
2024/03/17 13:35:31 INFO  : vfs cache: cleaned: objects 1107 (was 1107) in use 1107, to upload 0, uploading 0, total size 76.130Gi (was 76.130Gi)
2024/03/17 13:36:31 INFO  : vfs cache: cleaned: objects 1107 (was 1107) in use 1107, to upload 0, uploading 0, total size 77.934Gi (was 77.934Gi)
2024/03/17 13:37:31 INFO  : vfs cache: cleaned: objects 1107 (was 1107) in use 1107, to upload 0, uploading 0, total size 79.571Gi (was 79.571Gi)
2024/03/17 13:38:31 INFO  : vfs cache: cleaned: objects 1107 (was 1107) in use 1107, to upload 0, uploading 0, total size 82.938Gi (was 82.938Gi)
2024/03/17 13:39:31 INFO  : vfs cache: cleaned: objects 1108 (was 1108) in use 1108, to upload 0, uploading 0, total size 86.197Gi (was 86.197Gi)
2024/03/17 13:40:31 INFO  : vfs cache: cleaned: objects 1108 (was 1108) in use 1108, to upload 0, uploading 0, total size 88.582Gi (was 88.582Gi)
2024/03/17 13:41:31 INFO  : vfs cache: cleaned: objects 1108 (was 1108) in use 1108, to upload 0, uploading 0, total size 90.231Gi (was 90.231Gi)
2024/03/17 13:42:31 INFO  : vfs cache: cleaned: objects 1108 (was 1108) in use 1108, to upload 0, uploading 0, total size 92.517Gi (was 92.517Gi)
2024/03/17 13:43:31 INFO  : vfs cache: cleaned: objects 1108 (was 1108) in use 1108, to upload 0, uploading 0, total size 95.843Gi (was 95.843Gi)
2024/03/17 13:44:31 INFO  : vfs cache: cleaned: objects 1108 (was 1108) in use 1108, to upload 0, uploading 0, total size 99.359Gi (was 99.359Gi)
2024/03/17 13:45:31 INFO  : vfs cache: cleaned: objects 1108 (was 1108) in use 1108, to upload 0, uploading 0, total size 101.423Gi (was 101.423Gi)
2024/03/17 13:46:31 INFO  : vfs cache: cleaned: objects 1108 (was 1108) in use 1108, to upload 0, uploading 0, total size 103.077Gi (was 103.077Gi)
2024/03/17 13:47:31 INFO  : vfs cache: cleaned: objects 1109 (was 1109) in use 1109, to upload 0, uploading 0, total size 105.964Gi (was 105.964Gi)
2024/03/17 13:48:31 INFO  : vfs cache: cleaned: objects 1109 (was 1109) in use 1109, to upload 0, uploading 0, total size 109.425Gi (was 109.425Gi)
2024/03/17 13:49:31 INFO  : vfs cache: cleaned: objects 1109 (was 1109) in use 1109, to upload 0, uploading 0, total size 112.270Gi (was 112.270Gi)
2024/03/17 13:50:31 INFO  : vfs cache: cleaned: objects 1109 (was 1109) in use 1109, to upload 0, uploading 0, total size 114.140Gi (was 114.140Gi)
2024/03/17 13:51:31 INFO  : vfs cache: cleaned: objects 1109 (was 1109) in use 1109, to upload 0, uploading 0, total size 115.807Gi (was 115.807Gi)
2024/03/17 13:52:31 INFO  : vfs cache: cleaned: objects 1109 (was 1109) in use 1109, to upload 0, uploading 0, total size 118.965Gi (was 118.965Gi)
2024/03/17 13:53:31 INFO  : vfs cache: cleaned: objects 1109 (was 1109) in use 1109, to upload 0, uploading 0, total size 122.242Gi (was 122.242Gi)
2024/03/17 13:54:31 INFO  : vfs cache: cleaned: objects 1109 (was 1109) in use 1109, to upload 0, uploading 0, total size 124.993Gi (was 124.993Gi)
2024/03/17 13:55:31 INFO  : vfs cache: cleaned: objects 1109 (was 1109) in use 1109, to upload 0, uploading 0, total size 126.901Gi (was 126.901Gi)
2024/03/17 13:56:31 INFO  : vfs cache: cleaned: objects 1119 (was 1119) in use 1119, to upload 0, uploading 0, total size 128.767Gi (was 128.767Gi)
2024/03/17 13:57:31 INFO  : vfs cache: cleaned: objects 1142 (was 1142) in use 1142, to upload 0, uploading 0, total size 131.796Gi (was 131.796Gi)
2024/03/17 13:58:31 INFO  : vfs cache: cleaned: objects 1149 (was 1149) in use 1149, to upload 0, uploading 0, total size 133.150Gi (was 133.150Gi)
2024/03/17 13:59:31 INFO  : vfs cache: cleaned: objects 1165 (was 1165) in use 1165, to upload 0, uploading 0, total size 134.845Gi (was 134.845Gi)
2024/03/17 14:00:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 137.105Gi (was 137.105Gi)
2024/03/17 14:01:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 140.645Gi (was 140.645Gi)
2024/03/17 14:02:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 144.605Gi (was 144.605Gi)
2024/03/17 14:03:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 147.832Gi (was 147.832Gi)
2024/03/17 14:04:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 152.100Gi (was 152.100Gi)
2024/03/17 14:05:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 154.562Gi (was 154.562Gi)
2024/03/17 14:06:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 157.066Gi (was 157.066Gi)
2024/03/17 14:07:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 160.445Gi (was 160.445Gi)
2024/03/17 14:08:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 163.180Gi (was 163.180Gi)
2024/03/17 14:09:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 167.229Gi (was 167.229Gi)
2024/03/17 14:10:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 170.962Gi (was 170.962Gi)
2024/03/17 14:11:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 174.582Gi (was 174.582Gi)
2024/03/17 14:12:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 177.059Gi (was 177.059Gi)
2024/03/17 14:13:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 178.674Gi (was 178.674Gi)
2024/03/17 14:14:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 182.153Gi (was 182.153Gi)
2024/03/17 14:15:31 INFO  : vfs cache: cleaned: objects 1169 (was 1169) in use 1169, to upload 0, uploading 0, total size 185.503Gi (was 185.503Gi)
2024/03/17 14:16:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 188.224Gi (was 188.224Gi)
2024/03/17 14:17:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 191.302Gi (was 191.302Gi)
2024/03/17 14:18:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 196.140Gi (was 196.140Gi)
2024/03/17 14:19:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 199.808Gi (was 199.808Gi)
2024/03/17 14:24:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 214.011Gi (was 214.011Gi)
2024/03/17 14:25:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 216.423Gi (was 216.423Gi)
2024/03/17 14:26:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 218.886Gi (was 218.886Gi)
2024/03/17 14:27:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 222.021Gi (was 222.021Gi)
2024/03/17 14:28:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 225.185Gi (was 225.185Gi)
2024/03/17 14:29:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 228.271Gi (was 228.271Gi)
2024/03/17 14:30:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 231.474Gi (was 231.474Gi)
2024/03/17 14:31:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 234.022Gi (was 234.022Gi)
2024/03/17 14:32:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 236.419Gi (was 236.419Gi)
2024/03/17 14:33:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 238.658Gi (was 238.658Gi)
2024/03/17 14:34:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 240.993Gi (was 240.993Gi)
2024/03/17 14:35:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 243.390Gi (was 243.390Gi)
2024/03/17 14:36:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 245.954Gi (was 245.954Gi)
2024/03/17 14:37:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 249.023Gi (was 249.023Gi)
2024/03/17 14:38:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 252.241Gi (was 252.241Gi)
2024/03/17 14:39:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 255.536Gi (was 255.536Gi)
2024/03/17 14:40:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 258.705Gi (was 258.705Gi)
2024/03/17 14:41:31 INFO  : vfs cache: cleaned: objects 1170 (was 1170) in use 1170, to upload 0, uploading 0, total size 261.856Gi (was 261.856Gi)
2024/03/17 14:42:17 INFO  : Signal received: terminated
2024/03/17 14:42:17 INFO  : Dropbox root 'SI': Committing uploads - please wait...
2024/03/17 14:42:17 ERROR : /Users/<USER_NAME>/mounts/rclone/rclone-union: Unmounted rclone mount
2024/03/17 14:42:17 INFO  : Exiting...
Sun Mar 17 14:42:37 CDT 2024: STARTING RCLONE-UNION MOUNT #################
2024/03/17 14:42:38 INFO  : union root '': poll-interval is not supported by this remote
2024/03/17 14:42:38 INFO  : vfs cache: cleaned: objects 1168 (was 1168) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2024/03/17 14:42:38 NOTICE: NFS Server running at 127.0.0.1:52195
2024/03/17 14:42:42 INFO  : Dropbox root 'SI': Committing uploads - please wait...
Sun Mar 17 14:42:42 CDT 2024: RCLONE-UNION MOUNT STARTED SUCCESSFULLY #################
2024/03/17 14:43:38 INFO  : vfs cache: cleaned: objects 1168 (was 1168) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2024/03/17 14:44:38 INFO  : vfs cache: cleaned: objects 1168 (was 1168) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2024/03/17 14:45:38 INFO  : vfs cache: cleaned: objects 1168 (was 1168) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2024/03/17 14:46:38 INFO  : vfs cache: cleaned: objects 1168 (was 1168) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2024/03/17 14:47:38 INFO  : vfs cache: cleaned: objects 1168 (was 1168) in use 0, to upload 0, uploading 0, total size 0 (was 0)

Actual cache directory size after remounting. I would expect rclone nfsmount to show 261.856Gi, not 0Gi.

~ % date && du -h -d 1 ~/mounts/rclone/cache/rclone-union
Sun Mar 17 14:45:06 CDT 2024
4.6M    /Users/<USER_NAME>/mounts/rclone/cache/rclone-union/vfsMeta
262G    /Users/<USER_NAME>/mounts/rclone/cache/rclone-union/vfs
262G    /Users/<USER_NAME>/mounts/rclone/cache/rclone-union
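
For anyone checking the same thing, this is roughly how I'm comparing the on-disk cache with the "total size" in rclone's INFO log; just a sketch, with paths following the --cache-dir layout above:

# Compare the on-disk cache with what rclone reports
CACHE=~/mounts/rclone/cache/rclone-union
find "${CACHE}/vfs" -type f | wc -l        # cached data files
find "${CACHE}/vfsMeta" -type f | wc -l    # per-file metadata entries rclone keeps
du -sh "${CACHE}/vfs"                      # actual bytes on disk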

Why would you expect it to be 0G?

Sorry, I meant I'd expect the output to read

2024/03/17 14:47:38 INFO : vfs cache: cleaned: objects 1168 (was 1168) in use 0, to upload 0, uploading 0, total size 261.856Gi (was 261.856Gi)

instead of

2024/03/17 14:47:38 INFO : vfs cache: cleaned: objects 1168 (was 1168) in use 0, to upload 0, uploading 0, total size 0 (was 0)

as the vfs/vfsMeta cache files are still populated but rclone states it's working on an empty cache after a remount.

You'd want to put in debug and share the actual full log.

Cache only gets cleaned up by polling so killing it would have no impact.

I realized I needed to increase verbosity after posting this, so I re-mounted with -vv. I'm used to the cache never getting discarded with rclone mount, which is why I wanted to raise this and see if there's a reason for the difference between mount and nfsmount. FUSE on macOS is so painful to manage that the upside of nfsmount is great, but I'm wondering if I've stumbled onto something that others have also noticed since the 1.66.0 release (which is when I adopted it).

The log output with -vv is _very busy_ with NFS directory reads and ongoing client access to the mount. I'd prefer not to share it without some sanitization, if that's acceptable. I'm also getting pulled away by family right now, so I'll have to pick this up later tonight. I really appreciate the quick engagement and don't want to lose momentum; I'll definitely be back on this later tonight.

So, in the hours since I last posted here, that log with debug enabled grew to 30GB. To avoid that much output, I renamed my "production" mount and forked the mount script to use the same locations but with a much smaller --vfs-cache-max-size of 500M. With this new mount isolated from anything accessing it, and with its own log, I was able to manually flip through some files to build the cache up and see what happened. The resulting log file from this testing is 20.9MB, so I couldn't paste it anywhere directly; it can be downloaded from this link.

The updated script used for mounting is below for reference.

#!/bin/bash

set -eux

BASE_PATH="/Users/dabigc/mounts/rclone"
LOG_FILE="${BASE_PATH}"/logs/rclone-union.log
# PORT=59876

echo "$(date): STARTING RCLONE-UNION MOUNT #################" >> "${LOG_FILE}"

rclone nfsmount \
--user-agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36' \
--allow-other \
--async-read=true \
--dir-cache-time=1m \
--buffer-size=32M \
--timeout=10m \
--gid $(id -u) \
--uid $(id -g) \
--umask=002 \
--vfs-cache-mode=full \
--vfs-cache-max-size=500M \
--vfs-cache-max-age=168h \
--cache-dir="${BASE_PATH}"/cache/rclone-union \
--log-file="${BASE_PATH}"/logs/rclone-union.log \
--daemon \
-vv \
rclone-union: "${BASE_PATH}"/rclone-union

echo "$(date): RCLONE-UNION MOUNT STARTED SUCCESSFULLY #################" >> "${LOG_FILE}"

Cache usage

Empty cache for mount before mounting

~ % date && du -h -d 1 ~/mounts/rclone/cache/rclone-union
Sun Mar 17 21:12:26 CDT 2024
  0B    /Users/dabigc/mounts/rclone/cache/rclone-union

Mount via script initiated

~ % ./rclone_mount_testing.sh
+ BASE_PATH=/Users/dabigc/mounts/rclone
+ LOG_FILE=/Users/dabigc/mounts/rclone/logs/rclone-union.log
++ date
+ echo 'Sun Mar 17 21:12:34 CDT 2024: STARTING RCLONE-UNION MOUNT #################'
++ id -u
++ id -g
+ rclone nfsmount '--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36' --allow-other --async-read=true --dir-cache-time=1m --buffer-size=32M --timeout=10m --gid 505 --uid 20 --umask=002 --vfs-cache-mode=full --vfs-cache-max-size=500M --vfs-cache-max-age=168h --cache-dir=/Users/dabigc/mounts/rclone/cache/rclone-union --log-file=/Users/dabigc/mounts/rclone/logs/rclone-union.log --daemon -vv rclone-union: /Users/dabigc/mounts/rclone/rclone-union
++ date
+ echo 'Sun Mar 17 21:12:40 CDT 2024: RCLONE-UNION MOUNT STARTED SUCCESSFULLY #################'

After unmounting the first mount and building up the cache by accessing some files from it

~ % date && du -h -d 1 ~/mounts/rclone/cache/rclone-union
Sun Mar 17 21:15:54 CDT 2024
160K    /Users/dabigc/mounts/rclone/cache/rclone-union/vfsMeta
475M    /Users/dabigc/mounts/rclone/cache/rclone-union/vfs
475M    /Users/dabigc/mounts/rclone/cache/rclone-union

After re-mounting, rclone doesn't properly indicate the actual cache size that's still on disk

On line 55821 of the log, it can be observed that after re-mounting rclone doesn't report the actual cache directory size shown above. I'm used to the end of that log line reading 475Mi instead of 1.996Mi after a re-mount.

2024/03/17 21:16:25 INFO  : vfs cache: cleaned: objects 40 (was 40) in use 0, to upload 0, uploading 0, total size 1.996Mi (was 1.996Mi)

In this testing I did observe rclone clean the cache down to <300M and then build it back up on the second mount, so it seems I've been monitoring rclone a certain way (via its INFO logs) and that's not necessarily how I should be monitoring cache usage anymore (or at least not with rclone nfsmount).

File handles

It can be observed in the logs (on lines like the INFO entry at 55537) that rclone keeps file handles in use. I'd expect files to be released once they're no longer actively in use.

2024/03/17 21:15:35 INFO  : vfs cache: cleaned: objects 42 (was 42) in use 42, to upload 0, uploading 0, total size 486.845Mi (was 684.198Mi)

My main concern here is hitting limits where rclone crashes or needs re-mounting on a regular cadence. That may not actually be a concern, though, as my "production" mount, which sees a lot of client usage, hasn't gone nuts since it was re-mounted earlier today.

~ % tail -fn100 ~/mounts/rclone/logs/<REDACTED>-union.log | grep INFO
2024/03/17 21:47:00 INFO  : vfs cache: cleaned: objects 128 (was 128) in use 127, to upload 0, uploading 0, total size 296.260Gi (was 296.260Gi)
2024/03/17 21:48:00 INFO  : vfs cache: cleaned: objects 128 (was 128) in use 127, to upload 0, uploading 0, total size 296.443Gi (was 296.443Gi)
2024/03/17 21:49:00 INFO  : vfs cache: cleaned: objects 128 (was 128) in use 127, to upload 0, uploading 0, total size 296.619Gi (was 296.619Gi)
2024/03/17 21:50:00 INFO  : vfs cache: cleaned: objects 128 (was 128) in use 127, to upload 0, uploading 0, total size 296.795Gi (was 296.795Gi)
2024/03/17 21:51:00 INFO  : vfs cache: cleaned: objects 128 (was 128) in use 127, to upload 0, uploading 0, total size 296.982Gi (was 296.982Gi)
---
 ~ % lsof | awk '{print $1}' | sort | uniq -c | sort -rn | grep -i rclone
 154 rclone

I had observed quite a build-up and slowdown when scanning this mount for the first time with Plex. It was about 5,000 file handles per the in-use count in the rclone INFO log and lsof | awk '{print $1}' | sort | uniq -c | sort -rn | grep -i rclone. Perhaps that would have cleared up given enough time, but this was a couple of hours after the library scan itself had completed.
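
For completeness, counting handles on the rclone process directly (rather than grepping all of lsof) is probably a cleaner way to watch this; a minimal sketch, assuming a single rclone process that pgrep can find by name:

# Count open file descriptors held by the nfsmount process only
RCLONE_PID=$(pgrep -f 'rclone nfsmount')
lsof -p "${RCLONE_PID}" | wc -l
# Narrow the count to handles on cached files under --cache-dir
lsof -p "${RCLONE_PID}" | grep -c 'cache/rclone-union/vfs'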

Fin (maybe)

I don't know whether to classify anything here as an improvement request or simply a difference in rclone nfsmount's behaviour, but it seems that outside of large scans (which I don't do regularly) driving up file handle usage on the mount, I should be OK in general use. The cache logging is odd to me, but cleanup still seems to happen appropriately (given time to do so), even if the INFO output doesn't have the consistency I've come to expect over years of using rclone on Linux and now on macOS. If there are any nuggets of useful context here, great. Otherwise, this may just be a post that saves someone else in a similar situation from asking the questions I posed originally. If there are thoughts or other experiments the maintainers would like me to run in my environment around these differences, I'm game. Thanks!

Logs look perfectly normal and expected.

If you set the size to 500M, it tries to keep to that size, but if it overfills, it cleans up every minute.
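
For reference, a minimal sketch of the flags that govern this (--vfs-cache-poll-interval defaults to 1m, which matches the once-a-minute lines below; the remote and mount point are placeholders from the script above):

# The cache cleaner runs every --vfs-cache-poll-interval and only evicts
# files that are no longer in use, so --vfs-cache-max-size is a soft target.
rclone nfsmount rclone-union: "${BASE_PATH}"/rclone-union \
--vfs-cache-mode=full \
--vfs-cache-max-size=500M \
--vfs-cache-max-age=168h \
--vfs-cache-poll-interval=1m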

Normal polling every minute:

2024/03/17 21:12:35 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2024/03/17 21:13:35 INFO  : vfs cache: cleaned: objects 12 (was 12) in use 12, to upload 0, uploading 0, total size 140.108Mi (was 140.108Mi)
2024/03/17 21:14:35 INFO  : vfs cache: cleaned: objects 14 (was 14) in use 14, to upload 0, uploading 0, total size 163.404Mi (was 162.404Mi)
2024/03/17 21:15:35 INFO  : vfs cache: cleaned: objects 42 (was 42) in use 42, to upload 0, uploading 0, total size 486.845Mi (was 684.198Mi)
2024/03/17 21:16:25 INFO  : vfs cache: cleaned: objects 40 (was 40) in use 0, to upload 0, uploading 0, total size 1.996Mi (was 1.996Mi)
2024/03/17 21:17:26 INFO  : vfs cache: cleaned: objects 50 (was 50) in use 21, to upload 0, uploading 0, total size 248.214Mi (was 248.214Mi)
2024/03/17 21:18:25 INFO  : vfs cache: cleaned: objects 51 (was 51) in use 24, to upload 0, uploading 0, total size 334.640Mi (was 334.640Mi)
2024/03/17 21:19:25 INFO  : vfs cache: cleaned: objects 40 (was 61) in use 40, to upload 0, uploading 0, total size 490.389Mi (was 613.049Mi)
2024/03/17 21:20:25 INFO  : vfs cache: cleaned: objects 48 (was 48) in use 48, to upload 0, uploading 0, total size 497.333Mi (was 923.672Mi)

Cache area doesn't seem to change.

"vfs cache: data root is" rclone-union.log
2024/03/17 21:12:35 DEBUG : vfs cache: data root is "/Users/dabigc/mounts/rclone/cache/rclone-union/vfs/rclone-union"
2024/03/17 21:16:25 DEBUG : vfs cache: data root is "/Users/dabigc/mounts/rclone/cache/rclone-union/vfs/rclone-union"

If you issue a 'kill' to the mount, it won't release file handles and such until the processes hitting the mount are stopped. On Linux/macOS, you want to ensure everything hitting the mount is stopped before you kill a network mount. Most folks will either put dependencies on systemd to shut things down in order, or you can use something like 'fuser -cu' as root to see what still has open files and stop those processes.
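
Something along these lines, for example (the mount point path is taken from the script above; this is only a sketch):

# See which processes still have files open under the mount before killing rclone
MOUNT_POINT="/Users/dabigc/mounts/rclone/rclone-union"
sudo fuser -cu "${MOUNT_POINT}"     # PIDs (and owners) of processes using the mount
sudo lsof "${MOUNT_POINT}"          # alternative: list the open files themselves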

The cache area doesn't care though as it'll clean up on the cycles as seen in the logs.

Welp, that didn't hold true for very long. After re-mounting my "production" mount to remove debug logging last night, I woke up to a full disk this morning: the cache had grown to 576GB even though --vfs-cache-max-size is set to 300GB. The cache was at 276GB before re-mounting last night and added an additional 300GB overnight, so cleanup isn't actually working, which was my original concern. It seems the process for me is going to have to be unmount, delete the cache, and re-mount whenever I need to re-mount or reboot (sketched below). If others are seeing the same thing with rclone nfsmount, it'd be great to hear whether it's something unique to my config.
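
For now, that workaround looks roughly like this on my side (a sketch only; the remount script name is illustrative):

# Stop the mount, clear the stale cache, and remount
kill "$(pgrep -f 'rclone nfsmount')"                          # rclone unmounts on SIGTERM
rm -rf ~/mounts/rclone/cache/rclone-union/vfs ~/mounts/rclone/cache/rclone-union/vfsMeta
./rclone_mount.sh                                             # remount with the script above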

You'd want to share the log with the issue as none of the logs have shown any problems with the cache cleaning up.

The cache can go over the size limit depending on how you use it, as it's not a hard limit.

Will do. I'll be able to do more testing tomorrow night to reproduce the issue. I have a bit less flexibility without re-mounting, as my attempts to use --rc also failed on macOS, so I pulled it from my nfsmount command. I'll dig into that as well to see if it's something unique to my system; it's another thing I used heavily on Linux and need to figure out why I'm having trouble with on macOS. Thanks again for the help!

Killing any network mount is tricky. You really want to ensure you stop all other processes before you kill it.

I've been unable to reproduce the cache cleanup issues with a smaller data set. It only seems to happen with --vfs-cache-max-size set to a larger size (like 300G) for me.

I was able to resolve my inability to use --rc. The failure happened specifically when using the --daemon flag, and adding --daemon-wait 0 allowed rclone nfsmount to succeed in daemon mode while still exposing remote control of the running rclone process (a sketch of the flags is below). At this point I'm backing away slowly from this issue unless someone else reports something similar and we need to compare notes. Thanks!
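
For anyone hitting the same thing, the relevant flag combination looks roughly like this (a sketch with only the pertinent flags; the rc address shown is just the default, and vfs/stats is one example of what I plan to query):

# --daemon-wait 0 lets nfsmount daemonize without waiting for the mount to appear,
# which is what was tripping up --rc for me on macOS
rclone nfsmount rclone-union: "${BASE_PATH}"/rclone-union \
--vfs-cache-mode=full \
--daemon \
--daemon-wait 0 \
--rc \
--rc-addr 127.0.0.1:5572

# The running mount can then be queried remotely, e.g. cache statistics:
rclone rc vfs/stats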
