FWIW, I honestly believe investigating the performance issues on the synology running rclone is a rabbit hole for now. I'd prefer not to get tunnel vision on it and derail the topic, which is about storing the VFS cache on an NFS mount. But here is the information you requested:
The biggest issue was inconsistent speeds for NFS clients accessing the VFS mount exported via NFS, varying wildly between a few KB/sec and 50+ MB/sec. Those same clients have zero issues accessing NFS exports from non-VFS/non-rclone mounts. The synology would hit a load average of over 40 when uploading files via rclone mount or via rclone copy/move, yet a cron job on another machine that runs an rclone copy job from an NFS export on the synology completes with zero issues.
It honestly seems like the synology is too lean in its current hardware configuration to run rclone on it directly.
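For reference, the export itself is nothing exotic. One detail worth noting for anyone reproducing this: a FUSE filesystem (which an rclone mount is) needs an explicit fsid= option to be NFS-exportable at all, so the export line looks along these lines (the path, subnet, and fsid value here are illustrative, not copied from my box):

```
/mnt/gdrive 192.168.1.0/24(rw,async,no_subtree_check,fsid=101)
```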
The rclone.conf is simple:
[gdrive]
type = drive
client_id =
client_secret =
scope = drive
token =
team_drive =
[gcrypt]
type = crypt
remote = gdrive:/gdrive
password =
password2 =
Here are several of the mount commands I tried; I experienced the same performance issues with all of them. Bear in mind I tried many more combinations than just these: small buffer with large read-ahead/chunk size, large buffer with small read-ahead/chunk size, small buffer and chunk size with large read-ahead, and so on. The largest setting I tried for any of those options was 256M (which I know is excessive for a buffer on a system with 2 GB of RAM).
/bin/rclone mount gcrypt: /mnt/gdrive \
--umask=002 \
--gid=65536 \
--uid=1028 \
--dir-cache-time 1000h \
--poll-interval 15s \
--cache-dir /volume1/vfs \
--buffer-size 10M \
--vfs-cache-mode full \
--vfs-read-ahead 100M \
--vfs-read-chunk-size 100M \
--transfers 4 \
--log-level DEBUG \
--log-file=/volume1/vfs/rclone.log
/bin/rclone mount gcrypt: /mnt/gdrive \
--umask=002 \
--gid=65536 \
--uid=1028 \
--dir-cache-time 1000h \
--poll-interval 15s \
--cache-dir /volume1/vfs \
--buffer-size 10M \
--vfs-cache-mode full \
--vfs-read-ahead 10M \
--vfs-read-chunk-size 10M \
--transfers 4 \
--log-level DEBUG \
--log-file=/volume1/vfs/rclone.log
/bin/rclone mount gcrypt: /mnt/gdrive \
--umask=002 \
--gid=65536 \
--uid=1028 \
--dir-cache-time 1000h \
--poll-interval 15s \
--cache-dir /volume1/vfs \
--vfs-cache-mode full \
--transfers 4 \
--log-level DEBUG \
--log-file=/volume1/vfs/rclone.log
/bin/rclone mount gcrypt: /mnt/gdrive \
--umask=002 \
--gid=65536 \
--uid=1028 \
--dir-cache-time 1000h \
--poll-interval 15s \
--cache-dir /volume1/vfs \
--buffer-size 50M \
--vfs-cache-mode full \
--vfs-read-ahead 50M \
--vfs-read-chunk-size 50M \
--transfers 4 \
--log-level DEBUG \
--log-file=/volume1/vfs/rclone.log
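To put a number on why the 256M setting worried me: rclone can hold up to --buffer-size in RAM per open file, so a rough worst case (my assumption: concurrent open files roughly tracking --transfers) works out like this:

```shell
# Rough worst-case read-buffer memory for a mount:
# each open file can buffer up to --buffer-size in RAM.
BUFFER_MB=256   # the largest value I tried
OPEN_FILES=4    # assumption: roughly --transfers concurrent files
echo "$(( BUFFER_MB * OPEN_FILES )) MB"   # prints "1024 MB"
```

That's half the NAS's 2 GB before the VFS cache, DSM, and everything else get a look in, which is why most of my attempts used much smaller buffers.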
Here are two of the rclone move/copy jobs that experienced issues when run from the synology but none when run from another system (with paths changed). FWIW, I adjusted the drive chunk size up and down and saw no change:
/bin/rclone copy /volume1/VMShare/dump/ gcrypt:/VMBackup --ignore-existing -P --drive-chunk-size 25M --tpslimit 1
/bin/rclone move /volume1/mnt/gdrive/ gcrypt:/ --log-file /volume1/vfs/rclone-move.log -v --fast-list --drive-stop-on-upload-limit --min-age 3d --tpslimit 1 --delete-empty-src-dirs
There are quite a few more, but they all resulted in the same performance characteristics. I suspect the common denominator here is the 2 GB of RAM on the synology, possibly its swappiness configuration, and the lack of a write-cache SSD. I'm hoping to install a write-cache SSD in the coming weeks, as other applications on this NAS would benefit from it too.
I can provide logs, but I need to know what you are looking for, because I have about 20 GB of DEBUG logs from rclone on the synology. I don't see anything in them indicative of problems — just normal rclone logs like I see on my micro-"server".
I've considered adding an M.2 drive to the Dell as a dedicated device for the VFS cache, but I'd like to investigate options that don't cost me money first. The current SSD (a Samsung 860 Pro) occupies the only SATA connector the motherboard has. USB performance on that system is absolutely abysmal with external drives: a Samsung T7 only reads/writes at ~20 MB/sec, which isn't enough for some of my higher-load situations and causes drastic bottlenecks. I assume Dell never intended high IO on those interfaces, figured only mice and keyboards would be connected over USB, and cut corners on the USB hardware.
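If anyone wants to sanity-check that USB number, a crude sequential dd write is enough to see a drive's throughput ceiling; something like the below, where the target path is illustrative and would point at the external drive's mount in a real test:

```shell
# Crude sequential-write throughput check.
# TESTFILE path is illustrative — point it at the mount of the
# drive under test (e.g. the T7) to measure that drive.
TESTFILE=/tmp/t7-bench.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=100 conv=fdatasync
rm -f "$TESTFILE"
```

conv=fdatasync makes dd flush before reporting, so the MB/s figure reflects the drive rather than the page cache.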
Neither of the posts you linked is related to my problem beyond the fact that they involve a synology NAS. The first one involves SMB, which I'm not using, and the second one seems to be an outright failure to copy to the remote, which I'm not having.