VFS Cache on NFS mount?

I'm floating the idea of pointing my rclone mount command at an NFS mount for the location of the VFS cache. I'd like to know if anyone has done this before or has any advice on it.

Background: I have a small "server" that runs my media server and "acquisition" services. It's one of those Dell microtowers - it works great for its intended purposes - super fast, stable, and I've been running it like this since April. The problem is it only has a 1 TB SSD, so it does bottom out on storage if I have a heavy day of media acquisitions. That bottoming out occurs because the VFS cache fills once I hit the gdrive upload quota, which halts the media acquisition process. I have plenty of failsafes to prevent it from filling up its own disk.

My goal here is to give myself more buffer for heavy download days.. so it can download a lot of content in a short period of time, push it into the VFS cache, and upload at the quota limit each day until the cache is emptied. This would make the downloaded content available sooner on such heavy days, instead of being capped at the ~750 GB/day upload limit.

I have a Synology DS820+ elsewhere on my network with plenty of room to spare and gigabit ethernet between it and the media server. I'm considering setting up an NFS export on it for this microtower server to use as its VFS cache location.. I'm just wary about pitfalls, config considerations I might be overlooking, etc. My biggest worry right now is the various nuances of adding an NFS export/mount into the flow - really just the additional hurdles to getting everything running smoothly (again).
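
For concreteness, here's the rough shape of what I have in mind. The hostname, export path, and NFS options below are placeholders, not a tested config:

# on the microtower: mount the synology export (host/paths hypothetical)
sudo mkdir -p /mnt/vfs-nfs
sudo mount -t nfs -o rw,hard,nfsvers=4 synology.lan:/volume1/vfs /mnt/vfs-nfs

# then point the existing rclone mount's cache at the NFS-backed path
/bin/rclone mount gcrypt: /mnt/gdrive \
--vfs-cache-mode full \
--cache-dir /mnt/vfs-nfs/cache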

I did set up rclone mount directly on the DS820+ but it didn't perform well at that task. It might be feasible with some tweaking/upgrades. Bear in mind the DS820+ I have only has 2 GB of RAM. I have the SSD expansion in it for read caching and I'm hoping to add a second SSD for write caching the 4x HDDs in it.

I'd appreciate any helpful advice that could be given on this matter. I'll probably give it a shot next week as I honestly don't have anything to realistically lose (besides some downtime) by testing it out.

hi,

in what way did it not perform well?

can you post

  • redacted config file
  • exact command
  • debug log

just curious, for that dell, no free sata slots to add a $40.00 hard drive or use a $20.00 usb2sata adapter?

do these help? -
https://forum.rclone.org/t/intralan-sync-from-synology-nas/25491

https://forum.rclone.org/t/cant-copy-files-over-mounted-rclone-within-synology-file-station/25498

FWIW, I honestly believe investigating the performance issues with rclone running on the synology is a rabbit hole for now. I'd prefer not to get tunnel vision on it and derail the topic, which is about storing the VFS cache on an NFS mount. But here is the information you requested:

The biggest issue was inconsistent speeds experienced by NFS clients accessing the VFS mount re-exported via NFS, varying wildly from a few KB/sec to 50+ MB/sec. Those same clients have zero issues accessing NFS exports backed by non-VFS/non-rclone storage. The synology would also hit a load average of over 40 when uploading files via rclone mount or via rclone copy/move. Meanwhile, a cron job on another machine runs an rclone copy from an NFS export on the synology with zero issues.

It honestly seems like the synology is too lean in its current hardware configuration to run rclone on it directly.

the rclone.conf is simple:

[gdrive]
type = drive
client_id = 
client_secret = 
scope = drive
token =
team_drive =

[gcrypt]
type = crypt
remote = gdrive:/gdrive
password = 
password2 = 

Here are several of the mount commands I tried. I experienced the same performance issues with all of them. Bear in mind I tried many more combinations than just these: small buffer with large read-ahead/chunk size, large buffer with small read-ahead/chunk size, small buffer with small chunk size and large read-ahead, and so on. The largest setting I tried for any of those options was 256M (which I know is excessive for the buffer on a system with 2 GB of RAM).

/bin/rclone mount gcrypt: /mnt/gdrive \
--umask=002 \
--gid=65536 \
--uid=1028 \
--dir-cache-time 1000h \
--poll-interval 15s \
--cache-dir /volume1/vfs \
--buffer-size 10M \
--vfs-cache-mode full \
--vfs-read-ahead 100M \
--vfs-read-chunk-size 100M \
--transfers 4 \
--log-level DEBUG \
--log-file=/volume1/vfs/rclone.log
/bin/rclone mount gcrypt: /mnt/gdrive \
--umask=002 \
--gid=65536 \
--uid=1028 \
--dir-cache-time 1000h \
--poll-interval 15s \
--cache-dir /volume1/vfs \
--buffer-size 10M \
--vfs-cache-mode full \
--vfs-read-ahead 10M \
--vfs-read-chunk-size 10M \
--transfers 4 \
--log-level DEBUG \
--log-file=/volume1/vfs/rclone.log
/bin/rclone mount gcrypt: /mnt/gdrive \
--umask=002 \
--gid=65536 \
--uid=1028 \
--dir-cache-time 1000h \
--poll-interval 15s \
--cache-dir /volume1/vfs \
--vfs-cache-mode full \
--transfers 4 \
--log-level DEBUG \
--log-file=/volume1/vfs/rclone.log
/bin/rclone mount gcrypt: /mnt/gdrive \
--umask=002 \
--gid=65536 \
--uid=1028 \
--dir-cache-time 1000h \
--poll-interval 15s \
--cache-dir /volume1/vfs \
--buffer-size 50M \
--vfs-cache-mode full \
--vfs-read-ahead 50M \
--vfs-read-chunk-size 50M \
--transfers 4 \
--log-level DEBUG \
--log-file=/volume1/vfs/rclone.log

Here are two of the rclone move/copy jobs that experienced issues when run from the synology but ran fine from another system with the paths changed. FWIW, I adjusted the drive chunk size up and down and saw no change:

/bin/rclone copy /volume1/VMShare/dump/ gcrypt:/VMBackup --ignore-existing -P --drive-chunk-size 25M --tpslimit 1
/bin/rclone move /volume1/mnt/gdrive/ gcrypt:/ --log-file /volume1/vfs/rclone-move.log -v --fast-list --drive-stop-on-upload-limit --min-age 3d --tpslimit 1 --delete-empty-src-dirs

There are quite a few more, but they all resulted in the same performance characteristics. I suspect the common denominators here are the 2 GB of RAM on the synology, possibly its swappiness configuration, and the lack of a write-cache SSD. I'm hoping to install a write-cache SSD in the coming weeks, as I have other applications that use this NAS which would benefit from it.
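
Swappiness at least is cheap to inspect and experiment with while a transfer is running. This sketch assumes sysctl is usable on DSM, and the value is illustrative, not a recommendation:

cat /proc/sys/vm/swappiness      # see what DSM is currently set to
sudo sysctl -w vm.swappiness=10  # temporarily bias toward keeping pages in RAM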

I can provide logs, but I need to know what you are looking for because I have about 20 GB of DEBUG logs from rclone on the synology. I don't see anything in them indicative of problems.. just normal rclone logs like I see on my micro"server".
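
If it helps narrow things down, this is roughly how I've been skimming them - the pattern list is just my guess at what's worth flagging:

grep -iE "error|failed|retry" /volume1/vfs/rclone.log | tail -n 100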

I've considered adding an M.2 drive to the dell as a dedicated device for the VFS cache, but I'd like to investigate options that don't cost me money first. The current SSD (Samsung 860 Pro) occupies the only SATA connection the motherboard has. USB performance on that system is absolutely abysmal when it comes to external drives - a Samsung T7 only reads/writes at ~20 MB/sec, which isn't enough for some of my higher-load situations and causes drastic bottlenecks in performance. I assume Dell never intended such high IO on those interfaces, figured only mice/keyboards would be connected over USB, and cut corners on the USB hardware.

Neither of the posts you linked is related to my problem beyond the fact that they involve a synology NAS. The first one involves SMB, which I'm not using, and the second one seems to be an outright failure to copy to the remote, which I'm not having.

for sure, that is not the issue.
i run rclone mount on a pizero with 512MB of ram.

sure, i understand. i will drop that discussion except to state:
i run rclone on a couple of synboxes and never had the issues you have experienced.

Please provide the mount/copy/move commands you are using so I can test on my end.

if you think memory is the limiting factor, that should be easy to prove and then work around.
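
something like this while a transfer is running would show it - just stock tools, nothing fancy:

free -m     # how much ram is free vs. sitting in cache
vmstat 5    # nonzero si/so columns mean the box is swapping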

when i have time, i can vpn into one of the synboxes.
tho for the commands, i always run the simplest rclone commands, using as many defaults as possible.

in the meantime, why not just try your vfs cache over nfs idea? perhaps, with default settings, it will perform acceptably.

I set up a test rclone mount on the micro"server" using an NFS mount from the synology as the VFS cache destination.. so far it has performed well under high bandwidth use.. I'm running an rsync job on the micro"server" copying a file from another NFS mount from the synology into the VFS. Low load averages on the synology, and I'm getting the speeds I would expect in such a situation. No errors observed in the rclone log file.
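
The test itself is nothing fancy - roughly the following, with placeholder paths and filename:

# copy a large file from a plain NFS mount into the rclone mount
rsync --progress /mnt/nfs-media/some-large-file.mkv /mnt/gdrive/incoming/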

great.

off topic, but you know me by now.....

that ~20 MB/sec is a super slow speed.
imho, that drive might be plugged into usb2.0, not usb3.x, or it could be a driver/config issue.
curious as to the output of lsusb and which port that external ssd is plugged into.

Bus 002 Device 003: ID 04e8:4001 Samsung Electronics Co., Ltd PSSD T7
Bus 002 Device 002: ID 174c:1153 ASMedia Technology Inc. ASM1153 SATA 3Gb/s bridge
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

if you have time, you might figure out for sure which port that usb drive uses
https://unix.stackexchange.com/questions/405991/identify-what-usb-port-is-a-device-plugged-into

anyhoo, good to know that the rclone vfs cache can point to a nfs share over a 1Gbps network and get the same speeds as rsync.

can you post the rclone mount command?

/usr/bin/rclone mount gcrypt: /mnt/gdrive \
--umask=002 \
--gid=2000 \
--uid=2000 \
--user-agent SomethingDifferent \
--dir-cache-time 1000h \
--poll-interval 15s \
--cache-dir /mnt/vfs-nfs02/cache \
--vfs-cache-mode full \
--vfs-read-ahead 256M \
--drive-chunk-size 256M \
--log-level DEBUG \
--log-file=/var/log/rclone.log

FWIW, the box it is running on has 32 GB of RAM, an i5-7500T, a 1 TB Samsung 860 Pro SATA SSD, and a single gigabit ethernet connection to my network.

What's the relevance of that to resolving the slowness issue? The computer has 6x USB ports - two on the front, four on the rear. Two of the rear ports are black and the remaining ones are blue. The two external drives attached to that server are plugged into the two blue USB ports in the rear.

lsusb -t indicates the ports are rated for up to 5000M, which is in line with the USB 3.0 standard:

/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
    |__ Port 3: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
    |__ Port 4: Dev 3, If 0, Class=Mass Storage, Driver=uas, 5000M
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/12p, 480M
    |__ Port 3: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 12M

It would not surprise me in the least if Dell cut corners on the USB controller in a way that makes it report as 3.0 while only performing at 2.0 speeds. I can probably run some tests on the drive this weekend to rule that out as the performance limiter.. although both drives exhibited the same performance characteristics through hdparm, dd testing, and simulated use-case testing. The other drive is a laptop HDD on a USB3.0-SATA adapter.
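
For reference, these are the sorts of tests I mean - the device node and mount point are placeholders, so double-check /dev/sdX before running anything:

# timed sequential read straight off the device
sudo hdparm -t /dev/sdX

# sequential write through the filesystem, bypassing the page cache
dd if=/dev/zero of=/mnt/t7/ddtest bs=1M count=2048 oflag=direct status=progress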

well, your post indicated the dell machine has usb2.0 and usb3.0 ports.
just want to be sure the usb drive was plugged into 3.0, not 2.0 by mistake.
been there, done that.

anyhoo, glad you got vfs cache over nfs share working.

have a nice weekend.
