Vfs-cache-max-size recommended value

What is the problem you are having with rclone?

I'm having what appears to be a disk I/O issue when downloads take place on my server via nzbget and are then processed and uploaded through the vfs-cache. I currently have nzbget download to a dedicated SSD, and everything else (Plex, Sonarr/Radarr, the vfs-cache, etc.) lives on a SATA mirrored software RAID-1 volume. Everything runs in a container except rclone, which runs on the native system. Moving downloads to a dedicated SSD helped somewhat, but after a download is processed by Sonarr/Radarr I get high disk I/O on the RAID-1 drives, which slows down the entire system and typically causes DB errors; Plex itself slows way down in the client menus, web pages load slowly, and the whole system is bogged down while downloads are processed.

My question is: if I move the vfs-cache to an SSD, will this help? I only have limited space on the SSD, 512GB, and my current cache is 1500G. Will a lower cache size be detrimental to my setup? The cache works great right now until files are processed by the system. Would 512GB be enough for an adequate cache? Are there any other benefits that might help with the main system's disk I/O problems? I typically have no more than 4 users on the system at a time.
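
For anyone debugging a similar slowdown, a quick first step (not rclone-specific) is to confirm which device is actually saturated while an import is running, for example with the standard sysstat and iotop tools:

   iostat -x 5    # per-device utilisation (%util) and latency (await), from the sysstat package
   iotop -oPa     # which processes are generating the I/O, accumulated per process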

Run the command 'rclone version' and share the full output of the command.

Which cloud storage system are you using? (eg Google Drive)

Google Drive. Storage is RAID-1 SATA plus a dedicated SSD for nzbget downloads. I also have a free SSD I could use for the vfs-cache.

The rclone config contents with secrets removed.

[Unit]
Description=RClone Service
AssertPathIsDirectory=xxxxx ( **obfuscated**)
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
Environment=RCLONE_CONFIG= **obfuscated**
RestartSec=5
ExecStart=/usr/bin/rclone mount gmedia:  **obfuscated**
###setting from Animosity - https://github.com/animosity22/homescripts/blob/master/systemd/rclone-drive.service
   --allow-other \
   --dir-cache-time 5000h \
   --log-file /home/ **obfuscated**/log/rclone.log \
   --log-level INFO \
   --poll-interval 10s \
   --umask 002 \
   --rc \
   --rc-addr 127.0.0.1:5574 \
   --rc-no-auth \
   --cache-dir=/home/ **obfuscated**/rclone-cache \
   --drive-pacer-min-sleep 10ms \
   --drive-pacer-burst 200 \
   --vfs-cache-mode full \
   --vfs-cache-max-size 1500G \
   --vfs-cache-max-age 5000h \
   --vfs-cache-poll-interval 5m \
   --bwlimit-file 32M
##removed 9/30/22   --vfs-read-ahead 1G \
##   --tpslimit 10 \
##   --tpslimit-burst 10 \
##   --disable-http2
ExecStop=/bin/fusermount -uz /home/ **obfuscated**/union-acd-upload
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5574 _async=true
User= **obfuscated**
Group= **obfuscated**

There isn't a recommended value as it all depends on your disk space available to use for caching.

Generally, being I/O bound is a bad thing, as disk is always going to be the slowest point.

I don't keep my Plex data on the same disks as everything else.

I use an SSD for cache.
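
Before deciding on a size, it can help to see how much of the configured 1500G cache is actually in use. A rough sketch, using the cache directory and rc port from the unit file above (<user> stands for the obfuscated home directory; vfs/stats needs a reasonably recent rclone):

   du -sh /home/<user>/rclone-cache/vfs            # on-disk size of the cached file data
   rclone rc vfs/stats --rc-addr 127.0.0.1:5574    # cache usage and upload queue, if your rclone version supports it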

@Animosity022 how large is your SSD cache? I'm wondering if I built my system wrong and should have used the SSDs for all my main Docker images and then used the SATA drives for downloads. I don't have a lot of space on the SSDs, though. The 2x SATA drives are 4TB and the 2x SSDs are 512GB. I could dedicate an entire SSD just for caching, as I'm not using it currently.

What is the negative of going from 1500GB to 512GB for Google Drive? Would it potentially cause me to exceed the API call limits for Google Drive?

With Google Drive you have 1 billion API calls per day, so you can't ever reasonably hit that.
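
So the change for a 512GB cache SSD is mostly just the size flag. Keep in mind that --vfs-cache-max-size is a soft limit, only enforced at each --vfs-cache-poll-interval and exceedable while files are open, so leave some headroom. A sketch with illustrative values (the /mnt/ssd-cache path is hypothetical):

   --cache-dir=/mnt/ssd-cache \
   --vfs-cache-mode full \
   --vfs-cache-max-size 450G \
   --vfs-cache-max-age 5000h \
   --vfs-cache-poll-interval 5m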

I run all my stuff from this setup:

1TB SSD - OS + all my docker data (Plex)
2TB SSD - Rclone cache drive
12TB HD - All my data is stored here and copied over to rclone and uploaded hourly via the mount

I use 750GB max for TV / 500GB for Movies and remaining space is temporary storage for uploading. My only problem would be if I downloaded more in an hour than I could upload but I've not seen that even come close.

Yeah, I think I may have to rebuild my system using the SSDs for boot and all my Plex stuff, and use the SATA drives for storage, cache, etc.

I also implemented ZFS on my SATA drives. I'm not sure if I'm having performance issues there, but I'm looking into it. RAM is not an issue. Are you using btrfs on your 12TB HD?
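
To rule ZFS in or out, a couple of quick checks with standard OpenZFS tooling (run while an import is in progress; pool and vdev names will be whatever your RAID-1 pool is called):

   zpool iostat -v 5                                       # per-vdev I/O every 5 seconds
   grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats    # ARC usage versus its cap, to confirm RAM really is a non-issue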

I sometimes wish there were a command to upload whatever is in the cache at a specific time, like 1am. I don't necessarily think the cache upload is causing the I/O problems, but it's not helping when, by default, it uploads 5 minutes after a file hits it.
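
There isn't a clock-scheduled upload built into the mount (the --vfs-write-back flag only controls how long rclone waits after a file is closed before uploading it), but one common workaround is to keep new media on local disk and move it to the remote on a schedule with cron instead of writing through the mount. A rough sketch, with hypothetical paths and the gmedia: remote from the unit file above:

   # /etc/cron.d/rclone-nightly-upload  (hypothetical; <user> stands for the obfuscated account)
   0 1 * * * <user> /usr/bin/rclone move /home/<user>/local-media gmedia: --min-age 15m --transfers 4 --log-file /home/<user>/log/rclone-upload.log --log-level INFO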

I'm using btrfs for all my mounts.

Plex does very well with it, as it's a lot of small files.

I thought I would reply back in case others run into this. I extended the LVM that SSD #1 was part of by adding SSD #2, and out of that LVM I allocated 600G to vfs-cache-max-size. Since last night everything has been running smoothly, with no DB lock issues on the supporting systems (Radarr/Sonarr). Essentially, all downloads are now processed on the SSD LVM and all other systems run on the SATA LVM. As for running out of space, I set NZBget to pause downloads if free disk space ever drops to 1G.
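
For anyone wanting to do the same LVM extension, a minimal sketch of the steps (device, volume group and logical volume names are hypothetical; adjust for your layout and filesystem):

   pvcreate /dev/sdc                            # initialise the second SSD as a physical volume
   vgextend vg_ssd /dev/sdc                     # add it to the existing SSD volume group
   lvextend -l +100%FREE /dev/vg_ssd/lv_cache   # grow the logical volume holding the rclone cache
   resize2fs /dev/vg_ssd/lv_cache               # grow an ext4 filesystem (use xfs_growfs for XFS)

After that, raising --vfs-cache-max-size to 600G in the unit file and restarting the mount picks up the extra space.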
