Help with cache mount for when Plex is separate from sonarr/radarr/sabnzbd?


#1

Hi folks,

I’ve been running Plex/Sonarr/Radarr/Sabnzbd with a cache mount on the same machine for a while now and things have been running quite smoothly. Recently I decided to get a better server and use it exclusively for Plex. I moved Plex over from the source to the destination server, preserved the libraries, and everything runs smoothly for content that’s already there. It’s not, however, running so well when new content is added to or removed from the mount by the processing server.

A couple of examples:

  1. When a movie is added by Radarr, the remote Plex integration doesn’t work. When the processing machine uploads the new content and Radarr sees it and triggers a Plex scan, the Plex machine still can’t see that content, so it isn’t automatically added to the library.
  2. Radarr finds a better-quality version of a movie, so it deletes the existing file, downloads the new one, and uploads it, but Plex continues to show the old version even though it’s already gone, so the content becomes unplayable.

What’s the recommended approach to using a cache mount under these conditions? Is it just not possible? Will I need to remove the cache layer and mount directly? I prefer the cache mount to VFS caching, as it seems to work better with Sonarr/Radarr (i.e. they don’t become unresponsive while processing things and are very reliable).
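For comparison, a VFS-cache mount with a short directory cache time and polling is one way to make a second server notice remote changes faster. A hedged sketch, assuming a remote named `media:` and a mount point of `/mnt/media` (both placeholders for your setup):

```shell
# Assumed remote "media:" and mount point /mnt/media -- adjust to your setup.
# --vfs-cache-mode writes: buffer file writes locally before uploading.
# --dir-cache-time: how long directory listings are cached before re-reading.
# --poll-interval: how often to check the remote for changes
#   (only effective on backends that support change polling).
rclone mount media: /mnt/media \
  --vfs-cache-mode writes \
  --dir-cache-time 1m \
  --poll-interval 15s \
  --allow-other
```

With a short `--dir-cache-time`, the Plex-side mount re-reads listings often enough that new uploads show up within about a minute.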

Thanks!


#2

I believe I’ve been able to fix this. What I did was disable the write cache time on the “Processing” server so things are uploaded immediately. Relatively quickly I see cache expiry messages on the Plex server and Plex picks things up. Awesome!

EDIT: Actually, not quite. Turning write caching off causes Sonarr/Radarr to hang, which leads to import failures, so things start getting left behind unless I import them manually. Hmm…
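If the "write cache time" here is the cache backend's offline-upload wait time, a middle ground may be keeping offline uploads on but shortening the wait rather than disabling it. A hedged sketch (remote name, mount point, and the 1m value are assumptions, not a tested recommendation):

```shell
# Hedged sketch -- assumes the cache backend's offline upload feature.
# --cache-tmp-upload-path: local staging area for writes before upload.
# --cache-tmp-wait-time: how long a file sits in staging before uploading.
#   A very low value uploads almost immediately (Plex sees changes fast,
#   but Sonarr/Radarr may hang during imports); a short-but-nonzero value
#   is a possible compromise.
rclone mount media-cache: /mnt/media \
  --cache-tmp-upload-path /tmp/rclone-upload \
  --cache-tmp-wait-time 1m \
  --allow-other
```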


#3

I’m not quite following your workflow.

Is Plex on a separate server?

You have a few options for triggering scans or upgrading content. I keep the naming convention the same so if a file is deleted, the same file name replaces it.

You can also use something like:

to trigger scans if things aren’t remote, and use its empty-trash feature to make sure old content gets deleted when it isn’t named the same.
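If you end up scripting this yourself, Plex’s own HTTP API can do both the scan and the trash emptying. A hedged sketch, where `PLEX_HOST`, `SECTION_ID`, and `PLEX_TOKEN` are placeholders for your server, library section, and auth token:

```shell
# Placeholders: PLEX_HOST, SECTION_ID, PLEX_TOKEN depend on your setup.
# Trigger a scan of one library section:
curl "http://$PLEX_HOST:32400/library/sections/$SECTION_ID/refresh?X-Plex-Token=$PLEX_TOKEN"

# Empty that section's trash so deleted files stop showing as unplayable:
curl -X PUT "http://$PLEX_HOST:32400/library/sections/$SECTION_ID/emptyTrash?X-Plex-Token=$PLEX_TOKEN"
```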

Writing directly to the mount is problematic and tends to cause issues, so I use mergerfs (unionfs works too) to write locally to disk first and do everything there, since I can take advantage of hard links with mergerfs and things move smoothly. I just upload overnight.
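The local-first layout described above can be sketched roughly like this; all paths and the 6-hour age cutoff are assumptions for illustration:

```shell
# Hedged sketch of a mergerfs-first layout; paths are placeholders.
# /mnt/local  -- local disk where Sonarr/Radarr import (hard links work here)
# /mnt/remote -- the rclone mount of cloud storage
# /mnt/media  -- merged view that Plex and the apps actually point at
#
# category.create=ff ("first found") makes new files land on the first
# branch listed, i.e. local disk, so imports never touch the cloud directly.
mergerfs /mnt/local:/mnt/remote /mnt/media \
  -o use_ino,category.create=ff,dropcacheonclose=true

# Overnight job: move anything that has settled on local disk to the remote.
rclone move /mnt/local media: --min-age 6h
```

Because the merged view shows the file regardless of which branch it lives on, Plex never notices when the nightly move shifts it from local disk to the remote.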