Two server setup advice with mergerfs and Plex

Hi all,
I'm quite new to the world of rclone and mergerfs and would really appreciate your thoughts/feedback on how best to approach my setup.

I have two servers, Server A will host the 'arrs to grab things which will ultimately end up on the teamdrive.
Server B will purely run Plex.
I have multiple TD's that I can use if required.

My thoughts.
I'd like to set up mergerfs on Server A, so the 'arrs will download and post-process into my /media folder (/media/tv & /media/movies etc.).
/media will be the mergerfs mount which will point to /local and to TDa:
As I understand it (from having a look at @Animosity022's scripts), providing I have /local as the first branch in the mergerfs mount, it will write there first instead of to TDa:
I can then use the scripts to "move" to TDa: after a day or two.

For Server B (Plex), I could just mount TDa: and have Plex read directly from that. But from looking around, I understand that could cause issues? Also, when Plex is scanning the files, I'm assuming it's better if they were "local"?

What would your suggestions be for getting the newly downloaded items on Server A (/local) over to Server B before Server A moves them to TDa:?

I'm trying to do it in a way that avoids double counting files.
I was thinking to run a script every 24 hrs that does an rclone copy from /local to TDc (an intermediary TD) and then does an rclone move from /local to TDa:
This way it will still be in /media (just on TDa) and so the 'arrs database will still be intact.

Then on Server B.
Run a script every 24 hrs (but 12 hrs later than Server A) to rclone move from TDc to Server B /local
So Plex can catalogue it, etc.
Then 12 hrs later (around the same time that Server A is doing an rclone copy to TDc), I do an rclone move from Server B /local to TDb - which is also set up with mergerfs under /media (which Plex would point to) - so again Plex still sees the files under /media and will then stream from there.

My only concern is this:
If more stuff gets downloaded whilst Server A is doing a copy to TDc, it would get missed on the copy, but would still get moved to TDa. So whilst it would appear in the 'arrs d/b... it would never go to TDc and so would never end up on Plex.

My solution around that is doing a weekly sync between TDa and TDb - that way, if anything did get missed, it would be picked up by the weekly sync to TDb, and although Plex would have to "download" it to scan it, it would only be the odd one or two files.

I know I've completely over-engineered this, but I'm trying to look at a solution that covers all angles and eventualities.

Is there an easier way that I'm missing?

Is it possible to maybe do an rclone copy from Server A to TDc "if the file is <2 days old" and then only do the rclone move from Server A to TDa if the files are >2 days old, or something like that?
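Something like this is what I have in mind, if rclone's age filters work the way I think (the remote names and paths are just examples):

# copy anything added in the last 2 days to the intermediary TD
rclone copy /local TDc:media --max-age 2d -v
# move anything older than 2 days to the main TD
rclone move /local TDa:media --min-age 2d -v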

All help/thoughts appreciated! :slight_smile:

I have 2 servers, this is what I do and recommend you do.

I use mergerfs on Server A with an upload script that uploads to the remote. You should have the downloaders and the Arr's on the same server (A). Only Plex needs to be on Server B (and plex_autoscan, if you're using that).

On Server B, I configured Plex to disable all automatic scanning. Since I only upload from Server A at night, I set up a cron job on Server B that runs the Plex Media Scanner at 6am to pick up all the new files.
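The cron entry itself is nothing fancy - a minimal sketch for a standard Linux install (the library section ID and paths are placeholders; you can list your section IDs with Plex Media Scanner --list):

# run as the plex user: scan library section 1 every day at 06:00
0 6 * * * LD_LIBRARY_PATH=/usr/lib/plexmediaserver /usr/lib/plexmediaserver/Plex\ Media\ Scanner --scan --refresh --section 1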

If you want to upload 24/7, then you need to use something like plex_autoscan, or just set Plex to scan the library every hour (or two).

You won't run into issues, unless somehow you can upload 10 TB within 24 hrs and Plex needs to scan it all - then you might hit the 10 TB daily download limit.

I'm trying to do it in a way that avoids double counting files.
I was thinking to run a script every 24 hrs that does an rclone copy from /local to TDc (an intermediary TD) and then does an rclone move from /local to TDa:
This way it will still be in /media (just on TDa) and so the 'arrs database will still be intact.

Then on Server B.
Run a script every 24 hrs (but 12 hrs later than Server A) to rclone move from TDc to Server B /local
So Plex can catalogue it, etc.
Then 12 hrs later (around the same time that Server A is doing an rclone copy to TDc), I do an rclone move from Server B /local to TDb - which is also set up with mergerfs under /media (which Plex would point to) - so again Plex still sees the files under /media and will then stream from there.

Don't do this, it doesn't make sense and doesn't give you any benefit.

I'm not sure what issues you were reading about, but the only ones that come to mind are:

  • You have to wait for files to upload before they appear in Plex with a two-server setup. That's a given and not an issue.
  • You can't use Arr's "Connect to Plex" functionality to have it invoke the Plex scan, since the files aren't uploaded yet and there's no way to configure a delay in Arr. People who want files available on Plex as soon as they are uploaded typically use plex_autoscan. I personally just use a cron job to kick off the scan, since I only upload at a set time anyway, and by the time the cron job starts all files should have been uploaded.

The way your mergerfs should be set up is: /local and /remote combined into /merged. You mount the merged volume into the Arr containers, and use the /local folder for your downloaders. In your setup, Plex can just use the rclone remote itself, without mergerfs.
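For reference, a typical mount for that layout looks something like this (the paths are placeholders and the options are the commonly recommended ones for an rclone-backed pool - adjust to taste). category.create=ff makes new files land on the first branch, /mnt/local:

mergerfs /mnt/local:/mnt/remote /mnt/merged -o rw,allow_other,func.getattr=newest,category.create=ff,cache.files=partial,dropcacheonclose=true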

If you are using Docker

The way you describe your mounts will not be ideal for use in Docker, because you won't be able to hardlink (or do an atomic move). With your folder hierarchy, Arr will be forced to copy the file from your downloads folder into the media folder and then delete the downloads file.
You don't want that, because it takes a lot longer to import, it wastes IO, causes IOWait, and requires 2x the space to work. Instead you need to put everything under a parent folder. Additionally, all the paths inside the Docker containers must match between containers for everything to work. The host location doesn't matter.

This is my setup:

/data/media - You only need to mount this for Plex.
/data/downloads - You want /data/downloads to be mapped to your /local/downloads folder, not the mergerfs, for the best IO performance.

For the Arr's, you mount a single /data, which should point to your mergerfs - you do not mount /data/media and /data/downloads separately! This is very important! In Docker, every volume you specify is treated as a completely different filesystem and device, even if it really isn't on the host. So if you mount /data to Arr, it can freely do a rename/move instead of being forced to copy the file.

Here is an example of a docker-compose that shows you the right way and the wrong way.
Remember downloads and media need to be under the same parent folder on your host!

nzbget:
  volumes:
    - "/mnt/local/downloads:/data/downloads"

You don't want to use /mnt/merged/downloads here if you care about the fastest IO speeds.

plex:
  volumes:
    - "/mnt/merged/media:/data/media"

In your case this could just be /mnt/remote/media, since there's no point in using mergerfs on Server B.

radarr:
  volumes:
    - "/mnt/merged:/data"

It needs both the downloads and the media folder to be part of the same bind to be able to rename/move. Arr will move files from /data/downloads to /data/media instantly, and the files will end up in /mnt/local/media on your host.

Example of doing it WRONG

sonarr:
  volumes:
    - "/mnt/merged/media/tv:/tv"
    - "/mnt/local/downloads:/downloads"

This won't work as you expect, and it's likely the source of the issues people were talking about. It's a shame that a lot of container documentation and examples use/suggest paths that prevent hardlinking inside Docker containers (looking at you, linuxserver.io!).
First, nzbget/sab/(other downloaders) will tell Arr to import from /data/downloads/[...], but that path doesn't exist here, so you would see "could not import - can't find file" errors. This is why the folders inside the containers need to be consistent between containers!
Second, because you declared separate volumes for the downloads and tv folders, Arr will be forced to copy every file, which is slow and a waste. Remember, each volume binding is treated by Docker as a separate filesystem and device, even if they aren't on the host. This is why you need to put downloads and media under the same folder on the host and just mount the parent folder!

As soon as files are uploaded from Server A to your remote, they will be visible within a minute on the same remote mounted on Server B. So there's no need for that weird idea you had.

I hope this helps!

Thank you @wavlinky for a really comprehensive response!
I could well be misremembering, but I thought I had read that if Plex is scanning files on a teamdrive, it causes API quota errors (maxes out on hits?).

So you are suggesting something like this?
Download clients to download to /download/torrents or /download/usenet (what I have at the moment).
'arrs to post-process to /media/tv or /media/films (/media being the mergerfs of /local and tda:), which would initially go to /local/tv or /local/films.

Then rclone move from /local/* to tda: as a cronjob
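e.g. something along these lines in the Server A crontab? (I'm guessing at the flags - the paths and remote name are placeholders, and the ~8.5 MB/s cap is just what keeps a full day's upload under 750 GB):

# upload to the TD at 01:00 every night; --min-age skips files still being written
0 1 * * * /usr/bin/rclone move /local tda:media --min-age 15m --bwlimit 8.5M --log-file /var/log/rclone-upload.log -v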

Server B to have autoscan disabled, but then set media scan to run in the morning (allowing enough time for Server A to upload to TDa) and then Plex will just scan all new media uploaded to TDa. Scanning over an internet link to TDa won't be an issue?

If I choose that route, what's the best way to mount the TD? Is an rclone mount OK, or should I use something like Plexdrive? I'm guessing Plexdrive wouldn't be any good if Plex is creating posters etc., as isn't Plexdrive read-only?

Thanks again for your help and advice!

Why overly complicate it with mergerfs? Can't you just have Sonarr/Radarr point to the rclone mount as the final destination for the downloads, even if it's on a different machine? Let Sonarr/Radarr handle the file moves/post-processing after whatever has grabbed the files. What am I missing as to the benefit of using unionfs or mergerfs?

As teamdrives have an upload limit of 750 GB per day, I don't know how the 'arrs would handle the copy/move if it hit that limit - whether it would fail/error etc.

At least by having it saved locally and scheduling an rclone move, I know that if it hits the limit, it will pause, wait until the quota is reset, and continue.

You mount it just like a drive. The paths you're using will not work, for the reasons I explained in my post above. You need to change your paths so downloads and media are underneath the same parent folder:
/data/media
/data/downloads

Bind Arr to /data - just one volume bind, like I have it.
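As for mounting the TD on Server B: if you go with a plain rclone mount, something like this is a reasonable starting point (flags drawn from common Plex-over-rclone setups; the remote name and mount path are placeholders):

# mount the teamdrive for Plex (run as a systemd service or in screen/tmux)
rclone mount tda: /mnt/remote --allow-other --dir-cache-time 1000h --poll-interval 15s --vfs-cache-mode writes --umask 002 --log-level INFO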

You lose flexibility: you can't schedule uploads, and you can't use service accounts to upload more. If an upload fails, it has to make another copy.

On my server, uploading eats CPU due to crypt, so I schedule it at night so it doesn't impact other services.

At this point you can do the same thing with an rclone union remote, so you don't need mergerfs anymore if you're not really pooling drives.
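For example, a union remote in rclone.conf along these lines behaves much like the mergerfs layout above (remote name and paths are placeholders; create_policy = ff makes new files land on the local branch, which you then upload on a schedule):

[union]
type = union
upstreams = /mnt/local tda:media
action_policy = all
create_policy = ff
search_policy = ff

You'd then mount union: wherever you would otherwise mount the mergerfs pool.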
