Help getting plex to only scan new content?

Hey guys

This is my setup.

  1. seedbox with torrents and newsgroups set up through sonarr / radarr.

  2. rclone to a gdrive

  3. Home ubuntu server, running plexdrive 4, mounting the drive.

  4. My drive is NOT mounted on my seedbox…

I haven't bothered setting up unionfs or anything like that. I just set my downloads to not be monitored when deleted. If a proper comes along, I manually download it.

What I’m trying to do is get plex to only scan new content.

I have just been setting up plex_autoscan, and am hoping to run it on a cron every half hour or so.
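For reference, a half-hourly crontab entry might look like this; the wrapper-script path is hypothetical, since plex_autoscan is normally run as a long-lived server rather than from cron:

```
# Hypothetical crontab entry: run a scan-trigger wrapper every 30 minutes.
# plex_autoscan itself usually runs as a daemon listening for webhooks.
*/30 * * * * /opt/plex_autoscan/trigger_scan.sh >> /var/log/plex_autoscan_cron.log 2>&1
```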

Is this possible? Or is there a better way?

I’ve been using mergerfs for a bit now and I’ve noticed something that I’m not sure of.

When I'm running an optimize, I see entries like:

Aug 29, 2018 13:35:03.428 [0x7f3927bff700] INFO - Library section 2 (TV Shows) will be updated because of a change in /gmedia/TV/MINDHUNTER/Plex Versions/1080-20M 945/MINDHUNTER

Which tells me something properly notified Plex of a change on the file system, which should come from FUSE.

How does #2 work for you? Do you kick off a script to copy? Something needs to know when #2 is done so something automated can happen.

Yes I have a script that runs on a cron to copy. There must be a way to run another script when this script completes.
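There is: chain the two steps with `&&`, so the follow-up only fires when the copy exits cleanly. A minimal sketch, where both function names are stand-ins for the real scripts:

```shell
#!/bin/sh
# Stand-ins for the real scripts: the first copies to gdrive (e.g. via
# rclone copy), the second kicks off the Plex scan / plex_autoscan request.
copy_step() { echo "copy to gdrive done"; }
scan_step() { echo "scan triggered"; }

# && runs the second command only if the first exits with status 0,
# so a failed upload never triggers a pointless scan.
copy_step && scan_step
```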

Are you not able to mount your GD? That’s really the easiest way to do it.

Otherwise, you'd have to SSH or something from your seedbox to your Plex box and kick off a scan via the command-line scanner:
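Something like this, run from the seedbox once the upload finishes; the host, section ID, and directory here are examples, and Linux installs typically need LD_LIBRARY_PATH pointed at the Plex install:

```shell
# Example only: host, section ID, and directory are placeholders.
ssh user@home-plex 'env LD_LIBRARY_PATH=/usr/lib/plexmediaserver \
  "/usr/lib/plexmediaserver/Plex Media Scanner" \
  --scan --refresh --section 2 \
  --directory "/gmedia/TV/Some Show/Season 01"'
```

Passing `--directory` keeps the scan limited to the folder that actually changed instead of the whole section.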

What would you recommend if I mounted my GD?

So I personally just use a vfs-read-chunk-size mount along with local storage, and upload every night with a mergerfs mount. I find that works best for me.
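Roughly this shape, if it helps; the remote name, paths, and chunk sizes are examples rather than a definitive config:

```shell
# Chunked-read rclone mount of the cloud remote (values are examples).
rclone mount gdrive: /mnt/gdrive \
  --allow-other \
  --vfs-read-chunk-size 64M \
  --vfs-read-chunk-size-limit 2G &

# mergerfs overlays local storage on top of the cloud mount, so new files
# are visible immediately and writes land on local disk first.
mergerfs /mnt/local:/mnt/gdrive /gmedia \
  -o defaults,allow_other,use_ino,category.create=ff
```

Plex (and Sonarr/Radarr, if local) then point at /gmedia, and the nightly upload just moves /mnt/local into the remote.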

Ahhhhh I see.

I can't do that because my downloads are on my seedbox, and my Plex server is at home.

Unless I'm misunderstanding something (that does happen from time to time), you can, on the seedbox, use unionfs/mergerfs to create a local mount and combine it with an rclone mount.

You can upload from the seedbox each night.

The home plex server would pick up files every minute based on the default polling interval.

Yes, it's true that I could create a local unionfs/mergerfs mount on the seedbox, but my Plex server won't see the files until they get uploaded to gdrive, which would be each night.

My Plex server is half a world away, and could not be part of the unionfs/mergerfs.

Why would it be any different from how you upload now? Replicate the same pattern.

Because one of the points of the unionfs is that you can see files locally and in the cloud at the same time.

How will my plex server see the files if they are sitting on my seedbox waiting to be uploaded?

My plex server can only see my cloud files. If I want it to see newer stuff, I would need to download on the same machine…


  • Download file to the unionfs/mergerfs mount
  • Run the rclone upload every hour (15 minutes / any interval)
  • File is now on your GD
  • Once the file is there, it appears one minute later.

The delay is as long as you'd want to schedule the uploads.
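A crontab sketch of the upload step, with example paths and flags:

```
# Every hour, move the local half of the merge up to the remote.
# --min-age skips files that may still be mid-write.
0 * * * * rclone move /mnt/local gdrive: --min-age 15m --transfers 4 --log-file /var/log/rclone-move.log
```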

That's basically what I'm doing now, but I don't need the unionfs/mergerfs.

The point of this thread was that once it hits my GD, Plex scans the whole thing, downloading 24/7 to look at things it already knows are there. I want to scan just new items or changes.

If you didn't have your GD mounted on your seedbox, Sonarr/Radarr would see things disappear unless you had a merged mount.

Plex doesn't scan all your content; it only scans new content. So if that was the point of your post, you have something else wrong that you haven't shared.

Plex only scans new files. If the files are there already, it just checks time stamps and moves on.

I have YouTube rips, home-made videos, adult videos, podcast rips, custom remixes, DJ mixes, etc. that have no scraper, and the file names aren't compliant because they aren't scene releases.

My GD is also sitting at almost 200TB.

Plexdrive wants to look at those EVERY TIME it scans, which turns a scan into an hour-plus ordeal while constantly downloading at 100mb/sec, putting my bandwidth through the roof and getting me warnings from my ISP. There are also a few soap operas that it stumbles over (I believe it's a naming issue, even though they have been run through Sonarr).

My solution at the moment is to run a cron to scan only the sections which are compliant, and manually scan items in the other sections when I have added to them.
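For anyone curious, that cron calls Plex's bundled scanner per section; the section IDs and schedule below are examples (the IDs come from `Plex Media Scanner --list`):

```
# Scan only the well-behaved sections every 2 hours (IDs are examples).
0 */2 * * * env LD_LIBRARY_PATH=/usr/lib/plexmediaserver "/usr/lib/plexmediaserver/Plex Media Scanner" --scan --section 1
5 */2 * * * env LD_LIBRARY_PATH=/usr/lib/plexmediaserver "/usr/lib/plexmediaserver/Plex Media Scanner" --scan --section 2
```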

My plan for the weekend is to set up an old PC in my garage running Cloudbox, but only run rclone, plexdrive, and plex_autoscan on it. I believe I should be able to update my Plex machine from the plex_autoscan instance running on my LAN.

I tried to set up plex_autoscan on my Plex machine but can't get it running, and I don't want to wreck the setup I have now, as it's almost perfect.

Run something like this and grab the last 5 or 6 lines against your plex library:

What is the output for this? Here are the last lines of mine:

21948 files in library
0 files missing analyzation info
0 media_parts marked as deleted
0 metadata_items marked as deleted
0 directories marked as deleted
21901 files missing deep analyzation info.
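Counts like these can also be pulled straight from Plex's SQLite database; a rough sketch, assuming a standard Linux install path and Plex's `media_parts`/`metadata_items` tables:

```shell
# Path varies by platform; this is the typical Linux location.
DB="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"

sqlite3 "$DB" "SELECT count(*) FROM media_parts;"                                  # files in library
sqlite3 "$DB" "SELECT count(*) FROM media_parts    WHERE deleted_at IS NOT NULL;"  # parts marked deleted
sqlite3 "$DB" "SELECT count(*) FROM metadata_items WHERE deleted_at IS NOT NULL;"  # items marked deleted
```

Stop Plex (or copy the database) before poking at it, since the server keeps it locked while running.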

I have a similar setup and I think your best solution is plex_autoscan. It just works. You set up Sonarr/Radarr with webhooks to the home server running plex_autoscan; Sonarr/Radarr will then notify your home server every time an episode or movie is imported. I have a 1-hour delay set up in plex_autoscan (3600) to give the feeder some time to upload the content. You can adjust this value to what you think works best.
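In plex_autoscan's `config.json`, that delay is the `SERVER_SCAN_DELAY` key; a fragment (the rest of the config file's keys are omitted here):

```json
{
  "SERVER_SCAN_DELAY": 3600
}
```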

PS - You should run plex_autoscan with systemd. The plex_autoscan.service file is included in the GitHub setup.
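Once the unit file is copied into place, it's the usual systemd routine (the source path below assumes a clone in /opt/plex_autoscan; adjust to wherever you checked it out):

```shell
# Path assumes the repo was cloned to /opt/plex_autoscan.
sudo cp /opt/plex_autoscan/system/plex_autoscan.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now plex_autoscan.service
```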

Yes, I second that; plex_autoscan is the way to go for a seedbox + Plex server solution. Be aware of the plex_autoscan config file, though, as it will scan the whole folder if it cannot find the file.

I mainly use plex_autoscan for the auto delete function so when I upgrade media, it can delete old stuff safely without wiping out my library :slight_smile:

It’s a solid tool, but if your library isn’t analyzed all the way, that’s usually the cause for the initial long scans.