Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Why was this happening? Don’t you just hardlink the file?

Does anyone have NFO file generation enabled on Sonarr/Radarr? Any issues observed with that?

Just trying to see if there are any possible issues with this setup before trying it out.

Sorry if I didn’t explain that well.

When I just use an rclone vfs/cache mount, Sonarr/Radarr have to fully copy files since you can’t use hard links. When that happens, Sonarr/Radarr start a copy with .partial on the end of the name and then move it into place. That would occasionally cause problems in my setup.

I wanted the ability to use hard links instead of all that extra IO and copy time, so I use mergerfs, which gives me that ability with a combined local/rclone mount.

I don’t use NFOs, but I can’t imagine that they would be a problem. I use subtitles (which are small files and very similar) and I have no issues with them.

I keep a couple key metrics.

  • 0 bans over the course of using my setup
  • 0 wife incidents with watching shows on TV not working

So for me, that’s 100% uptime.

re: subtitles, do you use Bazarr?

I don’t, as I was just using Sub-Zero in Plex. That being said, I’ll check that out as well.

Some questions regarding your scripts, if you don’t mind:

  1. What is the /data/mounts/local mountpoint that is present in the upload script? No other reference to it is present anywhere. Should it be /data/local instead?
  2. Does /data/local have the torrents download directory too? If not, what is the directory for that and is it included in the mergerfs mount?

Yeah, probably better to start using Bazarr instead. The latest announcement from the Sub-Zero dev is that it will be merged into Bazarr as of the next release, since Plex has dropped plugin support.

That’s tough, as Bazarr doesn’t do forced subtitles, so that kind of stinks.

I did update my scripts, but unfortunately I didn’t push the commit properly to my GitHub.

I only use /data/local and /GD currently which is fixed now on my actual scripts.

I combine all of that into /gmedia, in which /data/local/torrents exists but is never copied up to my GD. By having my torrents and my media items all in the same mount, mergerfs handles hard links and it all works fantastically.
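If it helps, here is roughly what that looks like (a sketch; the options and file names are illustrative rather than copied from my scripts):

# Merge the local disk and the rclone mount into one tree. With
# category.create=ff, new files always land on the first branch
# (/data/local) and never directly on the rclone mount.
mergerfs /data/local:/GD /gmedia -o rw,use_ino,allow_other,category.create=ff

# Since /gmedia/torrents and the media folders both resolve to
# /data/local, an import is just a hard link rather than a copy:
ln /gmedia/torrents/show.mkv "/gmedia/TV/Show/Season 1/show.mkv"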

First off, let me thank you for this excellent walkthrough and for sharing your config. I took what I learned from other rclone tutorials and compared it with what you have done with the cache mount. Clear examples of a cache mount are hard to come by since everyone’s setup is different.

One question I do have, and where I am a bit confused: you sound like you’re using gmedia.service as your systemd service to kick off everything in order. However, in the git repo it is calling a dummy script.

Where are the rclone mounts happening? Is that in the gmedia.mount file? This may be an ignorant question, but I have struggled with systemd syntax and how it can be used for executing multiple services at startup/start/stop.

So gmedia.service is my overall service. I use it so I can stop/start the group; that’s how you can leverage a group of services in systemd, which is why it is the dummy command.

It goes in order:

  • gmedia-rclone.service - runs the rclone mount
  • gmedia.mount - my mergerfs mount, which combines my local disk and the rclone mount above
  • gmedia-find.service - just a find that gives me a quick and dirty file count and ‘primes’ the cache. Not really needed, but I like having the counts in a log

In each file you can see the “After” and “PartOf” directives, which group and order the units in systemd.
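Stripped down, the pattern looks something like this (a sketch, not my files verbatim):

# /etc/systemd/system/gmedia.service - the umbrella unit; the command is
# a no-op, it only exists so the group can be started/stopped as one
[Unit]
Description=gmedia

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/gmedia-rclone.service - one member of the group
[Unit]
Description=gmedia rclone mount
PartOf=gmedia.service
After=gmedia.service

[Service]
ExecStart=/usr/bin/rclone mount ...

[Install]
WantedBy=gmedia.service

# gmedia.mount and gmedia-find.service follow the same pattern, each
# with After= pointing at the unit it needs to start behind.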

Does that make sense or did you have any other specific question in the systemd files?

Thank you, I see now… It is the PartOf, After and WantedBy. For this to work, do all of these have to be added to multi-user.target.wants via the enable command?

The way I have it set up, only gmedia.service is in the multi-user wants.

The others are enabled but part of the group gmedia.service:

felix@gemini:/etc/systemd/system/gmedia.service.wants$ ls -al
total 8
drwxr-xr-x  2 root root 4096 Oct 17 15:01 .
drwxr-xr-x 11 root root 4096 Oct 17 15:31 ..
lrwxrwxrwx  1 root root   39 Aug 25 16:48 gmedia-find.service -> /etc/systemd/system/gmedia-find.service
lrwxrwxrwx  1 root root   32 Aug 30 09:09 gmedia.mount -> /etc/systemd/system/gmedia.mount
lrwxrwxrwx  1 root root   41 Aug 25 16:48 gmedia-rclone.service -> /etc/systemd/system/gmedia-rclone.service
felix@gemini:/etc/systemd/system/gmedia.service.wants$
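If you’re recreating that layout, the symlinks come from enabling each unit per its WantedBy= (commands shown for illustration):

sudo systemctl daemon-reload                 # pick up new/edited unit files
sudo systemctl enable gmedia.service         # links it into multi-user.target.wants
sudo systemctl enable gmedia-rclone.service gmedia.mount gmedia-find.service
                                             # links them into gmedia.service.wants
sudo systemctl start gmedia.service          # starts the whole group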

So it looks like I have it set up correctly. I must be missing something simple, because when I start gmedia.service I see nothing happening. I will look at it some more tomorrow.

root@host:/etc/systemd/system# ls -l multi-user.target.wants/gmedia*
lrwxrwxrwx 1 root root 34 Oct 17 02:58 multi-user.target.wants/gmedia.service -> /etc/systemd/system/gmedia.service

root@host:/etc/systemd/system# ls -l gmedia.service.wants
total 0
lrwxrwxrwx 1 root root 39 Oct 18 03:29 gmedia-find.service -> /etc/systemd/system/gmedia-find.service
lrwxrwxrwx 1 root root 41 Oct 18 03:29 gmedia-rclone.service -> /etc/systemd/system/gmedia-rclone.service
lrwxrwxrwx 1 root root 32 Oct 17 03:06 gmedia.mount -> /etc/systemd/system/gmedia.mount

You should see something like:

felix@gemini:~$ sudo systemctl status gmedia.service
● gmedia.service - gmedia
   Loaded: loaded (/etc/systemd/system/gmedia.service; enabled; vendor preset: enabled)
   Active: active (exited) since Mon 2018-10-15 10:52:49 EDT; 2 days ago
 Main PID: 984 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/gmedia.service

Oct 15 10:52:49 gemini systemd[1]: Starting gmedia...
Oct 15 10:52:49 gemini systemd[1]: Started gmedia.

I’m like 99% sure I followed this for the setup of a grouped service:

http://alesnosek.com/blog/2016/12/04/controlling-a-multi-service-application-with-systemd/

This helped out! The piece I was missing was systemctl status gmedia*
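For anyone else hunting the same thing (the glob may need quoting depending on your shell):

sudo systemctl status 'gmedia*'               # status of every unit in the group
sudo journalctl -u gmedia-rclone.service -f   # follow one member's log while debugging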

I have two errors coming from two different scripts. It was well past midnight, so I put it on hold until this weekend.

Thanks!

I think I am missing something with vfs. Right now I am just mounting an rclone crypt from gsuite.

The mount works. I can stream files with start times that vary from 5-10s. I figure this will improve once the directory cache is filled. But here is where I am stuck: VFS seems to have to go out and query Google every time I do a directory listing.

Is there no local cache of that directory structure stored on disk?

I am thinking of something like Plexdrive, which keeps a local db so that directory listings go through it instead of querying the cloud drive every time. Plexdrive is chewing up lots of CPU on my server, so I would like to switch to vfs, but I need to get past this slow directory listing first. Hopefully I am just missing a switch, or maybe some concept of how vfs works.

Also, if I unmount vfs, does it have to rebuild the entire directory from scratch?

That does not seem efficient.

Edit: Bonus question. Does VFS stand for virtual file system?

Here is my mount. TIA for any help.

/usr/bin/rclone mount gscrypt:media /mnt/vfs \
  --allow-other \
  --cache-dir /mnt/cache \
  --vfs-cache-mode writes \
  --vfs-read-chunk-size 16M \
  --vfs-read-chunk-size-limit 256M \
  --buffer-size 256M \
  --dir-cache-time 72h

I use:

--dir-cache-time 72h - keeps the file and directory structure in memory for 72 hours, unless something updates it via polling and causes it to ask Google again for a particular directory

--fast-list - speeds this up

You may get one slow listing, but it should be quick after it is in memory. Without using cache, it doesn’t keep anything persistent and once it unmounts, you have to get a fresh listing.

An example would be a fresh listing vs a cached one:

felix@gemini:/gmedia$ time find . | wc -l
28488

real	5m12.826s
user	0m0.032s
sys	0m0.064s
felix@gemini:/gmedia$ time find . | wc -l
28488

real	0m0.164s
user	0m0.008s
sys	0m0.016s

Is it possible to get a guide for noobs?

Thanks


I’ve written a lot on my GitHub to try to document it, but I’m not sure it’s really a step-by-step guide unfortunately. If I get some more time, I may try to add more.


A question about the move script.

It seems like the main benefit (correct me if I’m wrong) is that you avoid any transfer artefacts (like .partial). Would you be able to use the filter flag on the mount to avoid reading unwanted files and allow the mount to be used directly (without the file system merger)?

My current setup is like this:

Sonarr/Radarr -> download folder (files kept for seeding)
Download folder -> copy to cache/encrypted mount
Cache mount -> eventually uploads to gdrive

Practically speaking, though, I only need to manage one file transfer: download folder to my mount.

Unfortunately, with this setup, I have a 30-60s play start time on Plex, and I would like to try out a direct mount to see if I can improve that. So I’m looking to see if there are parts of the setup I can test without committing to all the moving parts. It seems like if I can use the mount directly, I can cut out some of the pieces (mergerfs and the script), but I’m wondering if I’m being naive as to the benefits.

I use mergerfs so I can use hard links instead of having Sonarr/Radarr copy, which creates double the space and IO when files get completed.

I don’t think you can use a filter with a mount as it just makes everything available.

Even with the cache setup, it really should not be 30-60 seconds, as that seems a bit off. What’s the mount command you are using?

I really like to remove as many pieces as possible normally, but the hard links, and the fact that I haven’t had to touch my setup for months now, make me think the added mergerfs is worth the effort.

What’s your client that is playing? There are a few bugs out there with Direct Play that might be causing you a problem too.

I use mergerfs so I can use hard links instead of having Sonarr/Radarr copy, which creates double the space and IO when files get completed.

Is that to avoid slowness, or is there another motivation behind avoiding IO?

I don’t think you can use a filter with a mount as it just makes everything available.

There seems to be something that allows filters on a mount. The documentation says:

Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
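So in principle something like this could hide the transfer artefacts without mergerfs (illustrative only; the remote name and paths are placeholders, and I haven’t tested it):

rclone mount remote:media /mnt/media --allow-other --exclude "*.partial"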

Even with the cache setup, it really should not be 30-60 seconds, as that seems a bit off. What’s the mount command you are using?

I’ve started another thread so as not to derail this one: “30s - 60s play start time using a cache mount and plex”. It has all the details of my setup.