Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

They would not be, as they are different disks. You can only hard link from the same disk to the same disk; you can't hard link across disks.

felix@gemini:/gmedia$ cp /etc/hosts .
felix@gemini:/gmedia$ ln hosts blah
felix@gemini:/gmedia$ stat hosts
  File: hosts
  Size: 413       	Blocks: 8          IO Block: 4096   regular file
Device: 33h/51d	Inode: 11593293955630620861  Links: 2
Access: (0644/-rw-r--r--)  Uid: ( 1000/   felix)   Gid: ( 1000/   felix)
Access: 2020-07-23 07:31:23.520297057 -0400
Modify: 2020-07-23 07:31:23.520297057 -0400
Change: 2020-07-23 07:31:25.416311453 -0400
 Birth: -
felix@gemini:/gmedia$ stat blah
  File: blah
  Size: 413       	Blocks: 8          IO Block: 4096   regular file
Device: 33h/51d	Inode: 11593293955630620861  Links: 2
Access: (0644/-rw-r--r--)  Uid: ( 1000/   felix)   Gid: ( 1000/   felix)
Access: 2020-07-23 07:31:23.520297057 -0400
Modify: 2020-07-23 07:31:23.520297057 -0400
Change: 2020-07-23 07:31:25.416311453 -0400
 Birth: -

As an example, you can do something like that to validate that hard linking is working and whether that's the issue or not.
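
If you want a quicker sanity check first, comparing device numbers tells you whether two paths are even on the same filesystem; I'm using my /local and /GD paths as the example here:

# different device numbers means a direct ln between them fails with
# "Invalid cross-device link"
stat -c '%d %n' /local /GD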

How can I have them on the same disk? I've used the same setup as you.

local = /data/local
mergerfs = /gmedia/

Perhaps there's some confusion about the workflow.

In my setup, everything gets written to /local based on the mergerfs create policy. When a hard link happens, it happens on the same disk, which is /local.

The only time I write to rclone is when I upload overnight.
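
That overnight upload is just a scheduled rclone move from the local disk to the remote. A minimal sketch of the idea (the remote name, paths, and flags here are placeholders, not my exact script):

#!/bin/bash
# move anything that has been sitting on the local disk for a day up to the cloud
# (you would likely also exclude anything still seeding)
rclone move /local gdrive: \
  --min-age 1d \
  --log-level INFO \
  --log-file /var/log/rclone-upload.log \
  --delete-empty-src-dirs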

mounted  Movies  NZB  seed  TV
felix@gemini:/gmedia$ cp /etc/hosts .
felix@gemini:/gmedia$ ln hosts blah
felix@gemini:/gmedia$ ls /local
blah  hosts  Movies  NZB  seed  TV
felix@gemini:/gmedia$

So you can't make a hard link from /local to /GD, as that crosses disks, just like I can't link between two physical disks.

felix@gemini:~$ ln /local/hosts test
ln: failed to create hard link 'test/hosts' => '/local/hosts': Invalid cross-device link

So when anything unzips, uncompresses, or moves, it all happens on /local underneath based on my mergerfs policy, which then allows me to hard link.
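
If you want to double check that something written through /gmedia really landed on /local, look at it from the /local side; the hosts/blah pair from the example above works for this, and if getfattr is available (and your mergerfs has its xattrs enabled) you can ask mergerfs directly which branch is backing a file:

# both names live on the local disk with the same inode and a link count of 2
stat -c '%i links=%h %n' /local/hosts /local/blah
# optional: ask mergerfs which branch path backs a file seen through the mount
getfattr -n user.mergerfs.fullpath /gmedia/hosts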

I think I'm doing the same thing. I have pointed everything to /gmedia (the mergerfs mount), and underneath, it is being written locally to /data/local. However, it is not hard linking (the inodes are different).

I'm just wondering how you are able to hardlink between:

/gmedia (mergerfs mount) and /data/local if these are different file systems?

You can't. That's the whole reason I use mergerfs: I link from /gmedia to /gmedia.

As I showed above, just do a simple test.

felix@gemini:~$ cd /gmedia/
felix@gemini:/gmedia$ cp /etc/hosts .
felix@gemini:/gmedia$ ln hosts test
felix@gemini:/gmedia$ stat hosts
  File: hosts
  Size: 413       	Blocks: 8          IO Block: 4096   regular file
Device: 33h/51d	Inode: 11593293955630620861  Links: 3
Access: (0644/-rw-r--r--)  Uid: ( 1000/   felix)   Gid: ( 1000/   felix)
Access: 2020-07-23 09:41:37.188261745 -0400
Modify: 2020-07-23 10:23:41.459891473 -0400
Change: 2020-07-23 10:23:44.619916692 -0400
 Birth: -

If that doesn't work, you have an issue somewhere else. If that works, hard linking works.
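
If it doesn't work, the first thing I'd check is the create policy the mount is actually running with, since that decides which branch new files land on. mergerfs exposes its runtime settings through its .mergerfs control file; this assumes getfattr is installed and the control file xattrs are enabled on your build:

# should print the create policy, e.g. ff or epmfs
getfattr -n user.mergerfs.category.create /gmedia/.mergerfs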

I will take a look and see what I'm doing wrong. Thanks again for your responses. Truly appreciated!

I have a seedbox that I use to download via usenet, and it also serves as my Plex server. I've been using your settings as a guide for my own setup. My movie/tv library is stored on my GSuite Google Drive, and your settings introduced me to the idea of using mergerfs to combine my Gdrive contents and my local seedbox storage. Recently, I've run into some performance issues that I'm hoping you (or anyone else) may be able to help me diagnose.

My seedbox slot is on an SSD, but I'm not seeing the kind of performance I would expect from an SSD. I specifically notice it when my NZB program is unpacking a large download. I had been using NZBGet, and it seemed like it was taking a really long time to unpack. I tried switching to SABnzbd, and I haven't seen an improvement. SABnzbd includes a performance test feature, and my temporary and completed download folders are both reporting speeds between 55 and 90 MB/s. From my understanding, an SSD should be faster than this.

During my attempts to diagnose the issue, it was suggested that the slower speeds were due to writing directly to a FUSE mount. It's true that both my temporary and completed download folders are in a subfolder of the folder created with mergerfs. From my understanding, though, the way your settings are configured should ensure that any writes to that folder happen on the local filesystem (and not to the remote drive). Assuming I'm understanding that correctly (and used the correct settings), would the fact that I'm downloading to my merged folder still potentially be the cause of the slow speeds? Is there anything else I'm missing that could be causing this issue? Would I be better off setting the temporary directory to a folder outside of my merged folder? The reason I put everything under the merged folder was to be able to take advantage of the hardlinking option in Radarr and Sonarr. Thanks in advance to anyone who can help me figure this out. Below are the exact commands I'm using to mount my drive via rclone, along with my mergerfs command.

#!/bin/bash

#cache
screen -dmS gdrive rclone mount \
-vv \
--buffer-size 256M \
--dir-cache-time 1000h \
--poll-interval 15s \
--timeout 1h \
gdrive: /home/seannymurrs/gdrive \
&&

#mergerfs
mergerfs \
  -o rw,async_read=false,use_ino,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true \
  /home/seannymurrs/files:/home/seannymurrs/gdrive \
  /home/seannymurrs/gmedia

Writing to a file on my mergerfs mount on a slow disk seems to produce pretty good speeds:

root@gemini:/gmedia# dd count=5k bs=1M if=/dev/zero of=/gmedia/test
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 6.84505 s, 784 MB/s

That's writing a 5GB file to the disk. I'd test and see what you get.

I've never noticed any performance hits with unrarring stuff, but I never watch for it either. Those speeds seem very slow for an SSD, imo.
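
If you want to narrow down whether the mount itself is the bottleneck, run the same test against each layer and compare; the paths below are taken from your script (adjust if yours differ, and delete the test files afterwards):

# through the mergerfs mount
dd count=5k bs=1M if=/dev/zero of=/home/seannymurrs/gmedia/test1
# straight to the local branch, bypassing FUSE entirely
dd count=5k bs=1M if=/dev/zero of=/home/seannymurrs/files/test2
# somewhere outside the pool, e.g. your home directory
dd count=5k bs=1M if=/dev/zero of=/home/seannymurrs/test3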

Is that the exact command I can use to run the test? Sorry for my ignorance, I’m kind of learning all of this as I go.

Figured out how to run the test. Below were my results. Assuming I did it correctly, the speeds reported are much higher than what SABnzbd is reporting.

seannymurrs@shuttle ~ $ dd count=5k bs=1M if=/dev/zero of=~/gmedia/test
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 7.61625 s, 705 MB/s

Did your settings change with the new 1.53 version?

I am just asking because there were quite big changes coming with this release.

Yep, just letting things bake a little longer before I update as I'm using the new vfs-cache-mode full.
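
For anyone wondering what that involves, it's just a few extra flags on the mount command; the remote, mount point, sizes, and ages below are example values rather than my final settings:

rclone mount \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 336h \
  --vfs-cache-poll-interval 5m \
  gdrive: /GD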

So your GitHub script isn't updated yet?

Again, I haven't updated anything yet as I'm testing things out.

Hi @Animosity022, I was wondering if you could clarify something for me.

I was looking at your rclone.service script and did not understand what to type under

# Please set this to your own value below
--user-agent randomappname101 \

Where would I find my user-agent value to substitute here?

Thank you in advance and thank you for all your hard work. I hope to hear back from you.

You can set it to anything you want; it's just a name that gets passed along, and it doesn't matter what it is.

I saw that you updated your settings, how are these latest changes working for you?

Is there increased performance/faster load times since you adjusted your settings for 1.53?

Things work better for sure as a lot is on the cache now.

Load times for me were 1-2 seconds at most so it’s a bit hard to quantify as it probably changed so little.

Maybe off topic (if so, let me know and I'll create a new thread).

What exactly is in the cache? I mean, how can I make sure my cache is filled with the stuff that's going to be watched soon?

Is it possible, for example, to have everything that is "On Deck" in Plex always cached?

I personally don’t do anything like that. I feel you could do that with some PlexAPI scripting and a few commands to grab the files.

I have a fast link, and with the new cache and the tuning you can do, I don't think the need is the same.
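
If someone wants to experiment, a rough sketch of the idea would be to ask Plex for the On Deck list and read those files through the mount so the VFS cache pulls them in. The token, server address, and the assumption that Plex's file paths line up with your mount are all things you'd need to adapt:

#!/bin/bash
# placeholders: your Plex token and server address
PLEX_TOKEN=yourtoken
PLEX_URL=http://localhost:32400

# pull the file paths for everything currently On Deck and read each one,
# which pulls it into the rclone VFS cache
curl -s "$PLEX_URL/library/onDeck?X-Plex-Token=$PLEX_TOKEN" \
  | grep -o 'file="[^"]*"' \
  | sed -e 's/^file="//' -e 's/"$//' \
  | while read -r f; do
      cat "$f" > /dev/null
    done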