Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Perhaps I could try an earlier version of mergerfs? Do you get times similar to mine when running seektest through your merged mount?

Yeah, my seek times look good but throughput is poop. Wonder what changed.

Kind of glad it's not just me... haha.

I'm trying to build an earlier version, but I don't have root (it's a managed server), so I'm wondering what the best version to run is. I was thinking 2.25.0 once I can build it. What do you think? Could you have a look on your end?

I'm heading out atm, but I'll give it a test once I get back home. It has to be the latest version, I would guess.


Looks like I have a solution on my end!

I jumped back to 2.25, purely because this is around the version I had working flawlessly in 2018, and it seems to work great. I tested Plex really briefly and it looks to be good again.
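For anyone wanting to reproduce this without root, a minimal sketch of building a specific mergerfs release from source (version tag, paths, and binary location are illustrative; exact details vary by version):

# Assumes build tools and the FUSE dev headers are already on the host.
git clone https://github.com/trapexit/mergerfs.git
cd mergerfs
git checkout 2.25.0    # tag name assumed; check `git tag` for the exact one
make
# The built binary can be run in place without `make install`
# (its output path varies by version):
find . -type f -name mergerfs
# Unprivileged mounting still needs a setuid fusermount on the system.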

SEEKSPEED /home/ybd/google_media/tmp/100M.file
2019/05/27 19:02:16 File Open took 422.393µs
2019/05/27 19:02:17 Reading 170511 from 3997010 took 694.001363ms 
2019/05/27 19:02:17 Reading 459963 from 55931421 took 316.127894ms 
2019/05/27 19:02:17 Reading 643462 from 59171537 took 334.793426ms 
2019/05/27 19:02:18 Reading 578220 from 39190558 took 332.991471ms 
2019/05/27 19:02:18 Reading 280484 from 94963616 took 419.532255ms 
2019/05/27 19:02:19 Reading 350895 from 21454084 took 424.422117ms 
2019/05/27 19:02:19 Reading 614385 from 34699574 took 471.564755ms 
2019/05/27 19:02:19 Reading 300314 from 87272715 took 313.611748ms 
2019/05/27 19:02:20 Reading 285845 from 47057768 took 339.933436ms 
2019/05/27 19:02:20 Reading 625634 from 52633990 took 421.766166ms 
2019/05/27 19:02:20 Reading 67802 from 85725473 took 384.619917ms 
2019/05/27 19:02:21 Reading 401042 from 9333311 took 569.235575ms 
2019/05/27 19:02:21 Reading 445739 from 31415178 took 338.373307ms 
2019/05/27 19:02:22 Reading 85240 from 18272872 took 395.48336ms 
2019/05/27 19:02:22 Reading 923639 from 37014388 took 429.146706ms 
2019/05/27 19:02:22 Reading 1011576 from 39081944 took 18.306225ms 
2019/05/27 19:02:23 Reading 686863 from 96602623 took 348.403334ms 
2019/05/27 19:02:23 Reading 756044 from 27648203 took 394.625025ms 
2019/05/27 19:02:23 Reading 82935 from 30736283 took 304.91495ms 
2019/05/27 19:02:24 Reading 543634 from 13714932 took 394.550774ms 
2019/05/27 19:02:24 Reading 180882 from 66189515 took 369.399185ms 
2019/05/27 19:02:24 Reading 402243 from 72016420 took 313.61678ms 
2019/05/27 19:02:25 Reading 670611 from 81694652 took 312.183163ms 
2019/05/27 19:02:25 Reading 735972 from 28990319 took 419.515275ms 
2019/05/27 19:02:26 Reading 1025113 from 84507910 took 579.24738ms 
2019/05/27 19:02:26 That took 9.640986736s for 25 iterations, 385.639469ms per iteration
Finished in 0m:10s
FileSize 100M
SEEKSPEED /home/ybd/gmedia/tmp/100M.file
2019/05/27 19:02:26 File Open took 448.905µs
2019/05/27 19:02:27 Reading 170511 from 3997010 took 868.674457ms 
2019/05/27 19:02:27 Reading 459963 from 55931421 took 401.306631ms 
2019/05/27 19:02:27 Reading 643462 from 59171537 took 356.882963ms 
2019/05/27 19:02:28 Reading 578220 from 39190558 took 449.731266ms 
2019/05/27 19:02:28 Reading 280484 from 94963616 took 408.758776ms 
2019/05/27 19:02:29 Reading 350895 from 21454084 took 351.441812ms 
2019/05/27 19:02:29 Reading 614385 from 34699574 took 410.280156ms 
2019/05/27 19:02:29 Reading 300314 from 87272715 took 350.592753ms 
2019/05/27 19:02:30 Reading 285845 from 47057768 took 441.283151ms 
2019/05/27 19:02:30 Reading 625634 from 52633990 took 400.434319ms 
2019/05/27 19:02:31 Reading 67802 from 85725473 took 360.210532ms 
2019/05/27 19:02:31 Reading 401042 from 9333311 took 337.728739ms 
2019/05/27 19:02:31 Reading 445739 from 31415178 took 277.756915ms 
2019/05/27 19:02:31 Reading 85240 from 18272872 took 291.588536ms 
2019/05/27 19:02:32 Reading 923639 from 37014388 took 293.840987ms 
2019/05/27 19:02:32 Reading 1011576 from 39081944 took 19.692632ms 
2019/05/27 19:02:32 Reading 686863 from 96602623 took 415.004392ms 
2019/05/27 19:02:33 Reading 756044 from 27648203 took 352.367236ms 
2019/05/27 19:02:33 Reading 82935 from 30736283 took 349.169406ms 
2019/05/27 19:02:33 Reading 543634 from 13714932 took 385.610821ms 
2019/05/27 19:02:34 Reading 180882 from 66189515 took 403.066494ms 
2019/05/27 19:02:34 Reading 402243 from 72016420 took 522.877415ms 
2019/05/27 19:02:35 Reading 670611 from 81694652 took 499.330998ms 
2019/05/27 19:02:35 Reading 735972 from 28990319 took 358.606279ms 
2019/05/27 19:02:35 Reading 1025113 from 84507910 took 379.507234ms 
2019/05/27 19:02:35 That took 9.686597122s for 25 iterations, 387.463884ms per iteration
Finished in 0m:9s
FileSize 100M

I moved back to 2.25 and got the same results.

Great ok - at least it's working for now!

Do you hardlink into your gmedia directory from Sonarr/Radarr or do you just accept a copy?

It would be good to be able to hardlink instead of copying, to save IO and storage space.

Maybe create an issue on the mergerfs GitHub so it can get analyzed and fixed in the future. :slight_smile:

EDIT: I just saw you guys already did. Thanks. :slight_smile:

Hardlink, as that is the exact reason I use mergerfs.

Okay, great - I have just put in use_ino. Would I be right in thinking that if I do an ls -li I'll be able to see whether it's a hard link or a copy?

You'd see it in the Sonarr or Radarr logs. You can also just test a file by running an ln command yourself.
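For illustration, a quick way to check by hand (paths are made up; in GNU stat, %i is the inode number and %h the hard-link count):

ln /gmedia/torrents/Show.S01E01.mkv /gmedia/TV/Show.S01E01.mkv
# The same inode number and a link count of 2 on both names means
# it's a hard link, not a copy:
stat -c '%i %h %n' /gmedia/torrents/Show.S01E01.mkv /gmedia/TV/Show.S01E01.mkv
# ls -li shows the inode in the first column for the same check:
ls -li /gmedia/torrents/Show.S01E01.mkv /gmedia/TV/Show.S01E01.mkv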

Ahhhhh I think I know where I'm going wrong.

My torrents folder (that Deluge downloads to) is outside my local_media. Ahaaaaaaaaa, that's why you do that!
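Hard links only work within a single filesystem, so the torrent download folder has to live on the same underlying branch as the media folders. An illustrative layout (directory names are assumptions, not from this thread):

/data/local/torrents   # Deluge downloads land here
/data/local/Movies     # Radarr library folder
/data/local/TV         # Sonarr library folder
# All three show up under the merged mount (e.g. /gmedia/torrents,
# /gmedia/Movies), so an ln from torrents into Movies or TV resolves
# to the same local filesystem and succeeds.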

So does your torrent application point to your mergerfs mount too, e.g. gmedia/torrents?

Yep. I’ll do some testing tonight.

I found this thread while searching for a solution for my current Radarr/Sonarr/Plex installation on my Whatbox server. Up until now I've had an rclone cache mount connected to my Google Drive, which the Plex installation uses as the location for all my videos. Radarr/Sonarr write to a local directory on the Whatbox server, and then a custom script runs after a successful download that moves the files from the local directory over to Google Drive using rclone move. While this works, it doesn't seem like the best solution. I didn't know about mergerfs until I started trying to find a better way of handling things, so I'm still trying to wrap my head around it all.

Right now, I have a directory called gdrive which is my mounted rclone cache drive. I then have a local directory called Videos with two subdirectories (Movies, TV). These two subdirectories are where Radarr and Sonarr move completed files to. If I'm understanding mergerfs correctly, I would create a new "merged" directory that would list the combined contents of both my gdrive and my Videos directories. Am I correct so far? If so, is it this new merged directory that I now tell Radarr and Sonarr to move completed files to? If that's the case, does it just automatically know to actually write the contents of files to the local directory instead of trying to write to the rclone cache mount? Then, in theory, I would just need to set up some sort of automated script to run in the middle of the night (or day) to move the contents of the local Movies/TV folders to the Google Drive folders, correct?
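For illustration, a minimal sketch of such a mount, with the local branch listed first and the ff ("first found") create policy, which sends new files to the first listed branch with free space; the paths and option set here are illustrative, not the exact settings from this thread:

# Local branch first, rclone cache mount second; category.create=ff
# makes new files land on the local branch.
mergerfs /home/user/Videos:/home/user/gdrive /home/user/merged \
  -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff
# allow_other requires user_allow_other in /etc/fuse.conf; drop it if
# that's not permitted on a shared server.

With that in place, anything Radarr/Sonarr write through the merged directory physically lands on the local disk.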

If I'm correct in understanding how it should theoretically work, then I need to start figuring out how to actually get it installed and working on my Whatbox. Thanks in advance to anyone who can assist.

I'd suggest starting a new thread, though, as I'm trying to keep this one focused on questions related to my settings.

At a high level, yes: the way I use mergerfs is that I have a local disk that is always written to first, combined with an encrypted Google Drive mount. I use a script each night to move local content on that disk to my GD, so it is all seamless to Sonarr/Radarr/Plex, minus a worst-case ~1 minute polling window during which something may not be there yet. In months and months, I've never seen that happen, though.
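A single crontab entry is enough to schedule the nightly move; a sketch (the time and the script path are assumptions):

# Run the upload script every night at 02:30
30 2 * * * /opt/rclone/scripts/upload_cloud >/dev/null 2>&1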

Sorry about that. I posted here because I've pretty much been following your guide, as it's the closest thing I've found to a step-by-step process for what I'm looking to do. I'll post a follow-up question in a new thread. Thanks.

@Animosity022 This time I have a question specifically regarding your setup/settings. You mention that you currently use a cron job to transfer downloaded content over to your Google Drive each night. Currently, I have a custom script set up in Radarr/Sonarr to transfer files immediately after they're downloaded. Is there an advantage to going the route you took over what I'm currently doing? I'm trying to decide if it's worth changing things around (I don't mind doing so if there's an advantage). Thanks!

So for my use case/setup, I use the mergerfs/cron-job approach from my other post, as I like to take advantage of the fact that I use torrents and hard links.

Once a torrent finishes, it gets hard linked, so keeping it local under two names costs no extra IO or space: I get the best of both worlds.

I move things overnight, so the second copy that Sonarr/Radarr/Plex sees gets moved up to my GD transparently.

As for the torrent copy, that follows my seeding rules and eventually gets removed. That way each copy is fully independent of the other, and no extra space is used to keep both.

If that use case makes sense for you (I run everything on one box), I'd highly recommend it.

Ok, that makes sense. One advantage of your workflow over what I had previously been doing (and I didn't even realize this until recently) is that my old approach was causing duplicates and/or errors in Sonarr when multiple episodes were downloaded in quick succession. Each episode triggered the script to run, and when a new episode kicked it off again before the previous run had finished, it caused issues. It didn't happen often, and until I started considering your workflow, I hadn't connected the dots to realize that's what was causing the problems.

Anyway, I've decided to go your route and have things transferred over to Google Drive once a night. Would you mind helping me figure out what needs to be adjusted in your upload script to make it work for my server? Yours seems to be more detailed and (I'm assuming) more efficient than the one I've been using (pasted below for comparison's sake).

#!/bin/bash

# Upload to Google Drive
rclone move /home/seannymurrs/files/Movies/ gdrive:Movies
sleep 60s

# Remove empty directories
find "/home/seannymurrs/files/Movies" -mindepth 1 -type d -empty -delete

exit

I have a separate Sonarr script that is identical except it points to a folder called TV instead of Movies.
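Since the two scripts are identical apart from the folder name, they could be folded into one that also refuses to start while a previous run is still in flight, which would avoid the overlapping-run problem described above. A sketch under those assumptions (the lock file path is made up; remote and paths are copied from the script above):

#!/bin/bash
# Hold an exclusive lock so overlapping runs exit immediately
exec 9>/tmp/gdrive_upload.lock
flock -n 9 || exit 0

for dir in Movies TV; do
    # Upload to Google Drive
    rclone move "/home/seannymurrs/files/$dir/" "gdrive:$dir"
    # Remove empty directories left behind by the move
    find "/home/seannymurrs/files/$dir" -mindepth 1 -type d -empty -delete
done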

My local disk is /data/local, where everything initially gets written.

I use a script to do the move, and it removes empty directories when done. I exclude my torrents folder.

#!/usr/bin/bash
# RClone Config file
RCLONE_CONFIG=/opt/rclone/rclone.conf
export RCLONE_CONFIG

# Exit if already running
if [[ "$(pidof -x "$(basename "$0")" -o %PPID)" ]]; then exit; fi

# Move older local files to the cloud
/usr/bin/rclone move /data/local/ gcrypt: --log-file /opt/rclone/logs/upload.log -v --drive-chunk-size 64M --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs --user-agent animosityapp --fast-list
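The excludes file referenced by --exclude-from isn't shown in the thread; for illustration, it might contain just the seeding torrent folder so rclone leaves it alone:

# /opt/rclone/scripts/excludes -- patterns rclone skips during the move
torrents/**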