Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Not to mention the weird .partial~ files that it drops in, uploads, and then never gets the mkv! This setup is working GREAT on my home server but is useless on my seedbox with Sonarr/Radarr.

(I know what the partial files are.)


You can use unionfs/mergerfs to just write them locally and upload via a cron job with an exclude filter for the partials. Pretty much the same as writing directly to it.

I was hoping to move from move.sh, a service that keeps it running, and plexdrive to using rclone vfs outright, without having to worry about it. All I did in the end was swap plexdrive out for rclone vfs and the rest is still running perfectly! Ideally the scripts would be disabled and archived.

I add these excludes to stop the weird files getting uploaded:

--exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~*
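For reference, a rough sketch of how those excludes could sit in an upload command; the local path, remote name, and extra flags here are my assumptions, not from the post:

    # hedged example: /local/media, GDrive1: and the log path are assumptions
    rclone move /local/media GDrive1:media \
        --exclude "*fuse_hidden*" --exclude "*_HIDDEN" \
        --exclude ".recycle**" --exclude "*.backup~*" --exclude "*.partial~*" \
        --transfers 4 --log-file /var/log/rclone-move.log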

Hmm… Why do my results look so different, as if rclone never reads from cache?


    real    0m9.094s
    user    0m0.336s
    sys     0m0.048s
    root@jupiter:~# time mediainfo /mnt/Plexdrive/movies/a/A.Beautiful.Mind.2001.1080p.BluRay.x264.VPPV/A.Beautiful.Mind.2001.1080p.BluRay.x264.VPPV.mp4 | grep blah

    real    0m8.877s
    user    0m0.324s
    sys     0m0.040s
    root@jupiter:~# time mediainfo /mnt/Plexdrive/movies/a/A.Beautiful.Mind.2001.1080p.BluRay.x264.VPPV/A.Beautiful.Mind.2001.1080p.BluRay.x264.VPPV.mp4 | grep blah

    real    0m3.847s
    user    0m0.332s
    sys     0m0.048s

I'm using these options:

          --allow-other \
          --allow-non-empty \
          --dir-cache-time=4h \
          --vfs-cache-max-age=24h \
          --vfs-read-chunk-size=40M \
          --vfs-read-chunk-size-limit 2G \
          --buffer-size=88M
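To show how those flags fit together, here is a hedged sketch of a full mount command; the remote name and mount point are assumptions:

    # sketch only: GDrive1: and /mnt/rclone are assumed names
    rclone mount GDrive1: /mnt/rclone \
        --allow-other \
        --allow-non-empty \
        --dir-cache-time 4h \
        --vfs-cache-max-age 24h \
        --vfs-read-chunk-size 40M \
        --vfs-read-chunk-size-limit 2G \
        --buffer-size 88M &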

You'd either have to compile with the cmount option and use:

   -o auto_cache
   -o sync_read 

Or you could use a unionfs/mergerfs mount and pass those same options as well; there's a sketch of that below the quote. If you man mount.fuse, that gives you auto_cache, which lets the OS use its memory for some file system cache:

auto_cache
This option enables automatic flushing of the data cache on open(2). The cache will only be flushed if the modification time or the size of the file has changed.
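As a hedged illustration of the mergerfs route, passing those FUSE options might look something like this; the branch paths and mount point are assumptions:

    # sketch: /local/gmedia and /mnt/rclone are assumed branch paths
    mergerfs /local/gmedia:/mnt/rclone /gmedia \
        -o defaults,allow_other,use_ino,auto_cache,sync_read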

Interesting. But I didn't think it would matter this much, because rclone vfs has layers of cache for folders and files, so even without those two options it should read from cache.

Also, the weird thing is I never see rclone cache big files, only small files. For example:

    root@jupiter:~# ls /root/.cache/rclone/vfs/GDrive1/movies/a/A.Bad.Moms.Christmas.2017.1080p.WEB-DL.x264/
    A.Bad.Moms.Christmas.2017.1080p.WEB-DL.x264.en.srt

I never see rclone cache the actual movie file

Maybe that's the confusion, because rclone doesn't cache anything with vfs, so you would be correct in not seeing it cached anymore :wink:

Oh ok… So the only way to get something like mediainfo to read from cache is the two options you mentioned above then?

I kind of don't want to compile or go back to using unionfs just for the sake of saving 9 seconds… :smile:

So think of it like this: it really won't matter much unless you are scanning a library. It's a bit of effort if you are just using a direct mount, but if you are already using unionfs or mergerfs, it's easy to add those 2 options.

Building really is simple, as I just run this when a release comes out:

#!/bin/bash

export GOBIN=$HOME/go/bin

# fetch/update the rclone source and its dependencies
cd
go get -u -v github.com/ncw/rclone/...
go get -u -v github.com/ncw/rclone

# build with the cmount tag so the extra FUSE mount options are available
cd ~/go/src/github.com/ncw/rclone
go install -tags cmount

That installs the rclone binary to my GOBIN and I just run from there.

I'm OK with compiling, but I don't want to, because currently my solution is very simple: the official rclone beta, automatically updated to the latest version, with rclone move for uploads. That's it. I used unionfs when I used plexdrive, but I like it when things are simple.

Maybe @ncw will consider merging this feature into a future rclone release? Then users wouldn't need to compile or use unionfs. :slight_smile:

In your systemd mount file? I suppose I could merge the script's settings into the mount to cut the script out and let the mount manage all uploads, thanks for the idea!
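For anyone following along, a rough sketch of what a systemd service for the mount could look like; the unit name, remote, mount point, and log path are all assumptions, not taken from anyone's actual config:

    # hypothetical /etc/systemd/system/gmedia.service; names and paths are assumptions
    [Unit]
    Description=rclone VFS mount for Plex
    After=network-online.target

    [Service]
    Type=simple
    ExecStart=/usr/bin/rclone mount GDrive1: /gmedia \
        --allow-other \
        --dir-cache-time 4h \
        --vfs-read-chunk-size 40M \
        --vfs-read-chunk-size-limit 2G \
        --buffer-size 88M \
        --log-file /var/log/rclone/mount.log
    ExecStop=/bin/fusermount -uz /gmedia
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target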

Though how are you handling hardlinking from Sonarr/Radarr? Copying causes heavy I/O, and I'm using a unionfs mount at the moment to merge a local dir with plexdrive, with a script to upload. How could VFS replace that? Swapping it in for plexdrive+script says that hardlinking isn't supported, which… I think just answered my own question?

So you guys got me thinking a little bit.

mergerfs does support hard links when you write to the mergerfs file system.

That being said, I made a directory in my /gmedia that has my downloads / torrents that I'll never sync to my GD and only keep it local.

That way I can hardlink in Sonarr/Radarr directly on my /gmedia rather than crossing file systems.

Still doesn't work if you are only using rclone, but if you were, you couldn't hard link anyway :slight_smile:
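To make that layout concrete, here is a hedged sketch of that kind of mergerfs setup; the branch paths are assumptions, with the local branch listed first so that downloads and the hardlinks Sonarr/Radarr create stay on the same (local) filesystem:

    # sketch: /local/gmedia holds downloads/torrents, /mnt/rclone is the cloud mount
    # category.create=ff sends new files to the first (local) branch
    mergerfs /local/gmedia:/mnt/rclone /gmedia \
        -o defaults,allow_other,use_ino,category.create=ff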

For my rclone move job, I have a unionfs mount for a local view, and then an rclone move script to move files to the cloud.

I'm using unRAID and I had hardlinking working by mounting the unionfs mount at the same location as my user shares, e.g. /mnt/user/unionfs, and then mapping /mnt/user to Sonarr/Radarr. However, I ran into problems, as you're not really supposed to mount things at /mnt/user (even though other users seem OK with it), and I've reverted to /mnt/disks/unionfs.

Yes, I've lost hardlinking, but I think the one-off I/O hit is worth it to save any ongoing I/O once the file has been moved off the system.

Edit: I've realised if I map /mnt/ then hardlinking should work between /mnt/disks and /mnt/user. Will give it a try.

Thanks for all the information in this thread. I tried searching, but didn't have much luck and apologize if this is a redundant question.

Is the newer way of mounting using vfs with either mount or cmount compatible with encfs encryption?

I am a current pd user and while I have no issues at all using it, out of curiosity, I wanted to give this a shot just to try it out.

However, I am facing issues in that I can get the encrypted remote mounted, but cannot get any of the others mounted, such as the remote-decrypted, local-encrypted, local-decrypted, and a unionfs mount.

Are there any obvious things that one could try?

edit: The more I think about it, I can't imagine a scenario where encfs would not be supported, as it is just an encryption layer. I must be doing something incorrectly. Will have to keep checking.

I personally have never used encfs. I'd suggest making a new thread and sharing all the info: what you are trying to do and what errors you are seeing, and I'm sure we can help out.

I agree with you, as based on what I know I don't see why it would not work.

Thanks for sharing this. I was able to get Plex up and running on DigitalOcean with encrypted media on Google Drive in no time.

Just to share my experience and a couple of pitfalls for others who may want to try the same.

On a brand new Ubuntu Bionic install, you need to install the packages git, golang-go and libfuse-dev to build the ncw rclone.

Then you need to create the mount folder (and chown it to your user) and the logs folder (and perhaps touch the log file; I created it along with the folder before testing) before you can run the service file.
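Roughly, the prep steps described above could look like this; the mount point and log locations are assumptions based on the rest of the thread:

    # hedged sketch of the Ubuntu 18.04 prerequisites; /gmedia and the log path are assumptions
    sudo apt install git golang-go libfuse-dev            # build deps for rclone with cmount
    sudo mkdir -p /gmedia && sudo chown "$USER": /gmedia  # mount point owned by your user
    sudo mkdir -p /var/log/rclone && sudo touch /var/log/rclone/rclone.log  # log folder and file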

Other than that everything worked fine.

One question to the OP: What is the script "/home/felix/scripts/GD_find" supposed to do?

Thanks

EDIT: Also had to edit /etc/fuse.conf to enable "user_allow_other" before enabling the service.

That just launches a 'find /gmedia &' in the background to 'prime' the directory. Not really needed but I like to have the cache there :wink:
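As a hedged guess at what such a script looks like (the actual GD_find wasn't posted), something along these lines would do it:

    #!/bin/bash
    # hypothetical sketch of GD_find: walk the mount in the background
    # so rclone's directory cache gets primed after the mount comes up
    find /gmedia > /dev/null 2>&1 &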

@Animosity022, how do you deal with new files getting added remotely? I assume that you're running this off one box and might not have an answer. I'm still using vfs and tried to use the cache mount on top of it to get the best of both worlds (quicker scanning, cache, etc.), but when new files were added by my seedbox, my local mount wouldn't pick up the file. I tried editing my rclone.conf to have a low info_age (5m) and used --dir-cache-time 48h, but new files wouldn't get picked up. Am I stupid? :smiley:

I have it all on one box and use an rclone upload script overnight.

--poll-interval duration   Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)

Regardless of the dir-cache, it should poll for changes and expire stuff, so it shouldn't be more than 1 minute for new items to appear.
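Since --poll-interval has to stay below --dir-cache-time, a combination like the one below keeps a long directory cache while still picking up remote changes within about a minute; the values are illustrative and the remote/mount point are assumed names:

    # illustrative only: GDrive1: and /gmedia are assumed names
    rclone mount GDrive1: /gmedia \
        --allow-other \
        --dir-cache-time 48h \
        --poll-interval 1m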