Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

What are the mount defaults for vfs and such and why are you so comfortable with them?
Should they not be tweaked at all?

Most of the defaults are listed here on the mount page:

Quite a number of folks have weighed in and helped make changes to improve the defaults, so there is less to configure for most use cases. There seems to be a large set of GSuite/GDrive/Plex users, so the majority of the defaults work great; the exception is dir-cache-time, which can be made a lot bigger since Google Drive polls for changes.

You don't have to, but it does speed up browsing and saves some API hits in the long run (not that you'll ever hit API quota issues).
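As a rough sketch, a mostly-default mount with only the directory cache raised might look like this (the remote name gmedia: and the mount point are placeholders, not this thread's actual values):

```shell
# Hypothetical sketch: defaults everywhere except --dir-cache-time,
# which is safe to raise because Google Drive polling
# (--poll-interval) picks up remote changes anyway.
rclone mount gmedia: /mnt/gmedia \
  --dir-cache-time 96h \
  --poll-interval 15s \
  --log-level INFO
```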

So when I first started configuring things, there were quite a number of settings; the majority of them have become defaults over time. I'm always of the mindset to keep it simple and use defaults and the shortest command :slight_smile:

that all makes sense

i guess i'm always thinking there can be some tweaking done since i moved to a dedicated server with 8 cores and 24G of ram, but playing and skipping through movies is still slow even when i use your exact default mount plus a 1G buffer size and no drive cache.

I can't help but wonder if it's possible to have my plex play as if i were playing the file directly from google drive itself.

i feel like maybe my rclone mount is fine and any delay or slow skipping through is a plex thing?

i am using your 96h dir-cache-time as well

That's where it starts to get a little more complicated as there are many factors in figuring out what potential playback issue is going on.

The majority of my playback is direct play, meaning it doesn't transcode anything, so there's virtually no load on the CPU and it's basically just seeking within the file if I skip ahead or rewind.

If it's transcoding, you have to get the data and let the CPU do its work and give you the data.

Each player for Plex is a bit different in how it buffers and how it works so there really isn't a one size fits all setup unfortunately.

ahhh i see
one last thing and i think im just comfortable using your setup etc.

should i be on the latest beta or stable using your setup?

I'm pretty much a stable guy unless there is a specific reason for a beta version.

video and music take a while to start using your defaults. can you recommend a tweak?

It's probably best to start a new post and share all your information there and follow the question/bug template to capture all that's needed.

Animosity, thank you so much for your major contributions and continuing support to the community.

I have some Problems with my build and hope you can help! :smiley:

First, here is my setup:

--Ubuntu 18.04, i7/16GB Ram, connecting only through ssh.

--Google Drive <remote:drive "gdrive"> / Media <remote:crypt "gcrypt"> / Movies, etc.

--I am using custom API key, client ID, etc.

--my .conf and mount scripts are near identical to yours.

--notably, yours do not include a "vfs" setting, I assume this is because you defer to defaults?

## Problem 1 ##
When uploading an MKV from a local folder on the server to gcrypt:movies (via an rclone move command), I got the following entries in my mount log, over and over in this pattern:

2019/08/05 07:01:22 ERROR : movies/abc123xyz 2019.mkv: WriteFileHandle.Write: can't seek in file without --vfs-cache-mode >= writes
2019/08/05 07:01:23 ERROR : movies/abc123xyz 2019.mkv: WriteFileHandle.Write: can't seek in file without --vfs-cache-mode >= writes
2019/08/05 07:01:24 ERROR : movies/abc123xyz 2019.mkv: WriteFileHandle.Write: can't seek in file without --vfs-cache-mode >= writes
2019/08/05 07:01:25 INFO : movies/abc123xyz 2019.mkv: Copied (new)
2019/08/05 07:01:27 ERROR : movies/abc123xyz 2019.mkv: WriteFileHandle.Write: can't seek in file without --vfs-cache-mode >= writes
2019/08/05 07:01:28 ERROR : movies/abc123xyz 2019.mkv: WriteFileHandle.Write: can't seek in file without --vfs-cache-mode >= writes
2019/08/05 07:01:29 ERROR : movies/abc123xyz 2019.mkv: WriteFileHandle.Write: can't seek in file without --vfs-cache-mode >= writes
2019/08/05 07:01:30 INFO : movies/abc123xyz 2019.mkv: Copied (new)

## Problem 2 ##
PMS via the official Plex docker container has its /data folder mapped to the local /home/user/gdrive/gcrypt folder (set up through the docker run config). But PMS does not see any of the subfolders in my mounted remote:crypt folder. gcrypt is mounted to /home/user/gdrive/crypt and contains subfolders including movies, tv shows, etc. PMS does not see these subfolders, so it seems PMS is not seeing the decrypted contents of gcrypt. How can I debug this?

## Problem 3 ##
I am running three systemd service scripts in the /etc/systemd/system folder, and none of them execute on startup:

(1) gdrive-rclone-crypt.service <this is near identical to your gmedia-rclone.service script, it mounts my gcrypt remote to /home/user/gdrive/crypt, user/group are my primary sudo user/group, I am not sure what to replace with the user where you had the value animosityapp>

(2) gdrive-rclone-nocrypt.service <this is near identical to your gmedia-rclone.service script, it mounts my gdrive remote to /home/user/gdrive/nocrypt, it is edited to execute AFTER (1) above, and it was edited to omit --rc because (1) and (2) can't both have this without triggering an error>

(3) gdrive-find.service <this is near identical to your gmedia-find.service script, modified to execute AFTER (2) above>
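For the AFTER ordering described in (1)–(3), a minimal sketch of what the dependency section of the second unit might look like (the unit names, user, and paths here are the hypothetical ones from this post, not confirmed values):

```ini
# gdrive-rclone-nocrypt.service (sketch) -- start only after the crypt mount
[Unit]
Description=rclone mount for the gdrive remote (nocrypt)
After=gdrive-rclone-crypt.service
Requires=gdrive-rclone-crypt.service

[Service]
Type=notify
User=youruser
Group=yourgroup
ExecStart=/usr/bin/rclone mount gdrive: /home/user/gdrive/nocrypt --dir-cache-time 96h
ExecStop=/bin/fusermount -uz /home/user/gdrive/nocrypt

[Install]
WantedBy=multi-user.target
```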

## Problem 4 ##
The plex docker container (via the official plex container) does not start at system startup/reboot. I have to manually start it. Do you know what config file I can modify so this starts up automatically?

Problem 1

add --vfs-cache-mode writes to your rclone mount command
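For example, on a mount command like the one in this thread (remote name and path are placeholders):

```shell
# Sketch: cache writes to local disk first so seeks during
# writing work, which is what the WriteFileHandle errors need
rclone mount gcrypt: /home/user/gdrive/crypt \
  --vfs-cache-mode writes \
  --dir-cache-time 96h
```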

Problem 2

Most likely a permissions issue. Make sure that the user running the Plex container has access to the gcrypt mount path.
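A quick way to check (the user name and paths here are placeholders; note also that if the container runs as a different user than the one that made the mount, rclone's --allow-other flag is typically needed for the FUSE mount to be visible at all):

```shell
# Who owns the mount point, and what permissions does it have?
ls -ld /home/user/gdrive/crypt

# Can the UID the Plex container runs as (PUID in the docker config)
# actually list the directory?
sudo -u plexuser ls /home/user/gdrive/crypt
```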

Problem 3 & 4

you probably didn't run systemctl enable "name of your service", which turns on auto-starting the service at boot.
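A sketch of the commands involved (the service and container names are the hypothetical ones from the post above):

```shell
# Enable the systemd units so they start at boot
sudo systemctl enable gdrive-rclone-crypt.service
sudo systemctl enable gdrive-rclone-nocrypt.service
sudo systemctl enable gdrive-find.service

# For the Plex container (Problem 4), set a restart policy on the
# existing container so Docker brings it back after a reboot
docker update --restart unless-stopped plex
```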

you have a gd-find and a gmedia-find. i guess you forgot to delete one of them.
also curious where your plex_autoscan service is.

I did some cleanup the other day to fix the names a bit as I was getting a little nitpicky. The find is really for my /GD mount so I fixed that and removed the other one.

I have no real use case for it anymore, as the only thing I was using it for was emptying the trash, and I decided to do that on its own.

i see you moved on from a .mount to a .service for your mergerfs
any reason?

Seemed like I had a lot of questions in relation to the naming of the mount file, as people had some unique combinations. I just moved it back to a service to simplify things for anyone else looking at it, rather than keeping it as a mount.

I'd argue the mount is easier if you have a more complex setup but they both achieve the same thing in the end.

ah i get it
i added the --attr-timeout as well

Always able to learn from folks here too!

@thestigma pointed me to that; it helps by reducing unneeded calls, especially since all my activity happens on a single machine doing the uploading, and I never get into the funky situation with corruption.

Although, even if I did, it's only media so I'd redownload it.

I just noticed your updates yesterday and updated my mounts to be a service file, and also added the attr-timeout option at 1000h. But today I began reading the rclone documentation on mount and it mentions corruption if changes occur outside of rclone. I have two separate machines running the mount: my Plex VM has the mount, and a VM that runs docker has my sonarr/radarr/etc... containers. The Plex VM also runs the upload script. Should I lower that timeout or remove it altogether?

Also, you say you no longer use plex autoscan, but I had to set that up in order for new items to automatically appear in my library. Do I have something set up wrong that Plex isn't automatically detecting the changes?

I'm also curious about usage of attr-timeout set at 1000h - I spotted this yesterday and it confused me.

@BinsonBuzz and @g4m3r7ag

The reason I added it to my setup is simply how my setup works. I use a single machine, upload only to the remote, and never change the files on the remote other than deleting them, so getting into a situation where a cached attribute causes corruption isn't a factor at all. It also puts less load on the rclone mount since all those items are cached in memory. It's more of a "micro" tweak, not something that makes a "macro" level difference.

That being said, if your setup is a mount that only writes once and just deletes, having a large value would help things.

Hope that helps.
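As a sketch, the flag in question on a mount command (the 1000h value comes from the discussion above; the remote and path are placeholders, and this is only reasonable when a single machine writes to the remote):

```shell
# Hypothetical: cache kernel attributes for a long time to cut
# unneeded FUSE calls. Only safe when nothing changes files on
# the remote behind the mount's back.
rclone mount gcrypt: /mnt/gcrypt \
  --attr-timeout 1000h \
  --dir-cache-time 96h \
  --poll-interval 15s
```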

I think I'm following. Since the mount on my Plex VM is the only one that writes to the Google Drive (when the upload scripts kick off), my chances of experiencing corruption are pretty low. The other machine would only be deleting from the drive if something is upgraded by radarr/sonarr, and since everything is encrypted I can't really manipulate anything through the drive GUI itself. However, since it's a "micro" change, I think for my own sanity I may remove it, as I have multiple machines running the same rclone mount.

What about removing Plex Autoscan? Should Plex be able to detect the changes and scan automatically?