Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Ah, I see what you mean. The quality options are a bit different in every Plex client. On the TV apps, it's Maximum Quality plus a second Direct Play option set to Auto or Force On; on, say, Plex Media Player, you have Original Quality and a Direct Play checkbox. With TVs I believe you can force Direct Play (for video, that is; audio will likely be converted, except on the SHIELD) by disabling the max bit depth limit (the default I think is 4.2, but HDR exceeds that), and it will then try to play the file without any transcoding on the video side.

You can't force a direct play (other than by actually editing the profile for the device on the Plex server).

Based on the device profile and the media codecs, the server decides whether it can direct play or not.

You can force a direct stream by turning off direct play, or you can force a transcode by turning down the quality or by starting playback at a lower quality.

So I continue to tweak my settings, but no matter what I do my scans still take hours to complete. I am connected to 3 Google Drives: one is a folder share, one is my crypt, and the last is a team drive. I use VFS for all 3 of these mounts. I have Plex set to monitor the folders and to scan twice a day because of how long a scan takes to complete. My crypt is just over 10TB, which as I understand it shouldn't be an issue at all.
Here are my mount commands

rclone mount gdrive_shared: /mnt/gdrive --allow-other --read-only --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --stats 1m --log-level INFO --log-file /var/log/rclone/rclone-shared.log &
rclone mount gcrypt2: /mnt/gdrive_crypt/ --allow-other --read-only --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --stats 1m --log-level INFO --log-file /var/log/rclone/rclone.log &
rclone mount tdrive: /mnt/team --allow-other --read-only --size-only --dir-cache-time 72h --drive-chunk-size 32M --fast-list --vfs-read-chunk-size 128M --vfs-cache-max-age 675h --vfs-read-chunk-size-limit off --buffer-size 1G --stats 1m --log-level INFO --log-file /var/log/rclone/rclone-team.log &
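
Two notes on those commands: --fast-list has no effect on a mount (rclone only uses it for recursive listing operations like sync or lsd), and backgrounding with & means the mounts die with your shell. A more robust approach is a systemd unit per mount; a minimal sketch for the crypt mount, with a hypothetical unit name and the flags copied from above (minus --fast-list):

# /etc/systemd/system/rclone-gcrypt.service  (hypothetical unit name)
[Unit]
Description=rclone mount for gcrypt2
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount gcrypt2: /mnt/gdrive_crypt \
    --allow-other --read-only \
    --buffer-size 1G \
    --dir-cache-time 72h \
    --drive-chunk-size 32M \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --log-level INFO --log-file /var/log/rclone/rclone.log
ExecStop=/bin/fusermount -uz /mnt/gdrive_crypt
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with: systemctl enable --now rclone-gcrypt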

I am using a Hetzner server, and I applied a Hetzner-specific modification so I now have the custom kernel as well as BBR in my network stack.
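
For anyone wanting to replicate the BBR part: on a 4.9+ kernel it is just two sysctl settings. A minimal sketch to make it persistent and verify it took:

# /etc/sysctl.d/99-bbr.conf
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr

# apply and verify
sudo sysctl --system
sysctl net.ipv4.tcp_congestion_control    # should print: net.ipv4.tcp_congestion_control = bbr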

When I do start a movie, some will start up almost instantly while others take about a minute to begin while the stream pulls down from Google. Watching the logs, I can see the stream sometimes come in at 1-2MB/s and then slow to around 800KB/s. For me the stream is always in the 4Mbps bracket at 720p. If I bump it up to 1080p, it will sometimes buffer randomly; that is probably my own connection and not an issue with the server. I'm just trying to get this to complete a scan a lot faster than it currently does, and it'd be awesome if I could get it to load a movie faster as well. Thank you for your help with this.

I'm on Hetzner too. My setup is built with Emby; a scan of the whole library takes 1:30h.
Since about 3 rclone versions ago (1.42), my startup time has been much longer than before.
4K remuxes take about 8 seconds to start; normal 1080p releases take 3 seconds.

rclone mount:
buffer size = 1G
vfs chunk size = 32M
vfs chunk limit = 2G
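
Translated into actual flags (my reading of the shorthand above), that mount would be:

rclone mount \
    --buffer-size 1G \
    --vfs-read-chunk-size 32M \
    --vfs-read-chunk-size-limit 2G \
    remote: /mnt/point

(remote: and /mnt/point are placeholders; the poster did not share theirs.)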

Hope that helps you…


Thank you @Dual-O, I just put those changes in and we'll see what it does. 🙂

I have to assume this is the best place to get the answers I need.
I've been using plexdrive for a while, since ACD was killed and I went to GD, and haven't had many issues until recently: load times slowed down, scan times grew, etc. All in all, I just wanted to be using one solution. It also seems like development has plain old crapped out on plexdrive.

So I started looking at rclone and its cache backend. I do use crypt for my gdrive; maybe I shouldn't, but I am old fashioned.
First, I couldn't find a version of rclone that would let me use the VFS commands. I finally found one on the forums from July, so I am not sure it is the latest one to be using. It does work, but the initial library scan is painfully slow: I am going on 16 hours of scanning (if that is normal, then OK). The other issue is that it takes about 20 to 45 seconds to start playing something, direct or otherwise. And when it does start, 50% of the time it will play for a good 20 to 50 seconds, buffer once more, and then play fine after that. So that is a good sign.

So let me give you some details about my environment, how I have things set up, and how I would like to keep them.
The box is:
Dual Xeon E5-2670 CPUs
64 GB of RAM
Free memory at the time of writing:

free -gh
              total        used        free      shared  buff/cache   available
Mem:            62G         12G        7.0G         25M         43G         49G
Swap:           89M         34M         55M

480 GB SSD drive
Ubuntu 16.04 LTS
1Gb/s network connection
Usenet only, no torrents

I do use Cloudbox for my stuff: https://github.com/Cloudbox/Cloudbox/wiki
So I like the directory layouts etc.
Which are:

/mnt/local -->> this is the dir all my downloads go to
/mnt/plxcrypt -->> this is my decrypted gdrive mount
/mnt/local/unionfs -->> this is a combined mount of the two above, with local being RW; this is what the Plex container points to for reading, and what Sonarr/Radarr also use for reading/writing (see the unionfs sketch below)
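
For reference, a union like that is typically created with unionfs-fuse; a minimal sketch using the paths above (the exact flags Cloudbox uses may differ, and the binary may be called unionfs or unionfs-fuse depending on the distro):

# writes go copy-on-write to the RW branch (/mnt/local);
# the decrypted gdrive mount stays read-only underneath
unionfs -o cow,allow_other /mnt/local=RW:/mnt/plxcrypt=RO /mnt/local/unionfs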

I use plex_autoscan from -->> https://github.com/l3uddz/plex_autoscan
Which works well.

And for uploads I use -->> https://github.com/l3uddz/cloudplow
This monitors my /mnt/local/, and when it hits 200GB it sends everything up to GD. It works and keeps my uploads batched, so I am not uploading a 720p followed by the 1080p 3 hours later.
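
Under the hood that upload step boils down to an rclone move into the encrypted remote; a rough hand-rolled equivalent using the [upload] remote defined below (cloudplow adds the 200GB threshold and scheduling on top):

# skip the union mountpoint that lives inside /mnt/local,
# and skip files that may still be downloading
rclone move /mnt/local upload: \
    --exclude 'unionfs/**' \
    --min-age 30m \
    --transfers 4 \
    --log-file /home/plex/logs/upload.log \
    --log-level INFO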

So with that being said, I would like to stick with the above if possible: simple and easy for me to deal with.
I know this is a lot to digest; if I missed something, I apologize. Now let's move on to my startup command and .conf file.

Here is my rclone mount command.

/home/plex/beta/rclone --config=/opt/rclone/rclone.conf mount \
    --allow-other \
    --buffer-size 6G \
    --dir-cache-time 72h \
    --drive-chunk-size 32M \
    --fast-list \
    --log-level INFO \
    --log-file /home/plex/logs/rclone.log \
    --umask 002 \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    GDcrypt: /mnt/plxcrypt

And here is my rclone.conf. The remote called upload is just used by cloudplow to upload to GD.

[GD]
type = drive
client_id = N/A
client_secret = N/A
token = N/A

[GDcrypt]
type = crypt
remote = gcache:
filename_encryption = standard
password = N/A
password2 = N/A

[gcache]
type = cache
remote = GD:/plxcptk
plex_url = http://127.0.0.1:32400
plex_username = N/A
plex_password = N/A
chunk_size = 10M
info_age = 24h
chunk_total_size = 10G
plex_token = N/A

[upload]
type = crypt
remote = GD:/plxcptk
filename_encryption = standard
password = N/A
password2 = N/A

I have noticed that, for whatever reason, the cache stored under /home/plex/.cache is listed under the encrypted names; maybe this is normal.
Also, my logs under /home/plex/logs/rclone.log display everything encrypted as well.
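
(That is expected: both the cache backend's chunk store and the log lines come from below the crypt layer, so they only ever see encrypted names. If you need to map a name from the log back to the real file, rclone ships a helper for exactly this:

rclone cryptdecode GDcrypt: <encrypted-name-from-the-log>

which prints the decrypted filename.)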

Again, I hope I included enough info to guide me on what I might be doing wrong and where I can improve to make this work better. If I forgot something, I am sorry; let me know and I will get what I missed.

Thanks for any help that I can get on this.

Are you playing from the union mount? If so, you need to add -o sync_read to the mount.
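
That is just a FUSE option appended to the mount command, e.g.:

rclone mount -o sync_read GDcrypt: /mnt/plxcrypt   # plus your usual flags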

I also do not use the cache backend, just the vfs read chunk options; the defaults on the latest version are a great starting point.

Can you tell me where to get the latest? The version I found was from July, and with any of the other rclone builds I can't get a VFS command to work.
And yes, I am playing from the union mount as well.

Sure.

https://rclone.org/downloads/

You would need the fuse option if you are using unionfs as noted in my previous post.
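
For what it's worth, the downloads page also offers a one-line install script that always fetches the current stable build:

curl https://rclone.org/install.sh | sudo bash
rclone version    # confirm which release you ended up with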

Ok, dang, I feel stupid: I was using 1.41, not 1.43. I am getting it to mount now without the previous errors.

Is anything wrong with the command I am using for the mount, or is it OK?
And should the library scan really take that long? It seems to be taking forever.

And since I am encrypted, I assume it is normal for my logs to contain encrypted names versus the real names of files, correct?

I don't use the cache backend personally, and I use encryption as well.

Cache tends to be a little slower in my testing than using the vfs read chunk defaults. You could always try without it and see how it works.
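
Concretely, dropping the cache backend from the conf quoted earlier would mean pointing the crypt remote straight at the drive remote and letting the mount's vfs-read-chunk flags handle chunking. A sketch reusing the remote names from this thread:

# rclone.conf -- crypt sits directly on the drive remote; [gcache] goes away
[GDcrypt]
type = crypt
remote = GD:/plxcptk
filename_encryption = standard
password = N/A
password2 = N/A

# mount with chunked reading (these two are the defaults, shown explicitly)
rclone mount GDcrypt: /mnt/plxcrypt \
    --allow-other \
    --dir-cache-time 72h \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off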

If I fully scan a new library, it takes me 2-3 days for roughly 40TB or so. I use my own API key and have a gigabit link.

What do you mean, your own API key? For Gdrive or for rclone? And what does using your own API key grant you?
Also, can you point me to your mount command that only uses VFS?

For my Google Drive, as it gives you your own quotas.
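
(For context: "your own key" is a client_id/client_secret pair created in the Google API Console and entered when running rclone config; in the conf posted earlier it fills the fields redacted as N/A. With placeholder values:

[GD]
type = drive
# both values come from your Google API Console project; these are placeholders
client_id = 123456789.apps.googleusercontent.com
client_secret = your-client-secret
token = N/A
)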

All my info is posted at the top of this thread with links and a ton of information on what I use and why I use it.

😎

Ok, I am using my own API key, I believe; I have my own G Suite account, so I have to assume it is the same.

You would have had to set up your own key and enter it in. You'd be sure if you had done those steps.

Yeah, I have to imagine I did what you are talking about; I did it long ago in the plexdrive and rclone-on-ACD days.


Just like the above.

Thank you for all the work you have put into this. I am going to try to tackle this next week on my Kimsufi server with little Linux knowledge.
I am currently using unionfs as my mount and an old version of rclone.

For Plex, do you use "Scan my library automatically" or "Run a partial scan when changes are detected"?

I've just seen on your GitHub:

--drive-chunk-size 32M

What role does this play when playing files? Thanks

I have the "partial scan" option checked in Plex, but for movies it scans the whole thing. I really just ignore it as it's only a few API hits here and there.