Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Just sent you a PM with the link to the full log. I haven’t changed paths, and Emby should be done with scans already (the movies I see coming by have been in the library for a while).

Hmm.

So the ban is already there on the very first line of the log, so that doesn’t show much other than that it isn’t over yet.

I’d be super curious whether Emby is doing something, as I stopped using Emby some time back and only use Plex these days; having both provided no value for me.

The other thing I can see from the logs is that Emby is definitely running ffprobe against the files, so you can see them sequentially opening and closing a few times.

You should be able to see that via the Emby logs as well. I forget which log exactly, but you can run `ps -ef | grep ffprobe` to validate it’s actually analyzing the files.
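For the shell check, something like this works (`pgrep` is just a more convenient equivalent of the `ps -ef | grep` pipeline):

```shell
# Look for running ffprobe processes (Emby/Plex analyzing media files).
# -a prints the full command line, -f matches against it.
pgrep -af ffprobe || echo "no ffprobe processes running"
```

If Emby is mid-scan, you’ll see one line per ffprobe process with the file it is probing.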

Thanks a lot for looking at the huge log :smiley:.

I checked all the Emby settings and it doesn’t seem to have deep analysis like Plex does. It has real-time monitoring and only seems to monitor changes. So I kept looking and saw one of my Docker containers running Bazarr, which, if you don’t know it, automatically downloads subtitles based on Sonarr and Radarr.

It also does full library scans to see if it’s missing anything. I would expect it to look at the cached dir, but it seems to be causing all the log filling and API hits. I’ve just disabled the container, and my API hits have pretty much stopped and the log doesn’t fill as fast anymore.

Will see what it does overnight with Bazarr out, but it’s definitely a bummer since it’s a very useful program.

Oh, interesting. I’ve seen Bazarr, but I’m not sure how it works as I don’t use it, since it doesn’t do forced subtitles.

What do you mean by forced subtitles? And I expect you use the Subzero plugin in Plex for subtitles then?

If a show is mostly in English but parts of it are in another language, the subtitles that appear only for those parts are called forced subtitles.

In SubZero, it looks like:

(image: SubZero forced-subtitles setting)

You have 50TB of data for Plex? Hmm… I have well over 150TB. My movie collection is roughly 10k movies and well over 700 TV shows; my anime collection alone is over 300k episodes. Including documentaries, I’m sitting at well over 500k files just for Plex. And I’m not even close to being done: I have another 20k movies to obtain, plus 1,000+ TV shows and documentaries, and anime is constantly releasing, so I’m adding 10-24 episodes per series. I’ll try your configuration tonight. I’ll clone my VM, fire it up under a different IP, and see how that works, or switch the configurations.

I’m using:
Sonarr
Radarr
Lidarr
Plex (TVShows, Movies, Documentaries, 4K Movies, 4K TVShows, Anime, Music Videos)
AirSonic/SubSonic (Music)

Hi mate, rclone is purring along great now with all your help, thanks.

Data usage is still higher than Plexdrive, so trying to work that issue out.

What program do you use for the above?

For my router, I use OPNsense, and the plugin is vnStat.

https://opnsense.org/download/

It’s like PFSense, but better :slight_smile:

Ok, great, thanks. Looks like you can install it on Linux also; will give that a shot.
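For reference, a minimal sketch of trying vnStat on a plain Linux box (flags taken from vnStat’s own docs; the distro and the interface name `eth0` are assumptions):

```shell
# vnStat can be installed from most distro repos; on Debian/Ubuntu it is
# "sudo apt install vnstat" (one-time, starts a counter-sampling daemon).
# Once installed, querying it is simple:
if command -v vnstat >/dev/null; then
  vnstat --iflist        # list interfaces vnStat can monitor
  vnstat -i eth0 -d      # daily transfer totals (eth0 is an assumption)
else
  echo "vnstat not installed"
fi
```

Unlike iftop-style tools, vnStat reads kernel interface counters, so it adds no packet-capture overhead.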

Hello, I could use a little help please. I’m on Ubuntu 18.04 Server with the latest rclone.

I have set up a Google Drive remote mounted at /library, and then set up a cached remote mounted at /library-cache; the cache points to gdrive: in my config. I point Sonarr and Radarr at the mounted /library, and point Plex at the mounted /library-cache.

It’s just so slow, with regular crashes from rclone and a very long wait to play a movie, and that’s if it starts at all.

My config:

[gdrive]
type = drive
scope = drive
token = {"access_token":"blah"}

[gdrive-cache]
type = cache
remote = gdrive:
plex_url = http://127.0.0.1:32400
plex_username = blah
plex_password = blah
chunk_size = 50M
info_age = 1d
chunk_total_size = 100G

Mounting with these commands:

rclone mount --allow-non-empty --allow-other --umask 000 --uid 1000 --gid 1001 gdrive: /library &

rclone mount --allow-non-empty --allow-other --cache-db-purge gdrive-cache: /library-cache &

many thanks

Agreed. I also use Glances for all of my servers. I’m working on a web portal for them.

Also, I just realized you’re a 5FDP fan. Hell yeah! Metal heads! I’ve been a fan since The Way of the Fist dropped.


Check my post for my settings. If you want to use the cache, I’d start a new thread.

Hell yeah! Seen them many times!

Thanks for the reply, I will have a good read through tomorrow. I don’t mind whether I use a cache or not; I just want to avoid API bans.

Do you have local media assets turned on in the scan agent? I had the same problem, and after much debugging found out that the local media scanner will check the whole series folder after a new episode is added. This also resulted in bans for me.

Interesting. I do have those turned on for my TV library. Not sure why.

I haven’t had any bans since I updated rclone to the latest version, but I’ll turn off local media assets just in case. Can’t hurt to have fewer API hits and quicker library scans.

I’m running multiple instances of Plex, so when those would all scan a new episode, it would happen quickly and I would get a ban for “download limit exceeded for file”. In a normal setup it’s more that it avoids unnecessarily scanning the whole series folder, which should result in fewer API calls.

I think local media assets is enabled by default, but I’m not sure. I thought it was needed for subtitles, but when playing media Plex will do a quick check and update the available subtitles anyway. So I guess it’s only for local posters and such.


I tried looking at your settings @Animosity022 and using them for what I could.

Two things I don’t understand: why do you use the rc and enable remote control? And is there no other way to do the vfs/refresh than having the rc enabled?

I run everything on a Kimsufi server, and don’t want to open up anything unneeded, but it might be I’m missing the point here? Can’t I fill the dir-cache some other way or run the vfs/refresh directly, without the rc?

Mount cmd:
/usr/sbin/rclone mount GkcryptNoCache: /home/jsd/plex/acd-sorted \
  --allow-other \
  --buffer-size 256M \
  --dir-cache-time 72h \
  --drive-chunk-size 32M \
  --log-level INFO \
  --log-file /home/jsd/log/rclone.log \
  --umask 002 \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off \
  --retries=5 \
  --timeout=30s \
  --low-level-retries=3 \
  &

Yes, you can just do a `find .` on the mount point.

The only reason I really do it is that it’s a little faster, as it can use fast-list.

All in all, it’s not a showstopper by any means as I used to use a find.
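The two warm-up approaches discussed above look roughly like this (mount path taken from the mount command earlier in the thread; adjust it to your own):

```shell
# Mount point from the command above; adjust to your own.
MOUNT=/home/jsd/plex/acd-sorted

# Option 1: no remote control needed. Walking the tree with find forces
# rclone to list every directory, which warms the dir cache.
[ -d "$MOUNT" ] && find "$MOUNT" -type d >/dev/null || true

# Option 2: if the mount is started with --rc, one call refreshes the
# whole directory cache and can take advantage of fast-list:
# rclone rc vfs/refresh recursive=true
```

With option 1 nothing extra is exposed on the box, which matters on a public server like a Kimsufi; option 2 only requires the rc port to listen on localhost.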