SOLVED: Rclone cache can't connect to Plex

In my cache logs I have:

2018/06/15 22:17:14 INFO : cache: Connected to Plex server: https://192.168.30.90:32400
2018/06/15 22:17:17 ERROR : plex: websocket.Dial wss://192.168.30.90:32400/:/websockets/notifications?X-Plex-Token=REDACTED: dial tcp 192.168.30.90:32400: connect: no route to host

happening all the time. Anyone know how to fix it? I’m wondering if this is why my playback is so slow - it takes around 20+ seconds to start (see: Guide to replacing plexdrive/unionfs with rclone cache)

Thanks in advance for any help

Gotta be on a later beta version, as the current release only does HTTP:

I’m on the latest beta. Thanks for finding the ticket. I’ll post in it to see if related, if not I’ll raise a new one

That is kind of indicating a networking problem. Can you ping 192.168.30.90 from the machine you are running rclone on?
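A quick sketch of the checks being suggested here - the host and port are the ones from the logs above, so substitute your own. Failures just echo rather than aborting, since ICMP can be blocked by a firewall even when the TCP port is fine:

```shell
# Reachability checks from the box running rclone. Host/port are the
# ones from this thread - substitute your own.
PLEX_HOST=192.168.30.90
PLEX_PORT=32400

# ICMP first (can be blocked even when TCP works)
ping -c 2 -W 2 "$PLEX_HOST" || echo "ping failed"

# Then the actual Plex port; /identity answers without authentication,
# so an HTTP 200 here means the port itself is reachable
curl -sS --max-time 5 -o /dev/null -w 'HTTP %{http_code}\n' \
  "http://$PLEX_HOST:$PLEX_PORT/identity" || echo "curl failed"
```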

Thanks Nick. You’re right - I can’t ping the docker from the server that’s running it. My Plex docker is running in a VLAN, and I’ve learnt there’s a macvlan quirk where containers outside a VLAN can’t communicate with containers inside one, but I never imagined that applied to the host itself as well.
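For anyone who hits the same macvlan wall: the commonly suggested workaround is to give the host its own macvlan "shim" interface, because the kernel deliberately drops traffic between a macvlan parent and its child interfaces. This is only a sketch - eth0, the shim address 192.168.30.2 and the 192.168.30.0/24 subnet are assumptions, so substitute your parent NIC and VLAN subnet:

```shell
# Host-side macvlan shim (run as root on the Docker host). Errors are
# echoed instead of aborting so the sketch is safe to dry-run.
ip link add macvlan-shim link eth0 type macvlan mode bridge 2>/dev/null \
  || echo "link add failed (needs root and a real parent NIC)"
ip addr add 192.168.30.2/32 dev macvlan-shim 2>/dev/null \
  || echo "addr add failed"
ip link set macvlan-shim up 2>/dev/null \
  || echo "link up failed"
# Route the containers' subnet via the shim instead of the parent NIC
ip route add 192.168.30.0/24 dev macvlan-shim 2>/dev/null \
  || echo "route add failed"
```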

Will investigate to see if I can fix. I’m hoping this might explain my slow media starts. Will share if I learn anything.


It sounds like the issue is that rclone is not running on the same network as the Plex server. Are they on the same network?

Later edit: you already confirmed that they’re not. And yes - without talking to Plex, cache won’t increase its workers, which causes slow reads. I would disable the feature completely until connectivity is restored.
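To spell that out (a sketch with hypothetical remote names): disabling the integration just means removing the plex_* keys from the cache remote, e.g.:

```
[cache]
type = cache
remote = gdrive:crypt
chunk_size = 10M
info_age = 48h
chunk_total_size = 32G
# plex_url, plex_username, plex_password and plex_token removed -
# with no Plex details set, cache uses a fixed number of workers
```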

Thanks remus. I’ve tried disabling the plex integration and it doesn’t speed things up. Really stumped

What’s your config/mount command?

rclone mount --allow-other --dir-cache-time=1h --cache-db-path=/mnt/cache/ssd/rclone --cache-chunk-size=10M --cache-total-chunk-size=30G --cache-info-age=2h --cache-db-purge --cache-workers=10 --cache-tmp-upload-path /mnt/user/rclone_upload --cache-tmp-wait-time=60m --buffer-size 0M --log-level INFO gdrive_media: /mnt/disks/google_media

[gdrive]
type = drive
client_id = REDACTED.apps.googleusercontent.com
client_secret = REDACTED
scope = drive
root_folder_id = 
service_account_file = 
token = {"access_token":"REDACTED":"REDACTED","expiry":"2018-06-14T19:04:01.421796372+01:00"}

[cache]
type = cache
remote = gdrive:crypt
plex_url = http://192.168.30.90:32400
plex_username = Binson_Buzz
plex_password = REDACTED
chunk_size = 10M
info_age = 48h
chunk_total_size = 32G
plex_token = REDACTED

[gdrive_media]
type = crypt
remote = cache:
filename_encryption = standard
directory_name_encryption = true
password = REDACTED
password2 = REDACTED

The guys over at unRAID have told me what’s wrong with my network settings. Just waiting for the family to go to sleep so I can reboot the server


Plex connection issues resolved.

With a few tweaks I’ve got my launch times down to around 10-12s on average, which is acceptable for me since I know what’s going on behind the scenes - will keep an eye on the WAF! I think everyone who’s getting faster speeds is using a VPS.

Adding --cache-chunk-no-memory seems to have helped. I’d misread the flag: by default the chunks are kept in memory as well as cached on disk, and this flag turns the in-memory copy off, rather than saying not to use memory at all - duh!

I played with smaller and larger chunk sizes, but both 5M and 15-20M made things worse.

I bumped workers up to 50 after reading that this is a ceiling, not the number always used. I’ll increase --dir-cache-time and --cache-info-age once my background rclone move job finishes moving content.

rclone mount --allow-other --dir-cache-time=1h --cache-db-path=/tmp/rclone --cache-chunk-size=10M --cache-total-chunk-size=8G --cache-info-age=2h --cache-db-purge --cache-workers=50 --cache-chunk-no-memory --cache-tmp-upload-path /mnt/user/rclone_upload --cache-tmp-wait-time=60m --buffer-size 0M --log-level INFO gdrive_media: /mnt/disks/google_media

50 cache-workers seems bad: for each file that you have playing it can open up to 50 workers, so that’s going to get you rate limited.

Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use. Default: 4

If I’ve read this right, when Plex is linked this is a ceiling, i.e. it’s adaptive rather than fixed - hopefully I’ll only hit it on the rare occasions when there’s high concurrent usage (not on my server), so it shouldn’t affect me.

The quota for queries per second is 10 unless you’ve gotten insanely lucky and convinced Google to upgrade it.


If you have 3 people playing movies, then even if each stream only ramped up to 10 workers you’d be pushing 30 queries per second, since they run concurrently.
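The arithmetic, as a toy sketch (assuming, pessimistically, that each worker can fire roughly one Drive API query per second - the function name is mine, not rclone’s):

```python
QPS_QUOTA = 10  # default per-user Drive quota discussed above (~10 queries/s)

def peak_qps(workers_per_stream: int, concurrent_streams: int) -> int:
    """Worst case: every stream saturates all of its workers at once."""
    return workers_per_stream * concurrent_streams

print(peak_qps(10, 3))  # 30 queries/s - triple the quota
print(peak_qps(4, 3))   # 12 with the default 4 workers - still over
```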

Example of the API errors when it’s high. I was on 8 during that test config and had a few test files going.


Also, if it’s too high, you end up waiting for worker 10 to give you a chunk back. It depends on your CPU, internet, etc. - there’s a sweet spot of workers for your setup, and higher is not always better.

Ahh, I think I’ve experienced that - every now and then a file takes an eternity to launch. Going back to 6 until I have more time to test.

Thanks