Rclone cache with Plex leads to unresponsive server

#1

I decided to give the cache backend a second go on rclone 1.46, and it immediately showed issues when I created a small library with only 4 TV shows. Plex became totally unresponsive, to the point where the app showed the server as down.

The mount chains the following remotes:

crypt -> cache -> gdrive

Here they are, personal info obfuscated:

[gd10]
type = drive
scope = drive
service_account_file = /root/10.json
team_drive = xxxxx

[cd10]
type = crypt
remote = cache_gd10:
filename_encryption = obfuscate
directory_name_encryption = true
password = xxxx
password2 = xxx-xxxx

[cache_gd10]
type = cache
remote = gd10:
plex_url = http://127.0.0.1:32400
plex_username = xxxxx
plex_password = xxxxx
chunk_size = 5M
chunk_total_size = 4000G

The following message shows in /var/log/syslog:

message repeated 88 times: [ Sqlite3: Sleeping for 200ms to retry busy DB.]

Plex scanner log shows errors such as these:

Mar 22, 2019 16:57:38.787 [0x7f482eca9740] ERROR - Error issuing curl_easy_perform(handle): 28
Mar 22, 2019 16:57:38.787 [0x7f482eca9740] DEBUG - HTTP simulating 408 after curl timeout
Mar 22, 2019 16:57:38.787 [0x7f482eca9740] ERROR - HTTP 408 downloading url http://127.0.0.1:32400/library/changestamp
Mar 22, 2019 16:57:38.787 [0x7f482eca9740] ERROR - Exception inside transaction (inside=1) (../Library/MetadataItem.cpp:3353): Unable to allocate a changestamp from the server

These 408s are likely related to the locked DB.

This doesn’t happen at all with a VFS cache mount.
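
For comparison, the VFS-style mount was roughly along these lines (remote name and mountpoint are illustrative; the crypt points straight at the drive remote, with no cache layer in between):

# illustrative only, not my exact command
rclone mount crypt_direct: /mnt/media \
  --allow-other \
  --vfs-cache-mode writes \
  --dir-cache-time 72h \
  --buffer-size 64M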

After the scan finishes, the cache appears to work properly, but I can't have Plex locking up every time it scans.

Any ideas?

#2

Your chunk size is awfully small, so it's going to lead to a large number of API hits.

32M or 64M is a much better starting point.
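
For instance, your cache remote with just the chunk size raised and everything else left alone:

[cache_gd10]
type = cache
remote = gd10:
plex_url = http://127.0.0.1:32400
plex_username = xxxxx
plex_password = xxxxx
chunk_size = 32M
chunk_total_size = 4000G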

If you enable the Plex integration, it's going to use only 1 worker for scans, so they'll be slow.

What's your mount command? What do the rclone logs say while the scan is happening?
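
If logging isn't on yet, remounting with something like this will capture what the cache is doing during a scan (mountpoint and log path are placeholders; --cache-workers raises parallelism outside the Plex-throttled periods):

rclone mount cd10: /mnt/media \
  --allow-other \
  --cache-workers 8 \
  --log-file /var/log/rclone-cache.log \
  --log-level INFO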

#3

Thanks, I'll try that and share my mount command tomorrow.