I decided to give the cache backend a second try on version 1.46, and it immediately showed issues when I created a small library with only 4 TV shows. Plex became completely unresponsive, to the point where the app reported the server as down.
The mount wraps the remotes as follows:
crypt -> cache -> gdrive
Here is the config, with personal info redacted:
```
[gd10]
type = drive
scope = drive
service_account_file = /root/10.json
team_drive = xxxxx

[cd10]
type = crypt
remote = cache_gd10:
filename_encryption = obfuscate
directory_name_encryption = true
password = xxxx
password2 = xxx-xxxx

[cache_gd10]
type = cache
remote = gd10:
plex_url = http://127.0.0.1:32400
plex_username = xxxxx
plex_password = xxxxx
chunk_size = 5M
chunk_total_size = 4000G
```
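For reference, the mount is created with something like the command below. The mount point and flags are placeholders from my description of the setup, not values taken from the config above:

```shell
# Mount the crypt remote; reads flow crypt -> cache -> gdrive.
# /mnt/media and the flag values are illustrative, not from the config above.
rclone mount cd10: /mnt/media \
  --allow-other \
  --dir-cache-time 72h
```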
The following message shows in /var/log/syslog:
```
message repeated 88 times: [ Sqlite3: Sleeping for 200ms to retry busy DB.]
```
Plex scanner log shows errors such as these:
```
Mar 22, 2019 16:57:38.787 [0x7f482eca9740] ERROR - Error issuing curl_easy_perform(handle): 28
Mar 22, 2019 16:57:38.787 [0x7f482eca9740] DEBUG - HTTP simulating 408 after curl timeout
Mar 22, 2019 16:57:38.787 [0x7f482eca9740] ERROR - HTTP 408 downloading url http://127.0.0.1:32400/library/changestamp
Mar 22, 2019 16:57:38.787 [0x7f482eca9740] ERROR - Exception inside transaction (inside=1) (../Library/MetadataItem.cpp:3353): Unable to allocate a changestamp from the server
```
These 408s are likely a consequence of the locked database.
This doesn’t happen at all with a VFS cache mount.
Once the scan finishes, the cache appears to work properly, but I can't have Plex locking up every time it scans.
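For comparison, the VFS cache mount that works for me bypasses the cache backend entirely and lets rclone's built-in VFS layer handle buffering. A minimal sketch, assuming a crypt remote (here called `cd10_vfs:`) that wraps `gd10:` directly; the flag values are illustrative, not a recommendation:

```shell
# Mount a crypt remote pointing straight at the drive remote.
# "cd10_vfs:" and /mnt/media are assumed names; flag values are illustrative.
rclone mount cd10_vfs: /mnt/media \
  --allow-other \
  --vfs-cache-mode writes \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 2G \
  --buffer-size 64M
```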