Rclone 1.39 cache mount vs. plexdrive

Unless you want to modify a lot of your Sonarr/Radarr/Plex scripts to make them more ‘scan friendly’, just stick with plexdrive and rclone.

Until there is a persistent cache implementation, the cache will be rebuilt as entries age out, and you get quite a few API hits.

@mechanimal82 You are correct in your suspicion. Cache should wrap the encrypted remote, and crypt wraps the cache. So what you will see in your rclone cache back-end are the encrypted file names.
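For reference, a minimal rclone.conf sketch of that layering (remote names, paths, and values here are hypothetical; the ordering is what matters): the raw Google Drive remote at the bottom, cache wrapping it, and crypt wrapping the cache, which is why the cache back-end only ever sees encrypted names.

```ini
# gdrive (raw) -> gcache (cache wraps gdrive) -> gcrypt (crypt wraps the cache)
[gdrive]
type = drive
client_id = ...
client_secret = ...

[gcache]
type = cache
remote = gdrive:media
chunk_size = 10M
info_age = 24h

[gcrypt]
type = crypt
remote = gcache:
filename_encryption = standard
password = ...
```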

Ah… that’s a fair point. So because the cache is reset, Radarr, Sonarr, etc. will initiate a fresh scan, causing more hits on the Google API and potentially leading to a ban.

May have to rethink as you suggest

Thanks… So when the cache function gets a persistent cache I know what to do (I may set the remotes up ready).

Thanks for your confirmation!

Any eta on when that might happen?

With 1.40 we are almost there. Actually close enough, IMHO (mostly thanks to the Google Drive changes polling).

I don’t have any 4K media. I haven’t tried that. Sorry.

I believe that crypt should be able to handle ranges now, which should allow cache to properly read partial files through it. But that is only a belief; I want to test it properly before stating it as a certainty.

I’m also working on storing the read frequency of file data. That means the cleanup process will evict file data that is read less often before more “hot” data. What that translates to: Plex scans will result in a persistent cache, as long as the total cache size is enough for all the metadata scans on your libraries. Cache will then prefer to clean up data cached from playback.

But this will come after the 1.40 release.


That’s great! Having some sort of hotspot heuristic so the cache keeps the bulk of the Plex scanning activity as local as possible is a great idea!

This is a great idea!

I can’t wait for this to be implemented. How can we know how much storage we should allocate for the cache, though?
Any rough calculation based on rclone options/settings, or on the total file size in Google Drive?

Currently I allocate 25 GB for the total cache chunk size and it doesn’t seem to be enough at all.

I’ve been using 32 GB; realistically it’s enough for a recent movie or two plus two TV shows. Handy if you have multiple shared users who watch the latest hit show at once.

I reckon that for a decent cache (with the hot-data stuff) you’d want 500 GB to 1 TB of local space for a fairly seamless experience.

That’s too much. If you estimate the metadata that Plex needs at 20 MB per entry, then for 1,000 media entries (TV shows and movies alike) you could get away with 20 GB, plus 10 GB for playing stuff.
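As a back-of-the-envelope check, that sizing works out like this (all the figures below are the rough guesses from this post, not measurements):

```shell
# Rough cache sizing from the estimate above (all figures are assumptions):
entries=1000      # media entries (movies + TV shows) in the libraries
meta_mb=20        # assumed metadata read per entry during a Plex scan
playback_gb=10    # headroom for chunks cached while streaming
scan_gb=$(( entries * meta_mb / 1000 ))   # metadata footprint of a full scan
total_gb=$(( scan_gb + playback_gb ))
echo "suggested cache size: ${total_gb} GB"   # -> suggested cache size: 30 GB
```

Scale `entries` to your own library size; the 20 MB per entry is the loose guess this estimate hinges on.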

But the goal isn’t to cache everything indefinitely. For that you’re better off just copying everything locally, which defeats the purpose.

I personally have just 20 GB for over 1 TB of media. I could probably bump that up a bit, but I left a beta version with the frequent-data counts running for the past couple of days to see how Plex scans behave. Before that I was constantly seeing cleanups deleting data, which would suggest that 20 GB wasn’t enough for >1 TB, but at the same time there were playbacks shuffling the data too.

Yeah, understood. Personally I’m using 32 GB, merely emulating the common unionfs setups of old, where people would keep 1 TB of locally stored files that would slowly move to the backing store. Realistically I don’t see the need for anything bigger than 32 GB, especially with the range-request improvements for partial caching and the like.

I’ve been using 25GB in my production server for over 2 months now and consistently have 5-7 concurrent streams.

Never had a problem and I’m using the Plex integration.

What’s the actual configuration you are using? I wouldn’t mind testing it out again.


Hi,

so I just achieved a working config. However, I am also one of those people who reads from one server (now with the rclone cache instead of plexdrive) while putting things ON GDRIVE from a second server. How is this handled at the moment? From what I read here, this will simply not work, and I would have to restart the rclone mount, which is not only a PITA but will also rebuild the cache and hence might result in a ban. Is this assessment fair? If it is: is there an ETA for this getting any better?

I would LOVE to use the rclone cache over plexdrive, as the rclone cache allows multiple workers, and since I have two DSL lines I need multiple workers so that both are used concurrently…

I use it like you describe, but I use plex_autoscan to trigger a library scan on the plex VPS from the uploading VPS, just with a ten minute delay.

The latest betas will pick up changes on the remote, but if you’re doing it one-way, that is to say mounting on the Plex server as a --read-only rclone mount and making all the changes on the other server, it shouldn’t go too far out of sync. Worst case, make sure both mounts are running with the --rc flag and you can send cache expiries to the mount manually, for example rclone rc cache/expire remote="TV/Show/Season".
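Concretely, that workflow looks something like this (remote name, mount point, and show path are placeholders; note the rc command is spelled cache/expire):

```shell
# On the Plex server: read-only mount with the remote control API enabled.
# gcrypt: is a hypothetical crypt-over-cache remote; adjust to your config.
rclone mount gcrypt: /mnt/media --read-only --rc --rc-addr=localhost:5572 &

# After uploading from the other server, expire the stale directory listing
# so the mount re-reads it instead of waiting for the cache info age to lapse:
rclone rc --rc-addr=localhost:5572 cache/expire remote="TV/Show/Season"
```

The expiry only invalidates the listing for that path; the next access fetches a fresh one from Google Drive.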

Thanks. So is the cache persistent now? This thread suggests “no” while others suggest “yes”…

I had some stuttering in a test run this morning but cannot reproduce it. I’ll keep a close eye on it. So far I like this solution better than plexdrive (in theory), as it is one less tool to worry about and I can use both DSL lines.

How does the Plex integration work, though, especially with crypt enabled? What exactly does rclone ask Plex, what is the answer, and can it make use of the answer when using crypt (since the cache then works on the encrypted file structure, which Plex is not even aware of)?

If there are specific questions, ask away; this is a somewhat old thread, as 1.40 has since been released and there were a number of changes to the cache in 1.40.

With the Plex integration, it works by checking Plex to see if something is playing and, if so, turning up the cache workers for the file being played. It also makes scans and the like use only one worker, which helps make sure you don’t get a ban.
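In the cache remote’s section of rclone.conf, that integration is enabled roughly like this (URL, credentials, and remote name below are placeholders, not your actual values):

```ini
[gcache]
type = cache
remote = gdrive:media
workers = 4                        # used while Plex reports active playback
plex_url = http://localhost:32400  # where rclone polls for playing sessions
plex_username = you@example.com
plex_password = ...
```

With these set, rclone polls the Plex server for sessions; no playback means scans run with a single worker.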

As for the stuttering: without seeing a config and the bitrate and such, it’s hard to figure out what the issue might or might not be.

If you’re still on 1.39, it could be that the Plex integration no longer works. I would suggest updating to 1.40 (or, even better, one of the recent 1.40 betas).