Plex Optimize Help

After reading through the various forum posts I found out that the Plex Optimization feature won't play ball with a vfs rclone mount (gdrive in my case) because of the way Plex handles optimization. ([SOLVED] Can't optimize plex on rclone mount)

It basically creates a .inProgress folder where the source media is stored (however, this is customizable to an extent, see below) and fills it with a lot of tiny temporary files, which it closes but still expects to be available for further writes. On a standard rclone mount those files get uploaded right away, and Plex then fails to create the final .mp4.

Solution for which I need help:

I can tell Plex where to store the optimized files. It needs to be a folder within your library, but not necessarily the folder of the original file.

Basically my idea is as follows:

  1. Keep my rclone vfs-cache-mode writes mount for all original media files (gdrive/media)
  2. Create a second folder on gdrive (gdrive/optimized).
  3. Mount that folder through a second rclone mount, where this second mount:
    a) Is the folder Plex is told to optimize to
    b) Is told to keep any writes to this folder local for 10 hours and only upload them afterwards
    c) Still allows Plex read access to the optimized versions it expects to find there

Could it be as simple as setting up a second rclone mount with a cache, specifying a wait time of say 10 hours, which should be enough for any optimization to finish? Or am I missing something? Any better ideas?

Best regards

You could probably do that, but it sounds a bit more complicated than you need. Also, can you tell Plex to use folders that are outside of the main directory for optimization? (I remember trying that, but without success)

What about mounting a union remote and using a script to upload files that are 10+ hours old?
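
Something like this is what I mean - an untested sketch where /mnt/local-writes and gdrive:Media are just placeholder names, and note that --min-age goes by modification time rather than last access:

```bash
# Untested sketch: push anything in the local write area that is 10+ hours
# old up to the cloud, then clean up the empty directories left behind.
# "/mnt/local-writes" and "gdrive:Media" are placeholders, not your paths.
rclone move /mnt/local-writes gdrive:Media \
  --min-age 10h \
  --delete-empty-src-dirs
```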

The key here is - as you've correctly identified - that frequently accessed temporary and unfinished files perform badly, or may even fail entirely, if they get sent out to the cloud straight away. Best case, they get ping-ponged back and forth multiple times, which is incredibly slow and inefficient. Worst case, it jams the software that is using the temp files and causes it to get confused and error out.

This can often happen during the creation of "composite" files, as in video encoding, torrenting and a few other cases like that. We basically just need to make sure the software gets a little bit of time to finish working with its temporary files before we start uploading. The reason it fails here is probably that rclone grabs the files and starts uploading them while Plex is panicking because it can't modify those files anymore - eventually timing out and giving up.

If the cache had an upload delay (or even better, waited until a file had not been accessed for more than X minutes before uploading) then that would probably solve most of these problems. Maybe that is worth making an issue about? It may not be a hard thing to implement as an option. EDIT: Actually, now that I think about it, Nick has told me he has plans to implement "temp upload" for the VFS cache, which would solve the same issue. I don't know the timetable on that though.

As for existing workarounds, here are the ideal ones I know about:

  • If your software supports a "temporary directory" for such unfinished files then that is the ideal solution. In that case, just set the temp dir to be local and the software should move the file(s) to the cloud once they are actually done. For torrents this is an option in qBittorrent that I use (as do several other clients). It is also not uncommon for encoding programs to have some kind of "scratch disk" like this; even Photoshop has an option for it. I have no idea if Plex specifically has this option - but it is definitely worth checking, and worth remembering for similar problems you might encounter in the future.

  • Another option is to use union (rclone) or mergerfs (Linux) such that your "uploaded" files actually get spooled into a local storage location even when using the mount, i.e. a cloud-read, local-write merged drive. From there you have full control of the upload via a recurring script. Either do your uploading overnight (if there is no activity then), or make your script check the access time of files and only upload those that have "gone cold" (a more robust and elegant solution). This also has the added benefit of making "uploads" appear incredibly fast, since they work at the speed of your local HDD. Once the data has arrived in your "uploads" folder you can stop caring about it, since you know your upload script will take care of it on its own (rough sketch below).
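
For illustration, a cloud-read / local-write merge plus the upload job could look roughly like this. The branch paths, mount points and mergerfs options are examples rather than a tuned recommendation, and the upload step is the same kind of --min-age script as sketched above:

```bash
# Example only: merge a local write area over a read-mostly rclone mount.
# New files land on /mnt/local (the first branch), existing cloud files are
# read through /mnt/gdrive, and the apps only ever see /mnt/merged.
mergerfs -o category.create=ff,cache.files=partial,dropcacheonclose=true \
  /mnt/local:/mnt/gdrive /mnt/merged

# Recurring job (cron or similar): upload files that have "gone cold".
rclone move /mnt/local gdrive:Media --min-age 10h --delete-empty-src-dirs
```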

mergerfs works great for this as-is. A similar union setup will work but still has a couple of annoying limitations (you basically have to perform any deletions or moves on a separate remote). A few others and I have set up a fairly detailed issue on this, and Nick has said he will take a serious look at it for one of the next versions after 1.50 - so this will very likely improve quite soon. After that happens, union should have most of the important functions you can get from mergerfs (except a few minor things like hard-linking support, which is not directly relevant to this use case).

Hopefully that gave you some good ideas on how to proceed. Let me know if you need more specific info about one of the suggested options once you decide what you would like to try.

Thanks a lot for your thoughts and suggestions!

Unfortunately Plex cannot store those optimizations temporarily on a local drive and move them afterwards. Even a manual move after creation is not possible, as it "breaks" the Plex database. Plex basically creates the optimized file at its designated location and expects it to stay there; the path is stored in Plex's SQLite database, and if the file is moved, Plex won't find it.

You can tell Plex where to store the optimized file (independent of the location of the original file), as long as the folder you designate is included in the Plex library folders.

The Union FS solution suggested by @thestigma sounds great; however, it is not ideal for my purpose, because I still want some files to upload instantly. I basically have the following workflow:

Mac Mini with Sab, Sonarr, Radarr and rclone.
rclone mounts my gdrive root to /Volumes/gdrive.
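
The mount itself is roughly this (flags illustrative rather than my exact command):

```bash
# Roughly my current mount (flags illustrative, not the exact command):
rclone mount gdrive: /Volumes/gdrive \
  --vfs-cache-mode writes \
  --allow-other    # so the SMB share to the PMS machine can read the mount
```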

  1. Sab downloads (temporary folder and final folder on local disk),
  2. a SABnzbd post-processing script moves the completed, extracted etc. files to gdrive/UploadDump,
  3. Radarr or Sonarr respectively watch gdrive/UploadDump, move the files to their final destination (gdrive/Media) and notify Plex to initiate a library refresh,
  4. PMS on a different machine on the same network has access to gdrive/Media over SMB (I know it is not optimal, but it works reliably).

I still want the instant upload from SABnzbd after it finishes its downloads, so new stuff gets picked up shortly after the downloads complete (upload speed is roughly 500 Mbit, so speed is quite OK). Therefore, combining a mergerfs "local upload dump" with scripted, delayed general uploads does not seem optimal.

I basically need only one folder on gdrive with delayed uploads, and thought that might be possible to implement via a second mount, reworking the folder structure. E.g. rclone mount 1 has gdrive/Media as its root; gdrive/Media contains all the media files plus the upload dump. Mount 1, at mount location /Volumes/gdrive, gets shared over SMB to PMS.

rclone mount 2 points to gdrive/delayedupload as its root; gdrive/delayedupload stores all the optimized versions, and PMS also has access to this mount over SMB.
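
So, as a rough sketch, the layout I have in mind would be something like this (mount points are just examples):

```bash
# Mount 1: all media plus the upload dump; shared to PMS over SMB.
rclone mount gdrive:Media /Volumes/gdrive --vfs-cache-mode writes

# Mount 2: only the optimized versions; also shared to PMS over SMB.
# Uploads here should be held back locally - that is what the flags
# quoted below are meant to achieve.
rclone mount gdrive:delayedupload /Volumes/delayedupload
```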

Would this work with the following parameters for rclone mount 2,

--cache-tmp-upload-path=~/rclone/tmp_upload
--cache-tmp-wait-time=600m

assuming all optimizations finish within 10 hours? Or is there maybe even a way to do this with a VFS-cache mount?

Best regards

Oh yes, it is totally doable to have only a specific folder structure be a delayed system, while everything else goes straight to the cloud. You could make this appear as one drive using union or mergerfs.

Or, even simpler, you could have two full remotes: one that uses a delayed upload via local storage as described, and another that goes straight to the cloud. They can both give access to the same files - you just use whichever is appropriate, depending on whether a given app creates these sorts of temporary files (torrents, Plex Optimize etc.).

The filtering system may also be useful in guiding upload scripts, as you can designate them to ignore any specific filename patterns (assuming the temp files are named in some specific way, which they often are).
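
For example, something along these lines - the .inProgress pattern is only a guess based on what Plex seems to create, so adjust it to whatever the temp files are actually named:

```bash
# Example: skip Plex's in-progress working folders when uploading.
# The exclude pattern is a guess - check what the temp files are really called.
rclone move /mnt/local gdrive:Media \
  --min-age 10h \
  --exclude "*.inProgress/**" \
  --delete-empty-src-dirs
```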

Be aware that these parameters are for the cache backend, not the VFS cache.
That said, this should probably work OK. I haven't tested it specifically with temporary files, but since files remain accessible locally while they sit in the temp folder, it should work in theory.
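
If you go the cache-backend route, the whole thing would look roughly like this (untested; "gcache" is just a placeholder name for the cache remote you would create, wrapping whatever Drive folder should get the delayed uploads):

```bash
# Untested sketch. "gcache" is a placeholder cache remote wrapping the
# Drive folder that should get delayed uploads.
rclone config create gcache cache remote gdrive:delayedupload

# Mount the cache remote; writes land in the local tmp-upload path and are
# only pushed to the cloud after waiting there for 10 hours.
rclone mount gcache: /Volumes/delayedupload \
  --cache-tmp-upload-path ~/rclone/tmp_upload \
  --cache-tmp-wait-time 600m
```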

Would this be possible to do with just the VFS cache? Yes, no doubt - but it would need an option to delay uploads until they have not been accessed for X minutes. That would solve it most easily and directly, without any need for a more complicated workaround. I know this is planned and have discussed such features with Nick at a high level, but I don't know if a real timeframe has been decided for it yet. So many good features to implement, so little time :wink:

This change probably isn't too difficult in essence, but it might be the sort of thing that makes the most sense to implement after some more basic planned VFS improvements are ready - like cache persistence and cache metadata.

@ncw Do you want to comment in terms of your plans in the next few versions?
