Not sure how involved this would be, but it would be nice to be able to mark a point-in-time snapshot of the cache as persistent, i.e. one that doesn't expire or get overwritten unless the underlying source object changes.
E.g. create a new mount with the VFS cache set to 100G. Run operations to generate cache; let's say it generates 20G. Run a command to freeze that cache so it's never expunged unless an underlying source object is deleted or changed. This could greatly reduce API hits to cloud providers for actions such as metadata scrapes or ffprobes.
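There's no freeze command in rclone today, but you can get close with the existing VFS flags by giving cached data a very long lifetime. A minimal sketch, assuming a remote named `gdrive:` and a mountpoint of `/mnt/media` (both placeholders):

```shell
# Cache file contents locally, cap the cache at 100G, and keep
# cached objects for ~10 years instead of the 1h default.
# rclone still refetches an entry when the source object changes.
rclone mount gdrive: /mnt/media \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 87600h \
  --daemon
```

Note that `--vfs-cache-max-size` eviction still applies once the cache goes over the cap, so this is an approximation, not a true pin.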
I really like the idea of a persistent cache, yet I'm really not sure how it would best be implemented transparently.
Ideally we could pass a file/config with the paths we need to keep persistent. This would let rclone automatically cache them and make sure they are valid every time the remote is mounted. The file would need to support both folders and files, so we could list the important folders and the individual files we need cached.
This would be amazing, as it would save a lot of writes for those of us using SSD drives while keeping local the small files that affect performance the most.
With a mergerfs / rclone setup, you can already basically do that.
You control which files move to the cloud via filters/excludes/etc., and everything else stays local on the mergerfs mount.
You can sync that local data to the cloud if you want to keep a copy of it, but I'd imagine it's just replaceable metadata anyway.
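For anyone trying to replicate that setup, a rough sketch of the idea (the paths, remote name, and filter values are all assumptions, not a recommendation):

```shell
# Pool a local disk and the rclone mount; 'ff' (first found) creates
# new files on the first branch listed, so everything lands locally.
mergerfs /data/local:/mnt/gdrive /mnt/media -o category.create=ff,cache.files=partial

# Later, push only the big media files to the cloud; small metadata
# files (images, .nfo, etc.) never leave the local branch.
rclone move /data/local gdrive:media \
  --min-size 50M \
  --exclude "*.nfo" --exclude "*.jpg" --exclude "*.png"
```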
I guess the difference here is, let's say the metadata is in the first 1 MB of the file, but the file is 1 GB. It would be nice to run a metadata scan to pull that 1 MB into cache for each file, then pin that cache, if that makes sense.
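As a stopgap, you can already warm exactly those header bytes through the mount, since `--vfs-cache-mode full` stores just the ranges that are actually read (the cache files are sparse). A sketch; the directory argument is whatever your mountpoint is:

```shell
# warm_cache: read the first 1 MiB of every file under a directory.
# On a mount running with --vfs-cache-mode full, this pulls only
# those header bytes into the local cache, not the whole file.
warm_cache() {
    find "$1" -type f | while IFS= read -r f; do
        # dd stops after one 1 MiB block, so large files are barely touched
        dd if="$f" of=/dev/null bs=1M count=1 2>/dev/null
    done
}
```

Usage would be something like `warm_cache /mnt/media/Movies` after mounting.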
So don't get me wrong, but I'm trying to figure out what problem is being solved, as it's a lot of heavy code lifting to make all that happen, and for what gain? With 1 billion API hits per day and the throttling that is done, you can't actually hit that number in a day even if you wanted to.
I assume your use case is streaming media like Plex/Emby/etc or is it something else?
Rclone works really well with Plex/etc. now, even before the newer cache mode, which helps reduce any latency-type issues that may occur.
Once things are scanned/analyzed, they are rarely hit again in Plex/Emby (I don't use Jellyfin, so I don't want to lump that in without knowing for sure).
So don't get me wrong, as I'm not trying to poo-poo the idea; I think it has great merit and a good use case. I'm just not sure the coding effort vs. reward is there, but who knows: if enough folks like it, someone may pick it up.
That is one scenario, yes. And I think I have a test case that works. Emby seems to be ultra-sensitive to the slightest change. E.g. I merged a second mount in with similar root paths but only a handful of new media, and Emby chose to see everything in the existing paths as new. I don't know if this is because the modify date on the root folder changed or what. So my thought was: is there a way to freeze that initial metadata in place?
So I ran a test with a fresh Emby, fresh VFS cache, and a fresh scan. Stopped the mount, tarred the cache dir, forklifted that to a new machine, did a fresh Emby install again, and initiated a scan. Monitoring NIC traffic, it seemed to be next to nothing compared to the first machine. So I think this will work for my use case: create a tar after a fresh scan as a baseline.
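For anyone wanting to repeat that, a sketch of the snapshot/restore steps. The cache path below is rclone's default on Linux (the remote name under it will differ), and note the companion `vfsMeta` directory next to `vfs` holds the cache bookkeeping, so you may want to snapshot its parent instead:

```shell
# Where rclone keeps VFS cache data by default on Linux.
CACHE_DIR="${CACHE_DIR:-$HOME/.cache/rclone/vfs}"
SNAPSHOT="${SNAPSHOT:-/tmp/vfs-cache-baseline.tar.gz}"

# Tar up the cache after unmounting (fusermount -u <mountpoint>)
# so nothing is being written while we snapshot.
snapshot_cache() {
    tar -czf "$SNAPSHOT" -C "$(dirname "$CACHE_DIR")" "$(basename "$CACHE_DIR")"
}

# Unpack the baseline on the new machine before starting the mount.
restore_cache() {
    tar -xzf "$SNAPSHOT" -C "$(dirname "$CACHE_DIR")"
}
```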
The larger issue I'm trying to tackle is the Emby 4.5.4 to 4.6.4 upgrade. For whatever reason this upgrade seems to wipe libraries, and the Emby devs said this is normal and a fresh scan is needed. I have a home Emby server and one in the cloud, so this may lighten the burden of the upgrade. I guess I just needed a sounding board to work it out in my head. Thank you as always.
I have gone back and forth with Emby and my only challenge is the HDR/direct play stuff on ATV as it just is not quite there.
I kept my metadata (images and stuff) separate from the media, as I didn't tick that box on my libraries.
Emby should only ffprobe a file if the modtime/size changes, which I don't see happen to me when I move from local to my Google Drive.
I did a full library scan with Emby for about 100TB in a few days which wasn't too bad. I just broke it down a bit and didn't have any issues.
I'm surprised to hear this (that moving from local to GDrive didn't force a rescan). The kicker is the 4.6.4 upgrade causes a full rescan of everything.
AppleTV does struggle with direct play. I've found enabling mpv on the atv plus beta client is decent, but still not perfect.