Event subscription system


I thought of this idea while examining my rclone setup.

Right now I use rclone as the backend for my Plex media storage, which works well.

My original setup was the usual gdrive > cache > crypt chain. Recently, however, I discovered that not using the cache backend actually improves streaming speed by a lot; it used to stutter on a single 1080p stream.

That worked well until I remembered that I have a modified plex_rcs script that watches for cache expiry events to trigger a localized Plex scan on a folder. Without the cache backend the script broke down, as it can't really tell whether files were added or removed in order to trigger the scan.

So I naturally made two mount points, /media and /cache, and had plex_rcs watch /cache for changes and then trigger updates on /media. That also works reasonably well, but it's clunky: I have to use regexes to parse rclone's output, and sometimes it's difficult to filter for the right items.

So I came up with this idea.

Wouldn't it be possible to have rclone publish events via rc/ or, for example, Redis that describe changes to the file system? For example:

        {
            "event": "add",
            "metadata": {
                "type": "folder",
                "path": "/path/to/folder"
            }
        }
        {
            "event": "add",
            "metadata": {
                "type": "file",
                "mimetype": "video/mp4",
                "path": "/path/to/folder/file.mp4"
            }
        }
        {
            "event": "delete",
            "metadata": {
                "type": "folder",
                "path": "/path/to/folder/"
            }
        }

Then you would have a listener that actively listens for these events and processes them for a variety of purposes.
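As a minimal sketch of such a listener's decision logic, assuming events arrive as JSON strings shaped like the examples above (the event names and fields are part of this proposal, not an existing rclone API):

```python
import json
import os

# Events that should cause a Plex rescan (proposal's names, not rclone's).
SCAN_EVENTS = {"add", "delete"}

def should_trigger_scan(raw: str) -> bool:
    # Only add/delete events for folders or video files warrant a scan.
    event = json.loads(raw)
    if event.get("event") not in SCAN_EVENTS:
        return False
    meta = event.get("metadata", {})
    if meta.get("type") == "folder":
        return True
    return meta.get("mimetype", "").startswith("video/")

def scan_path(raw: str) -> str:
    # The directory Plex should rescan: the folder itself, or a file's parent.
    meta = json.loads(raw)["metadata"]
    path = meta["path"]
    if meta.get("type") == "folder":
        return path.rstrip("/")
    return os.path.dirname(path)
```

A real listener would wrap this in whatever delivery loop is chosen (a Redis SUBSCRIBE, an rc long-poll, etc.).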

If using rc/ or Redis isn't possible, we could have an extra config section related to events, for example:


Or it could be a command line flag:

rclone mount foo:/bar --events-subscribe add,delete --events-listener /usr/local/process
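The listener program named by the hypothetical --events-listener flag could be as simple as this sketch, which reads one JSON event per line on stdin (the flag and the delivery mechanism are proposals, not existing rclone features):

```python
#!/usr/bin/env python3
import json
import sys

def handle(event: dict) -> str:
    # Turn one event into a one-line summary; a real handler would
    # notify Plex or another consumer here instead.
    meta = event["metadata"]
    return f'{event["event"]} {meta["type"]} {meta["path"]}'

def main():
    # One JSON object per line on stdin, as the proposed flag might deliver.
    for line in sys.stdin:
        line = line.strip()
        if line:
            print(handle(json.loads(line)))

if __name__ == "__main__":
    main()
```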

Tried instead of plex_rcs?

It seems to be the same idea as plex_rcs, as it watches the cache for rclone changes, or at least that's what I got from reading the section about rclone and Google Drive.

Perhaps I misunderstand something here because I am unfamiliar with Plex, but it sounds like you are describing something for which you could leverage the polling system (which Gdrive supports).

The polling system, in case you are not familiar with it, polls Gdrive at intervals to ask what files have changed since the last poll timestamp and updates the VFS cache accordingly (the default polling rate is 1 min). Just to be explicit: the VFS cache is part of the mount, not the cache backend.

Assuming you are a single user who accesses your Gdrive and you rarely upload from more than one place at once, you can set:
--dir-cache-time 8760h
--attr-timeout 8700h
--poll-interval 10
(if you want faster polling intervals; I wouldn't recommend going lower though, as this will use 1% of your API quota on its own)

This will make the VFS cache effectively never expire, with updates coming ONLY via polling, which is very efficient. You can additionally use a simple script to pre-cache the drive when you start it. That makes scanning that drive for simple metadata changes incredibly quick.
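A pre-cache script can be little more than one call to the remote control API's vfs/refresh command with recursive=true. The sketch below assumes the mount was started with --rc and that the rc API listens on the default localhost:5572:

```python
import json
import urllib.request

# Default rc endpoint; adjust if you run --rc-addr with something else.
RC_URL = "http://localhost:5572"

def rc_request(command: str, **params) -> urllib.request.Request:
    # Build a POST request for the rclone rc API with a JSON body.
    return urllib.request.Request(
        url=f"{RC_URL}/{command}",
        data=json.dumps(params).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def precache() -> dict:
    # vfs/refresh with recursive=true walks the whole tree into the
    # VFS directory cache, making later scans near-instant.
    req = rc_request("vfs/refresh", recursive=True)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(precache())
```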

I use this system myself to make my Gdrives basically as snappy to search/scan and browse as a local hard drive.

In this case you can just let Plex scan on its own, as it should be blazing fast.

Or alternatively, if you want to be really fancy, you could see if you can find a way to send a notification to Plex via the RC any time polling returns non-empty results, to trigger an immediate rescan of the affected directories. That way the updating will be immediate and as small as possible, rather than depending on a simple timer to trigger (often redundant) scans.
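The Plex half of that notification can use Plex's HTTP API for a partial scan of a single directory. A sketch, where the server address, library section id, and token are placeholders for your own setup (the rclone-side trigger is the part that doesn't exist yet):

```python
import urllib.parse
import urllib.request

# Placeholders: substitute your own Plex address and token.
PLEX_URL = "http://localhost:32400"
PLEX_TOKEN = "YOUR-PLEX-TOKEN"

def partial_scan_url(section_id: int, changed_dir: str) -> str:
    # A GET on this URL asks Plex to rescan only changed_dir within
    # the given library section instead of the whole library.
    query = urllib.parse.urlencode(
        {"path": changed_dir, "X-Plex-Token": PLEX_TOKEN}
    )
    return f"{PLEX_URL}/library/sections/{section_id}/refresh?{query}"

def notify_plex(section_id: int, changed_dir: str) -> None:
    urllib.request.urlopen(partial_scan_url(section_id, changed_dir))
```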

While --dir-cache-time is generally safe to run long, a long --attr-timeout could technically result in corruption under very specific circumstances. These circumstances are very much avoidable for many use-cases, but you should understand them before you rely on this. Here are the circumstances that would all need to happen for this to be risky:

  • Files are being updated from 2 different access points (rclone instances or other programs accessing the Gdrive) at the same time
  • A file that access point B just modified is also modified by access point A
  • All of this happens before polling has a chance to register the change (so a faster poll rate helps mitigate this further)
  • The type of operation being done needs the file size to be accurate and is unable to discover the problem as part of the operation. Most can, but a couple might not under certain circumstances.

So TL;DR: a lot has to go wrong, and if the Gdrive is not a multi-user system it's pretty safe to do (and the performance benefits are well worth it). I would never recommend it for any multi-user setup though (where multiple participants are writing). If you have multiple computers on your network this is not a problem, as you can use the RC (or a simple network share of the mounted drive) to share a single access point over the network rather than having them all run their own instances. This sidesteps the problem entirely (I use this myself).

I hope this gave you some ideas. Let me know if you need more specific details on anything, but I didn't want to make this even longer than it already is :slight_smile:

Thank you for your ideas. My setup is really similar to what you described: I use a low poll interval on the cache mount to trigger change notifications, from which plex_rcs grabs the path/to/file and sends it to Plex for processing. I just thought having a native event system might be worthwhile for future plugins and systems.

Yes I agree it would. As I mentioned briefly it would allow the updating on Plex's side (or potentially any other application) to be done in a much smarter and more limited fashion - even if "dumb scans" would work too, just less efficiently.

You would probably need to make some minor update to the code to make this happen cleanly though. I'm sure there are already internal events that happen on polling. All you really need to do is expose that event through the RC so it can be picked up and used.

If you have coding experience, feel free to try your hand at this (make a pull request on the relevant files and see if you can find a solution before requesting integration and NCW will review it).
Otherwise - open a feature-request issue for it and explain in as much detail as possible what you are trying to achieve and also link this topic.

Indeed, I will try to. My Golang experience is limited, but I'll try to come up with something xD

NCW is super nice and willing to help you out when you get stuck on something specific. I would heartily recommend you give it a shot if you think you have any chance. Nick loves it when he doesn't have to do every part of the gruntwork himself =P

(I really need to learn Go myself so I can help out a bit with the simpler stuff that is keeping Nick from having more time for the complex things.) I have no Go experience (just C++ and Java mostly), so I probably can't help much with the actual coding, but feel free to summon me any time with a mention if you think I can assist with ideas or with explaining how the components interact (to the best of my admittedly limited knowledge).

But it supports both the cache and the vfs mounts. Should solve your issue.

Is that addon something that actually integrates with rclone in some way? Sorry, I have no familiarity with those systems myself. Just asking out of curiosity :slight_smile:

    type = crypt
    remote = gdrive:/gdrive/crypt
    filename_encryption = standard
    password = **snip**
    password2 = **snip**
/usr/bin/rclone mount gnocache: /mnt/media \
--allow-other \
--fast-list \
--buffer-size 128M \
--dir-cache-time 100h \
--attr-timeout 100h \
--syslog \
--log-level INFO \
--timeout 1h \
--umask 002 \
--poll-interval 15s \

This is my config and mount command. I use no cache backend, and no VFS file caching I think.

I use a modified version of this script to read changes and push them to Plex, so media is updated within a minute instead of waiting for scheduled scans.
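As an alternative to regex-scraping log output, such a script could diff two successive recursive listings from `rclone lsjson -R` (a real rclone command whose output includes Path and IsDir fields) into add/delete events shaped like the ones proposed earlier in this thread. A sketch:

```python
import json
import subprocess

def listing(remote: str):
    # One recursive listing of the remote as a list of dicts.
    out = subprocess.check_output(["rclone", "lsjson", "-R", remote])
    return json.loads(out)

def diff_events(before, after):
    # Index both listings by path; anything new is an "add",
    # anything missing is a "delete".
    old = {e["Path"]: e for e in before}
    new = {e["Path"]: e for e in after}
    events = []
    for path in new.keys() - old.keys():
        e = new[path]
        events.append({
            "event": "add",
            "metadata": {
                "type": "folder" if e.get("IsDir") else "file",
                "path": path,
            },
        })
    for path in old.keys() - new.keys():
        e = old[path]
        events.append({
            "event": "delete",
            "metadata": {
                "type": "folder" if e.get("IsDir") else "file",
                "path": path,
            },
        })
    return events
```

This polls rather than listens, so it is less elegant than a native event stream, but it needs no log parsing at all.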

The VFS is currently bundled with the mount, so if you mount, you use the VFS system.
Technically they are separate layers, but mount needs the VFS. The VFS does not necessarily need mount, but there is no way to run it separately right now because it was originally implemented specifically to make mount work. NCW has talked about possibly making this a standalone module in the future, allowing you to benefit from the caching functionality without needing an associated mount (and its disadvantages), but that's a topic for another day :slight_smile:

These are all specifically related to the VFS system.

It has multiple methods of operation actually:

  1. Directly polling drive for changes
  2. Sonarr/Radarr/Manual triggered file scans

Either of these will call a cache/refresh followed by a vfs/refresh if the file doesn't exist yet in the mount (in case it hasn't already been picked up by rclone's own polling).

If the drive polling option is used, it also supports decoding the encrypted names received from Drive, via rc calls to rclone, so the actual file name can be checked.

Ah, so it's more like an indirect cooperation rather than an integration.
Sounds like that would work, but maybe be a little less efficient than hooking directly onto the polling status returns in rclone.

Depends on the use case really.

If you rely on plex_autoscan and the Sonarr/Radarr triggered scans you also have some additional info like the metadata id of the item in TVDB, TMDB, IMDB etc. With these you could add functionality like notifications which contain links to the respective trailers, reviews etc. You also know what the exact quality of the file is, i.e. WebDL, Bluray, Remux etc in case you were waiting for a specific quality.

The rclone method is definitely more elegant and more efficient but it lacks the above info.

And obviously, plex_autoscan is already available whereas the rclone integration still needs to be coded.


Thanks for your contributions. I went ahead and made my own script instead of using plex_autoscan/plex_rcs

If anyone wants to use it, it's available via


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.