How Does Caching Work?

I’m looking at the new caching functionality, and I’ve set it up for use with Plex, but I have some questions that the documentation doesn’t answer.

  1. How is the cache built? Is it done on demand?
  2. Once the cache is built, how does it get updated if I add files by rclone move (or copy)? In other words, how does the cache pick up changes to the remote?
  3. Does the cache expire after the info_age regardless of whether it has been accessed within that time period?
  4. Does the cache expire if I unmount the remote? In other words, would the cache survive a system reboot?
  1. The cache remote wraps around another remote, just like the crypt remote does, and is filled with directory & file information, as well as file data, as it is requested. The directory & file metadata is saved in a (bolt) DB; file chunks are saved as separate chunk files in the filesystem.
  2. The cache only gets updated if you write/upload through the cache remote. External changes will only be picked up if you clean & rebuild the cache, or if the info_age expires for a requested subdirectory. Reading the Google Drive activity log is not yet implemented.
  3. All cached items expire once they are older than info_age. They have surely been accessed, otherwise they would not have been cached :wink:
  4. The cache state is persistent, as it is saved to the (bolt) DB, and will survive a remount/reboot. You can, however, start the mount with --cache-db-purge to manually reset the cache DB.
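To make the answers above concrete, here is a hypothetical cache setup and mount invocation. The remote names, paths, and option values are illustrative assumptions, not part of the original posts:

```shell
# Hypothetical rclone.conf (remote names "gdrive" and "gcache" are assumptions):
#
#   [gdrive]
#   type = drive
#
#   [gcache]
#   type = cache
#   remote = gdrive:
#   info_age = 24h

# Mount the cache remote. --cache-db-purge wipes the bolt DB on start,
# forcing directory/file metadata to be rebuilt on demand.
rclone mount --cache-db-purge gcache: /mnt/gcache
```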

One more thing: it is not possible to use an rclone cache remote concurrently more than once; the DB will be locked. (E.g. no rclone ls cacheremote: while a mount is active for cacheremote:.)


Ok. So, I have mounted the cached remote on the file system. That’s what Plex uses. The cache builds when the files on the remote are listed or accessed.

When Plex does a library scan, it will essentially cache the entire media folder and that will live for the length of info_age.

I currently use a union mount to keep recent media local and older media on the remote. If I move files that are cached to the remote and Plex does another scan before the info_age period expires, Plex will show those files as unavailable until the next time the cache is generated. Is there any way around this?

You could send a SIGHUP to the rclone process. This will purge all cached information, and newly added content will show up.
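A minimal sketch of sending that signal (the process match pattern is an assumption; adjust it to however your mount was started):

```shell
# Send SIGHUP to the running rclone mount process to purge its cached
# metadata. "rclone mount" is an assumed match pattern for pkill -f.
pkill -HUP -f "rclone mount"
```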

BUT: if you do this and rescan the whole library every time, you will most likely get a 24h ban. Instead of rescanning whole sections in Plex, you could use the command-line client to only scan subdirectories for new content, avoiding a ban.

Yeah. That’s what I do now. I have automation that manages all that. I was just exploring the caching option. It looks like it’s not going to work out though.

I’m interested in the Plex integration though. I’ll have to investigate that a bit more.

Can you give me an example of this, please?
Tnx

Sure, here you go:

sudo -u plex -i LD_LIBRARY_PATH='/usr/lib/plexmediaserver' '/usr/lib/plexmediaserver/Plex Media Scanner' --scan --refresh --section 2 --directory /home/plex/content/something/subdir/


I used this but got an error; any idea?

plex -i LD_LIBRARY_PATH='/usr/lib/plexmediaserver' '/usr/lib/plexmediaserver/Plex Media Scanner' --scan --refresh --section 2 --directory /media/google_crypt/crypt/
TP Lex Version 4.1a [April 2000], Copyright (c) 1990-2000 Albert Graef
invalid option -i

The command you want to run is not plex; that is the user it is supposed to run under, hence the sudo -u plex -i.

The actual command is: LD_LIBRARY_PATH="/usr/lib/plexmediaserver" "/usr/lib/plexmediaserver/Plex Media Scanner" --scan --refresh --section 2 --directory "/home/plex/content/something/subdir/"
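If you are not sure which number to pass to --section, the scanner can, as far as I know, list the library sections with their IDs, using the same environment as above:

```shell
# List Plex library sections and their IDs (run as the plex user,
# e.g. via "sudo -u plex -i" as shown earlier in the thread).
LD_LIBRARY_PATH="/usr/lib/plexmediaserver" \
  "/usr/lib/plexmediaserver/Plex Media Scanner" --list
```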

OK now it’s clear. It works fine.
Tnx

Until the activity log feature is there, this is what I do:

info_age is set to 1h

I use rsync to sync files from local to the rclone cache mounted folder instead of rclone move. (The cache mount is mounted read-write.)

For the mount command, I use --vfs-cache-mode writes to make the writes via rsync more reliable.

I've been doing this for two weeks with no issues. This way, the new files I rsync are automatically added to the cache.
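Sketched out, that workflow might look like this (the remote name and paths are placeholders, not from the original post):

```shell
# Placeholder remote name and paths; adjust for your setup.
# Mount read-write, with VFS write caching for more reliable rsync writes:
rclone mount --vfs-cache-mode writes gcache: /mnt/gcache &

# Then sync new local media into the mount. The writes go through the
# cache remote, so new files are added to the cache as they upload:
rsync -av /local/media/ /mnt/gcache/media/
```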

It will be cached completely on disk before upload, though. A lot of disk I/O; too much for my taste.

I can see that being an issue if you have spinning rust.

I make it a point never to use spinning rust :slight_smile: SSD or Nothing.

You could rsync directly to the cache, but if there's an error uploading, it won't retry.
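One way around the missing retry is a small wrapper that re-runs rsync on failure. This is a sketch; the retry count, sleep interval, and paths are placeholders:

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, sleeping briefly between attempts.
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 5
  done
}

# Example (placeholder paths):
# retry 3 rsync -av /local/media/ /mnt/gcache/media/
```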