Cache crypt Sync

I tried to send the -SIGHUP command but nothing seems to happen.
rclone doesn’t seem to be killed, as the PID is always the same after
I run:
kill -SIGHUP PID
Is that right?

@fbrassin

Ya that’s right, or kill -1 PID.

But this won’t kill the process; it will send SIGHUP (signal hangup), which tells the cache backend to evict the items from the cache. So the PID will stay the same, but if you look in your cache folder, your cache.db file should be much smaller.
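If it helps, here’s a minimal sketch of the whole cycle, assuming a single rclone mount process and the default cache location (the pgrep usage and the ~/.cache/rclone path are my assumptions; adjust to your setup):

# find the rclone mount process and send it SIGHUP
PID=$(pgrep -x rclone)
kill -SIGHUP "$PID"    # equivalent to: kill -1 "$PID"

# the process keeps running; only the cache shrinks
ls -lh ~/.cache/rclone/cache-backend/*.db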

Perfect, it works.
Do you think it will also work this way?
killall -1 rclone

I don’t want to do it right now as I just sent a kill and I don’t want to get a ban.
:grinning:

Don’t do killall.

You can try pkill -1 rclone and that should work, as long as you don’t have multiple rclone processes going.

Nice,
as I want to put it in crontab to run once a day.
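For reference, a crontab entry along these lines should do it (the time of day and the pkill path are placeholders, and it assumes only one rclone process is running):

# crontab -e, then add a line like this to send SIGHUP to rclone at 04:00 every day
0 4 * * * /usr/bin/pkill -1 -x rclone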

Does anybody know if version 1.39 has the SIGHUP option?

Yes it does have SIGHUP.

Looks like I’m a little late to the cache party!

Instead of trying to “sync” fresh files, one simple idea that might work for some setups is to just exclude certain paths by mounting extra uncached remotes on top of the cached one, eg:

# all the media, cached
rclone mount cache:/media /mnt/media

# this path updates often, so use the original remote and bypass cache
rclone mount plain:/media/incoming /mnt/media/incoming --allow-non-empty

It’s not exactly ideal since server-side moves between the mounts won’t be possible; the upside is that those moves will be downloaded and written to cache in full, becoming immediately available to whatever other apps you have doing ban-happy scans.

Edit #1: Oh, and there’s the whole mount-writes-don’t-get-retries thing. YMMV.

Wouldn’t it be very slow to stream these plain files from Plex, because the cache wouldn’t work on them?

Unfortunately that’s a horrible idea, sorry @putty182

If you exclude from cache the dirs that change often, which are often TV/movie show folders, then you’re almost certain to get banned quickly, and as @amaklp mentioned, it’d be slow too.

There is another way to overcome the issue of adding material more quickly than the cache updates, which is to use OS union mounts (eg for Linux, either UnionFS or the older AUFS).

Both of these ‘filesystems’ overlay different directories / mounts to form a single file system.

For example with this rclone setup:
encrypted_cloud => plexdrive => crypt (/mnt/cloud)

Instead of pointing Plex to the decrypted mount (/mnt/cloud), use UnionFS to combine it with local storage (/mnt/local) to create a combined local and remote mount (/mnt/plex_me).

Store any new material in /mnt/local first and then use rclone at your leisure to copy it to the cloud (with encryption). At some point, delete it from /mnt/local, leaving only the copy in /mnt/cloud. Plex is none the wiser, as it just sees a single seamless copy at /mnt/plex_me.

This is good mitigation in case Plex goes crazy scanning new material, as it will hammer the local drive only and not the rclone mount.

This approach works regardless of rclone setup. It’ll work for the cache setup here as well as the plexdrive setup.
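As a rough sketch of that overlay (the unionfs-fuse syntax and the hypothetical crypt remote name “secret:” are my assumptions; substitute your own names and paths):

# writes land in /mnt/local (searched first); /mnt/cloud stays read-only underneath
unionfs-fuse -o cow /mnt/local=RW:/mnt/cloud=RO /mnt/plex_me

# upload new material to the cloud (encrypted via the crypt remote) whenever convenient
rclone copy /mnt/local secret:media

# after verifying the upload, delete the local copies; Plex keeps seeing them through /mnt/plex_me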

mergerfs is working nicely in this overlay-fs scenario. The symlinkify option comes in pretty handy.

I might not have explained clearly; the point is to keep library folders cached for as long as possible, so when apps scan through, reading a few chunks from every single file, they’ll be handled by the cache (and not cause you an API ban).

Instead, set up dedicated “incoming media” folders, keep them out of cache, and only use them for short-term storage. If you already have this kind of setup in place and just need to bypass cache for a specific folder so its newly added files show up sooner, just stack an extra uncached mount on top using --allow-non-empty.

This is only viable if you’re okay with the extra bandwidth impact. Files moved from the uncached folder to a cached one will be slow, since rclone will need to download and re-upload them instead of performing a server-side move. But when it does this, it’ll also keep a fresh copy in cache ready for the next Plex library scan to find (and ready for immediate playback).

A repeated library scan will not read any content from the files, just the directory structure and file metadata, such as modification time and file size.

Correct for most scans, but not when those scans find changes (ie: files with different timestamps or sizes).

You can test this out by clearing out your cache folder, and from a different machine replace a file with another that has the same name but a different quality (eg: upgrade from 480p to 720p). Filename stays the same, but both Plex and Sonarr will notice the timestamp change and load a little bit of the file to work out what the new quality is. If you also look in rclone’s cache-backend folder you’ll see a chunk called “0” with a last modified time matching when the scan occurred.

This happens even with all the relevant “Analyze video files” options disabled; they get ignored during imports.

IMO, Gdrive staff are likely less worried about the amount everyone’s storing and more worried about how often it gets requested, since that has a more immediate and unpredictable impact on their infrastructure. The mystery ban-hammer kicks in to keep things under control (for them, not us).

Asking for file metadata (files.list) is a much easier operation and only counts towards the well-documented quota.

What difference does it make? Plex will read from the new/changed file one time; all subsequent scans will not read the file again. If the file changes again, the cache will not help, as it gets invalidated.

It’s to help with this.

@putty182

That’s a lot of work for what appears to be very little benefit.

What I’ve started doing is mounting my remote as r/w and just doing an rclone move /mnt/local /mnt/remote, where /mnt/local is a unionfs mount.

This way you get the retries on /mnt/remote because it’s mounted as an rclone cache mount and uses the option --vfs-cache-mode writes.
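Roughly, that looks something like this (the remote name “gcache:” and the flags beyond --vfs-cache-mode writes are my assumptions, not from this thread):

# mount the cache remote read/write, with write caching so uploads get retried
rclone mount gcache:media /mnt/remote --allow-other --vfs-cache-mode writes &

# move anything staged in the local/unionfs branch into the mounted remote
rclone move /mnt/local /mnt/remote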

This has been working flawlessly for me for about a week now.

This sounds like a great idea but how do I handle external changes to GDrive?

I have separate servers: one that downloads media and uploads it to the cloud, and one that is a dedicated Plex server. My library is over 50 TB, with loads of movies and TV shows.

One of these options is needed to handle external changes:

  • using the Google Drive log
  • signalling the rclone mount process to selectively invalidate a subdir (see the sketch below)

Without one of them, the cache will be completely purged and you will risk a ban.
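On the second point, if your rclone version includes the remote control API, something along these lines may work (the --rc flag on the mount and the example path are my assumptions, not confirmed in this thread):

# start the mount with the remote control interface enabled
rclone mount cache:/media /mnt/media --rc

# later, expire just the subdirectory that changed, including its cached chunks
rclone rc cache/expire remote=incoming/ withData=true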