Cache crypt Sync


#1

Hello,
I think someone has already answered this question, but I couldn't find where.
I managed to mount my Google Drive crypt remote with cache, but I don't know how to sync it.
The only way I found is to unmount and remount with --db-purge, but that is not a good way.
What is the right way to sync?
I'm using Ubuntu.
Thanks


#2

What problem are you trying to solve by syncing it?


#3

Actually, maybe sync is not the right term.
I mounted my cache crypt, but it seems it never updates when something changes on Google Drive.
I mounted it like this:
rclone mount --cache-db-purge --allow-other gcache: /media/google_crypt/


#4

That is coming in a future release. For the moment, just visit every file in the filesystem, e.g. rclone size /media/google_crypt should do it.


#5

Thanks for the hint.
So can I set up a crontab entry to run that command every hour to keep it updated?


#6

Yes, but don't do it too often, otherwise you'll most likely get banned!


#7

What do you think about once an hour?


#8

I think that will probably be OK, since it is just doing a listing rather than reading parts of files like a Plex scan does.

I think you’ll have to try it and see!
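For reference, an hourly crontab entry for that listing might look like the sketch below (the mount path is the one from earlier in the thread; adjust it to yours):

```shell
# Hourly recursive listing so the cache revisits every file.
# Output goes to /dev/null since only the side effect matters.
0 * * * * rclone size /media/google_crypt > /dev/null 2>&1
```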


#9

Thanks, I'll report back if I get banned.


#10

Actually, if I run rclone size /media/google_crypt, it does not update the listing.
I have a new movie on Google Drive, but I cannot see it after that command, while on another server, where I use a plain rclone mount without cache, I can see it.
Any hint?


#11

@remus.bunduc what do you think? Should that work?


#12

If you want to resync it manually, use the beta provided in this issue: https://github.com/ncw/rclone/issues/1906
You can send a SIGHUP and cache will delete all the information it has cached.

But that's very drastic. It will delete everything stored in the cache (of course it doesn't touch the source).
There are basically just two types of information stored in cache:

  • file information expires on its own with --cache-info-age, so when a folder is first seen through cache, its listing (what it contains) is cached for whatever value you set there. After that, cache will request the information again from the source
  • chunks expire once --cache-chunk-total-size is reached
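For example, a mount that keeps listings fresh for 30 minutes and caps chunk storage at 10 GB might look like this (the values are illustrative, not recommendations):

```shell
rclone mount --allow-other \
  --cache-info-age 30m \
  --cache-chunk-total-size 10G \
  gcache: /media/google_crypt/
```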

#13

Thanks, but how do you send a SIGHUP?


#14

It depends on which OS you're on, but it's essentially a kill command.
Here's the Linux/Mac version: https://bash.cyberciti.biz/guide/Sending_signal_to_Processes
For any other OS out there, you can google it and find the answer, I'm sure.
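On Linux, assuming the mount runs as a process named rclone, that would be something like kill -HUP "$(pidof rclone)". Here is a self-contained sketch of the mechanism, using a stand-in child process instead of rclone:

```shell
#!/bin/sh
# Demo of the SIGHUP mechanism: a child process traps SIGHUP (the way
# rclone's cache does), and we signal it the way you'd signal rclone.
sh -c 'trap "echo cache-flushed; exit 0" HUP; sleep 30 >/dev/null & wait' &
child=$!
sleep 1                 # give the child time to install its trap
kill -HUP "$child"      # for rclone: kill -HUP "$(pidof rclone)"
wait "$child"           # prints: cache-flushed
```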


#15

@remus.bunduc

This effectively kills the usefulness of the cache remote to anyone who uses Plex and regularly adds media.

If you have TBs of data and you add one small file, you have to completely nuke the cache or your cache remote won’t see it.

Set the info age duration too low (so you can see the file) and you’re likely to be banned for rebuilding the cache too often.


#16

A SIGHUP is usually used for operations like these. It's a way to evict the entire cache if one needs that. And since it was a feature request, I don't see why it would be a bad thing. Clearly someone needed it, and it did come with a warning about what it does :wink:

For any other use case, the info age duration is the best way to keep files in sync remotely. The next step would be file change notifications from the cloud provider, which work like a push notification.


#17

But you understand the dilemma, right?

People like me, the OP, and I suspect many others modify their remotes many times a day, and many of us have really big Plex libraries.

99% of our file structure never changes.

So if we send a SIGHUP to evict the cache, then we'll have to do it multiple times a day, and it will likely cause a ban.

If we set the info_age duration to a very low value (so that new files we add multiple times a day appear quickly), then, because our file structure doesn't change, most of the cache will expire at the same time.

That's because Plex, or any other app that scans the tree, will do so, and the info age for all those files and directories will essentially be the same.

So I think when this gets released, you'll have people complaining that either they can't see new files uploaded to the remote (like the OP), or they're getting banned because they have to evict the cache too often / set info_age too low (to see the files), which may as well be like sending a SIGHUP anyway.


#18

Yeah, that's a problem for me too. I have a 10 TB Plex library with two thousand movies, and I can't really figure out a good way to add a new movie to my Plex library without having to rescan the whole drive.
Maybe a command to refresh only a specific folder would do it?


#19

Send a SIGHUP to rclone, then tell Plex to only scan the specific folder via command line/script.
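On a typical Linux install that could look something like the sketch below. The library section ID, movie path, and scanner binary location are all placeholders you would need to adjust for your own setup:

```shell
# 1. Flush the rclone cache
kill -HUP "$(pidof rclone)"

# 2. Ask Plex to scan only the folder that changed
#    (section 1 and the movie path are example values)
export LD_LIBRARY_PATH=/usr/lib/plexmediaserver
/usr/lib/plexmediaserver/Plex\ Media\ Scanner \
  --scan --refresh --section 1 \
  --directory "/media/google_crypt/Movies/Example Movie (2018)"
```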


#20

If you're running any other apps on the server (Sonarr, Radarr, CouchPotato, etc.), then the next time one of those apps schedules a refresh of its database, it will re-scan the whole folder tree.

So you're likely to be scanning and rebuilding the whole cache multiple times per day, which will probably result in a ban sooner or later (as well as slow down your system).

Check top while doing an ls -R on a new cache DB (while it's rebuilding), or just nuke your rclone cache and have Plex or Sonarr run either a library refresh or a series scan, and your system will likely start to crawl.