I'm trying to use rclone mount options to ensure parts of my Google Drive are always available/cached locally. I can get it to pre-cache by simply performing a recursive cp of the path I want cached to somewhere like /tmp, then removing the copy afterwards.
However, when the cache is invalidated due to changes on the remote side, rclone recognizes this but doesn't pre-cache the changes. So the next access is a cache miss, and it takes time to download the updated file. I've dug through the documentation, and I don't think I've missed an option to pre-cache the data / refresh it on cache invalidation.
Is this use case simply not supported with rclone (yet?), or is there something I'm missing/a better way?
I'm doing this on both a raspberry pi, and on WSL.
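For the warming step, a recursive cp isn't strictly needed: just reading the files through the mount is enough to populate the VFS cache, and there's nothing to clean up afterwards. A minimal sketch (the mount path in the example is illustrative):

```shell
#!/bin/sh
# warm_cache DIR: read every regular file under DIR so the
# rclone VFS cache ends up holding a local copy of each one.
warm_cache() {
    find "$1" -type f -exec cat -- {} + > /dev/null
}

# Example (illustrative path; point it at a directory on your mount):
# warm_cache /mnt/gdrive/keep-local
```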
Run the command 'rclone version' and share the full output of the command.
I removed a ton of repeated entries about files not being cleaned up (vfs cache RemoveNotInUse), but left the ones for the file I was testing with.
There's no flag or feature to do that, as it would mean rclone downloading all the time. It works more like a traditional mount, so you only read what's being asked for.
You can just script reading the files, if that is the goal, based on the logs or on detecting that something changed.
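One way to sketch that scripting idea: follow the mount's debug log and re-read any file that a changeNotify line reports as changed. The log-line format assumed below is the changeNotify DEBUG line an rclone mount emits; the log and mount paths are examples.

```shell
#!/bin/sh
# Sketch: re-warm the VFS cache whenever the mount's debug log reports
# a remote-side change. Assumes log lines of the form:
#   DEBUG : : changeNotify: relativePath="file.ext", type=1

# Pull the relative path out of a changeNotify log line.
extract_rel() {
    rel=${1#*relativePath=\"}
    printf '%s\n' "${rel%%\"*}"
}

# Follow the log and re-read each changed file through the mount,
# which pulls the new version back into the VFS cache.
watch_and_rewarm() {
    log=$1 mount=$2
    tail -F "$log" | while IFS= read -r line; do
        case $line in
            *'changeNotify: relativePath="'*)
                rel=$(extract_rel "$line")
                [ -f "$mount/$rel" ] && cat -- "$mount/$rel" > /dev/null
                ;;
        esac
    done
}

# Example (illustrative paths):
# watch_and_rewarm /var/log/rclone-mount.log /mnt/gdrive
```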
So the general thought is that you only want rclone to download what it's being asked to use, so downloading changes on its own would be a bit strange.
Sorry, I was focusing more on that. My mount changes a lot, so it would download all the time for me, which I would not want.
Feel free to log a feature request. It may sit, or if someone really wants to do it, it can get implemented, but the backlog is huge and it's an edge case imo (I could be wrong, and perhaps a Go person would love to do it; my opinion is just mine and means as much as yours).
You could also use rclone hashsum crc32 to do this if you wanted.
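For example, one hedged sketch of that approach: snapshot the remote's hashes with rclone hashsum crc32, then diff two snapshots to list files that changed (the remote name and paths are examples, and the backend has to support crc32):

```shell
#!/bin/sh
# Sketch: find files that changed between two "rclone hashsum crc32"
# snapshots. Listing lines look like md5sum output: "<hash>  <path>".

# Print paths whose line appears only in the new listing
# (i.e. files that are new or whose hash changed).
changed_files() {
    sort "$1" > "${1}.sorted"
    sort "$2" > "${2}.sorted"
    comm -13 "${1}.sorted" "${2}.sorted" | sed 's/^[^ ]*  //'
    rm -f "${1}.sorted" "${2}.sorted"
}

# Example (remote and path are illustrative):
# rclone hashsum crc32 gdrive:keep-local > new.crc32
# changed_files old.crc32 new.crc32   # then re-read those files
```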
This isn't supported at the moment, I'm afraid. All the change polling does is invalidate the cache. Ideally there would be a hook or an rclone rc command to show you recently invalidated files.
edit: I did a quick test and the logging does not seem to record the reason for the change, as the entry is always the same and does not indicate Deleted, Created, etc.:
NOTICE: "file.ext": 1
and in the rclone mount log:
DEBUG : : changeNotify: relativePath="file.ext", type=1
I've considered something like this in the past. I do not have a working prototype yet but I have some rough ideas.
The key is to use a union remote with the paths you want locally and the rest on the remote.
The process will look something like:
1. Download the paths you want locally to your local machine.
2. Mount the union remote with ep-type policy rules. There is not, to my knowledge, a way to say "always write to remoteA", but epff should always find local first. Or it may write to all upstreams, but that is fine! (Aside, @ncw, a "create" policy targeting a specified remote would be really useful.)
3. Occasionally (or when done) push the local copy to the remote (make sure to copy, not sync!!!).
Again, there may still be kinks to work out in that plan. You may also need to pull files occasionally if the remote has updated. You can use the newest policy as well to make sure you then work on the latest.
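For reference, the union arrangement described above might look something like this in rclone.conf (the remote name and upstream paths are examples; epff and ff are existing union policies):

```ini
[gdrive-union]
type = union
# Local copy listed first; the ep/ff-style policies below prefer the
# first upstream where the path is found, so reads hit the local copy.
upstreams = /data/local-copy gdrive:
action_policy = epff
create_policy = epff
search_policy = ff
```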
Alternatively, if your structure and the files you want local are amenable to it, use bisync (or my tool, syncrclone, which has some differences, though note I am biased) to keep a top-level directory in sync, and then mount the rest.
Yes. It would help with tasks like this. The general idea would enable something like a more persistent VFS cache in the use case I described above, or really any time you do a local + remote union so that things happen on the local side.
It is on my list of tasks for when I finally have time to learn Go, but, alas, work and family keep pushing it down the todo list (which is okay!).
A first-only policy would be quite easy to create: you'd take an example policy and then, for all the methods, make it return the first item in the array.