Rclone mount settings to pre-cache data and emulate Google Drive's "always available" option

What is the problem you are having with rclone?

I'm trying to use rclone mount options to ensure parts of my Google Drive are always available/cached locally. I can get it to pre-cache by simply performing a recursive cp of the path I want cached to somewhere like /tmp, then removing the copy afterwards.
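
For reference, the warm-up looks roughly like this (the mount point is the one from my mount command below; the /tmp path is just an example):

$ cp -r /home/user/gdrive/lich /tmp/cache-warm    # reading every file pulls it into the VFS cache
$ rm -rf /tmp/cache-warm                          # the local copy is no longer needed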

However, when the cache is invalidated by changes on the remote side, rclone recognizes this but doesn't pre-cache the changes. So on the next access there is a cache miss, and it takes time to download the updated file. I've dug through the documentation, and I don't think I've missed an option to pre-cache the data/update it on cache invalidation.

Is this use case simply not supported with rclone (yet?), or is there something I'm missing/a better way?

I'm doing this on both a raspberry pi, and on WSL.

Run the command 'rclone version' and share the full output of the command.

pi:
$ rclone version
rclone v1.58.1
- os/version: raspbian 11.3
- os/kernel: 5.15.32+ (armv6l)
- os/type: linux
- os/arch: arm
- go/version: go1.17.9
- go/linking: static
- go/tags: none

wsl:
$ rclone version
rclone v1.58.0
- os/version: debian kali-rolling (64 bit)
- os/kernel: 5.10.102.1-microsoft-standard-WSL2 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.8
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount gdrive:rclone-lich /home/user/gdrive/lich \
  --daemon \
  --cache-dir /home/user/.rclone-cache/lich \
  --vfs-cache-mode full \
  --vfs-case-insensitive \
  --dir-cache-time 1000h \
  --vfs-cache-max-age 1000h \
  --poll-interval 15s \
  --vfs-cache-poll-interval 15s

The rclone config contents with secrets removed.

$ rclone config show
[gdrive]
type = drive
client_id = [redacted]
client_secret = [redacted]
scope = drive
token = [redacted]
team_drive =

A log from the command with the -vv flag

I removed a ton of repeated entries about files not being cleaned up (vfs cache RemoveNotInUse) but left the ones regarding the file I was testing with.

Paste log here

There's no flag or feature to do that, as that would mean rclone downloading all the time. It works more like a traditional mount, so you only read what's being asked for.

You can just script reading the files, if that is the goal, based on the logs or on whether something changes.
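
If you go the scripting route, a minimal sketch might be (untested; the mount point is the one from the command above):

#!/bin/sh
# re-read everything under the mount so the VFS cache re-downloads it
find /home/user/gdrive/lich -type f -exec cat {} + > /dev/null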

Why would it download all the time? It would only be downloading when there was a notified change at the cloud provider.

So the general thought is you only want rclone to download what's being requested, so downloading changes on its own would be a bit strange.

Sorry, I was focusing more on my own use case. My mount changes a lot, so it would download all the time for me, which I would not want.

Feel free to log a feature request. It may sit, or if someone really wants to do it, it can get implemented, but the backlog is huge and it's an edge-type case imo (I could be wrong, and perhaps a Go person would love to do it; my opinion is just mine and means as much as yours :slight_smile: )

You could use rclone hashsum crc32 to do this also if you wanted.
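
A rough sketch of that idea (the old.sum/new.sum file names are illustrative; if the remote doesn't support crc32, another hash such as md5 works the same way):

$ rclone hashsum crc32 gdrive:rclone-lich > new.sum
$ diff old.sum new.sum    # lines that differ are files that changed and could be re-read
$ mv new.sum old.sum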

This isn't supported at the moment I'm afraid. All the change polling does is invalidate the cache. Ideally there would be a hook or an rclone rc command to show you recently invalidated files.

hi,

perhaps a script using rclone test changenotify could be used.

edit: did a quick test, and the logging does not seem to document the reason for the change,
as the entry is always the same and does not indicate Deleted, Created, etc.
NOTICE: "file.ext": 1
and in the rclone mount log
DEBUG : : changeNotify: relativePath="file.ext", type=1
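
For instance, a minimal sketch of such a script (untested; it assumes the NOTICE format above and the mount point from the original command, and re-reads each notified file to warm the cache again):

#!/bin/sh
# watch for change notifications and re-read changed files into the VFS cache
rclone test changenotify gdrive:rclone-lich --poll-interval 15s 2>&1 |
while read -r line; do
    # pull the quoted path out of lines like: NOTICE: "file.ext": 1
    file=$(printf '%s\n' "$line" | sed -n 's/.*NOTICE: "\(.*\)": [0-9].*/\1/p')
    # reads of deleted files or directories will simply fail, which is harmless here
    [ -n "$file" ] && cat "/home/user/gdrive/lich/$file" > /dev/null
done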

I've considered something like this in the past. I do not have a working prototype yet but I have some rough ideas.

The key is to use a union remote with the paths you want locally and the rest on the remote.

The process will look something like:

  • Download the paths you want to keep local to your local machine
  • Mount a union remote with ep-type policies (a config sketch follows this list)
    • There is not, to my knowledge, a way to say "always write to remoteA", but epff should always find local first. Or it may write to all, but that is fine!
    • (aside, @ncw, a "create" policy of a specified remote would be really useful)
  • Occasionally (or when done) push the local to the remote (make sure to copy, not sync!!!)
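
To make that concrete, here is a sketch of what the union config and mount might look like (the remote and local path names are illustrative, and the policy choices are just one possible arrangement; check the union docs before relying on them):

[gdrive-union]
type = union
upstreams = /home/user/local-cache gdrive:rclone-lich
action_policy = epff
create_policy = epff
search_policy = ff

$ rclone mount gdrive-union: /home/user/gdrive/lich --daemon --vfs-cache-mode full

With epff ("existing path, first found"), operations on paths that exist locally should hit the local upstream first.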

Again, there may still be kinks to work out in that plan. You may also need to pull files occasionally if the remote has updated. You can use the newest policy as well to make sure you then work on the latest.

Alternatively, if your structure and the files you want local are amenable to it, use bisync (or my tool, syncrclone, which has some differences, though note I am biased) to keep a top-level directory in sync and then mount the rest.
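
A sketch of the bisync variant (paths are illustrative; bisync shipped in v1.58, so both of your installs have it):

$ rclone bisync /home/user/local-copy gdrive:rclone-lich --resync    # first run establishes the baseline
$ rclone bisync /home/user/local-copy gdrive:rclone-lich             # subsequent runs keep both sides in sync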

I hope this helps

Do you mean create on only the specified remote? That could be a first-only policy, which would be very easy to write.

Yes. It would help with tasks like this. The general idea would enable something like a more persistent VFS cache in the use case I described above, or really any time you do a local + remote union so that things happen on the local side.

It is on my list of tasks for when I finally have time to learn golang but, alas, work and family keep pushing it down the todo list (which is okay!)

A first-only policy would be quite easy to create: you'd take an example policy and then, for all the methods, make it return the first item in the array.

I think it would be as simple as that.
