Having used rclone for ages I feel a bit dumb asking this question. But I searched through prior posts and couldn't find this particular issue addressed. If it has already been answered, a link to the post would be much appreciated.
If I run the following commands, is there any way to get rclone to cache the directory between the first and second command without a mount being present?
rclone ls remote: --include aaa*
rclone ls remote: --include bbb*
Most of the documentation related to persistent caches seems to be for vfs/cache mounts.
To give a technical answer here - currently it is not possible to run the VFS layer (which handles the VFS cache) without a mount. They are considered to be part of the same system.
However... I have discussed this with Nick before, and in principle it should not be a problem to decouple the VFS from the mount so that you can use the VFS layer without a mount (the mount will still require the VFS layer by necessity). This would have a lot of benefits for advanced users, so I understand this is the eventual goal.
However, it will take some work-intensive restructuring, so some patience is likely needed.
In your case specifically, the practical solutions available to you are basically what Animosity said. --fast-list (if supported by your remote type) will probably fix the issue for the most part, as it can be as much as 15x faster - while running the cache backend would actually cache the listings like you are asking for. I'm not sure I really recommend the cache backend at this point because it has issues and will eventually be phased out - but the caching of listings works well enough, I think.
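For reference, the two approaches might look something like this. The remote name "gcache" and the info_age value are just placeholders for illustration - adjust for your own setup:

```
# Faster listing (no persistence) - uses fewer, larger API calls where supported:
rclone ls remote: --include "aaa*" --fast-list

# Or wrap the remote in a cache backend in your rclone.conf, which
# persists directory listings between runs:
[gcache]
type = cache
remote = remote:
info_age = 1d        # how long cached file/directory metadata stays valid

# ...and then query through the cache remote instead:
rclone ls gcache: --include "aaa*"
```

The second run against gcache: within the info_age window should be served from the cached listing rather than hitting the backend again.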
One last solution - if you have a remote capable of "ChangeNotify", like Gdrive, is to use a scripted OS precache like I use. This basically builds a RAM cache of the whole file hierarchy in the OS cache at mount startup, kept up to date via ChangeNotify. That allows for some neat tricks, like searching and filtering files on a cloud drive at SSD speeds (well, RAM speeds really). I won't put a whole tutorial here because it is a bit more involved than just adding --fast-list to your commands, but if you are interested then throw me a PM and I can share my scripts for it and explain in more detail how it works.
Thank you @thestigma. The first part of your reply guesses exactly what I was getting at (non-mount, temporary query-related caches for repetitive queries that might be looking at the same directories). I have seen and use a variant of your excellent pre-cache scripts! Thank you for sharing those.
^^ Completely understood. Happy to wait patiently.
Also thank you @Animosity022 for your reply. I wrote the question in shorthand, hoping that someone would intuit that I was asking about non-backend --flags. I should have been a bit more specific (sorry, it was late at night when I popped off the question).
It does solve the specific example I cited, thank you.
I do use --fast-list, --checkers, and other options to speed up queries when needed. But in this case I was looking for a flag-based option for keeping a directory listing cached for some period of time, without creating a cache remote for each remote being queried. It was my fault for not being more specific - apologies.
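Since there is no built-in flag for this today, one stopgap is to cache the listing yourself at the shell level and apply the filters locally. A minimal sketch - the list_cached helper, the cache path, and the TTL are all my own invention for illustration, not rclone features; point LIST_CMD at your real remote:

```shell
#!/bin/sh
# Cache the output of a slow listing command and reuse it for TTL seconds.
CACHE=${CACHE:-/tmp/rclone-ls-cache.txt}
TTL=${TTL:-300}
LIST_CMD=${LIST_CMD:-"rclone lsf -R remote:"}

list_cached() {
  now=$(date +%s)
  # Refresh only when the cache file is missing or older than TTL seconds.
  if [ ! -f "$CACHE" ] || [ $(( now - $(stat -c %Y "$CACHE") )) -ge "$TTL" ]; then
    eval "$LIST_CMD" > "$CACHE"
  fi
  cat "$CACHE"
}

# Both filters now reuse one listing instead of hitting the remote twice:
# list_cached | grep '^aaa'
# list_cached | grep '^bbb'
```

Crude compared to a real VFS-level cache, but it covers the "two filtered listings of the same directory back to back" case without a mount or a cache remote.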
A separate-but-related question, which I can move to a new post if that is better:
When rclone ls remote: --include aaa* is run, is the aaa* filter applied in the backend itself (that is, does Google filter the results before returning them to rclone), or is the full listing returned to rclone and the --include filter applied locally?
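For what it's worth, one way to check this yourself is to watch the actual API traffic: if the listing requests rclone sends contain no name filter while every directory is still enumerated, the --include filtering must be happening client-side. The flags here are real rclone flags; "remote:" is a placeholder for your own remote:

```
rclone ls remote: --include "aaa*" -vv --dump headers
```

The -vv --dump headers output shows every HTTP request rclone makes, so you can inspect the request URLs and parameters directly.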
@ncw I think a lot of things like these could be fixed by separating the VFS layer from the mount. We've discussed this in the past, but I don't know how fresh it remains on the current agenda. It's an idea to keep in mind next time you decide to overhaul the VFS systems. Also looking forward to that async upload queue function; I think it will not only be convenient but also solve a lot of current issues.