File and Directory cache (no VFS) - quick backup only changed files on directories with >500000 files

Hi, since there is a big warning not to use the "cache" module, is there any alternative for regular copy/sync (not mount)?
I have a ONEDRIVE(personal)->UNION->CRYPT->COPY/SYNC workflow (on Windows) to back up my computer files, but I have more than half a million files. It takes hours to re-scan the file structure on the remote ONEDRIVE, and only a few files are actually updated each time.
The official OneDrive client doesn't do this; it has an internal cache (it re-scans everything from time to time, though) and somehow queries OneDrive to check whether the synced directory was changed externally (via the web, for example).
Can something similar be achieved in rclone? I tried everything related to cache, but it didn't create any cache files in the rclone temp directory.
It would be nice to cache everything (directory and file metadata, not content of course) when listing from the remote, and then have rclone use that cache until I delete it (only rclone would update the cache on upload/change of files; I'm not touching the remote files from any other program).

So if I'm not missing something and there is no such functionality, what do you suggest?
I'm thinking of using either the cache module or "misusing" mount.
Is it safe to use the cache module?!?
If I mount with only metadata caching (it should be possible according to other discussion posts, right?), then I could probably sync it using another instance of rclone (hoping it can be set up as online and not create buffered files in temp - it did the last time I played with mount). But I guess I would lose multi-threaded uploads and server-side moves (if I rename a directory, etc.), and the whole idea of using mount just to cache metadata doesn't feel right :)
I just want to quickly propagate updates (ideally with the --backup-dir option to keep versions on the remote) every hour or so with only the changed files, then delete the cache every month and do a full sync check to be sure.
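For the record, the hourly versioned update I have in mind would look something like this (a sketch only - "remote:", the local path, and the directory layout are placeholders, not anything I have configured yet):

```shell
# Hypothetical hourly job: copy changed files and keep replaced/deleted
# versions on the remote instead of overwriting them.
# --backup-dir moves any file that would be overwritten or deleted into
# the given directory on the remote, giving simple versioning.
# The $(date ...) substitution is Unix-style; on Windows a scheduled
# task would have to build the timestamp differently.
rclone copy C:/Data remote:current \
  --backup-dir remote:versions/$(date +%F-%H%M)
```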
Any suggestions?

One solution, assuming you run the sync every hour:

rclone copy --max-age 1h --no-traverse src remote:

--no-traverse will save you the time of scanning the whole destination every single time.

Then you could run a full rclone sync once in a while to make sure nothing is missing and that files deleted from the source are also deleted from the destination.
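Putting the two together, a sketch of the pair of jobs might look like this (src and remote: are placeholders, as in the command above; --fast-list is an extra suggestion, not something discussed so far):

```shell
# Hourly incremental: only consider source files modified in the last
# hour, and skip listing the destination entirely (--no-traverse).
rclone copy --max-age 1h --no-traverse src remote:

# Occasional full pass (e.g. monthly): re-checks everything and deletes
# destination files that no longer exist in the source. --fast-list
# uses fewer, larger listing calls, which can help on big remotes.
rclone sync src remote: --fast-list
```

The trade-off is that the hourly run trusts file modification times, so anything changed without its mtime being updated is only caught by the full pass.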

Caching listings the way you describe is not possible today, but there is some ongoing work to make it happen in the future.

In addition, for what you described - what I understand as cloud-based backup with versioning - I think there are much more efficient programs available than rclone.

Personally, I would recommend restic. For your setup with onedrive, union and crypt it will have to use rclone anyway, but only for file transfers. I use it myself, and with 500k files and only a small changing delta, a backup will probably take a minute - maybe a few - depending on how much data has to be transferred.
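Restic can use an existing rclone remote as its storage backend, so the basic workflow is roughly this (a sketch - "remote:backup" is a placeholder path on whatever rclone remote you have configured):

```shell
# One-time: create a restic repository on the rclone remote.
restic -r rclone:remote:backup init

# Each run: back up the directory; restic only uploads changed data,
# so repeat runs over 500k mostly-unchanged files are fast.
restic -r rclone:remote:backup backup C:/Data

# List the stored snapshots (your versions).
restic -r rclone:remote:backup snapshots
```

Note that restic keeps its own deduplicated, encrypted repository format, so you would not need the crypt remote layer for the backup data itself.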

Thank you, I will try --max-age 1d, it is a very good idea. I will also look at restic. Thanks.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.