Local Database for Large Remotes

Hi All,
I'm curious whether the rclone team has recently given any thought to a local database for tracking changes, to improve sync performance on large remotes (200k+ files). Some major cloud providers, Microsoft and Google in particular, are rather stingy with IOPS on their business cloud storage offerings. In the case of OneDrive for Business, it permits about 50k checks before it kicks the client out for a few minutes, and then allows fewer than 50k on the next round.

If changes were tracked in a local database, and all the sync or copy checks were done locally, it would cut IOPS by hundreds of thousands on every operation. I know Nick said they started on such a feature many moons ago but never went back to finish it. Wouldn't it be a valuable addition to this amazing tool, or am I the only one who thinks so? I don't seem to find much info on this idea anywhere. Looking forward to your thoughts.

Jared

IMHO, the beauty of rclone is that it is simple and stateless.

Somewhere in the forum there is a script that runs rclone ls on the source and dest, loads the listings into a SQL database table, and from that generates a --files-from list. The sketch below shows the idea.
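I don't have the link handy, but it works roughly like this. This is my own rough sketch, not the actual forum script: the remote names, the sync_state.db / to_copy.txt filenames, and the size+modtime comparison are all placeholders, and it uses rclone lsf rather than plain ls so the fields are machine-readable.

```python
#!/usr/bin/env python3
"""Sketch: diff two rclone listings locally, emit a --files-from list.

Assumes rclone is on PATH. SRC/DST are placeholder remotes.
Listing format is path;size;modtime via lsf --format "pst".
"""
import sqlite3
import subprocess

SRC = "onedrive:docs"  # placeholder source remote
DST = "/backup"        # placeholder dest (local path or another remote)

def listing(remote):
    """Return {path: (size, modtime)} from one recursive lsf call."""
    out = subprocess.run(
        ["rclone", "lsf", "-R", "--files-only",
         "--format", "pst", "--separator", ";", remote],
        capture_output=True, text=True, check=True,
    ).stdout
    entries = {}
    for line in out.splitlines():
        # rsplit so paths containing ";" still parse correctly
        path, size, modtime = line.rsplit(";", 2)
        entries[path] = (size, modtime)
    return entries

def main():
    db = sqlite3.connect("sync_state.db")
    db.execute("CREATE TABLE IF NOT EXISTS files "
               "(side TEXT, path TEXT, size TEXT, modtime TEXT)")
    db.execute("DELETE FROM files")
    for side, remote in (("src", SRC), ("dst", DST)):
        db.executemany("INSERT INTO files VALUES (?, ?, ?, ?)",
                       [(side, p, s, t) for p, (s, t) in listing(remote).items()])
    db.commit()
    # Paths present on src but missing or different on dst need copying.
    rows = db.execute("""
        SELECT s.path FROM files s
        LEFT JOIN files d ON d.side = 'dst' AND d.path = s.path
        WHERE s.side = 'src' AND (d.path IS NULL
              OR d.size != s.size OR d.modtime != s.modtime)
    """).fetchall()
    with open("to_copy.txt", "w") as f:
        f.writelines(path + "\n" for (path,) in rows)
    print(f"{len(rows)} paths written to to_copy.txt")

if __name__ == "__main__":
    main()
```

You'd then run `rclone copy --files-from to_copy.txt onedrive:docs /backup --no-traverse` so the copy pass doesn't re-list the destination either. One caveat: modtime precision differs between backends, so a real version should compare times within the remote's precision, or compare hashes instead (lsf's --format "psh").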

There's already a feature request for that:

vfs: option to cache metadata only · Issue #5123 · rclone/rclone
https://github.com/rclone/rclone/issues/5123

Feel free to add/help/etc.

Wow, there are some pretty sharp people out there. 'Tis a bit over my head to deploy, though.

Seems like it's kind of on the back burner :smirk: but let this be my humble addition to the list of people requesting this. Keep up the good work.

Jared
