New index backend for checkers


In rclone sync or copy, the checkers do their work, but at times this can run longer than the transfers themselves.

E.g. an rclone sync over millions of files where only about 50 new files actually need to be transferred.

Would it be possible to create a new rclone backend for checkers, similar to hasher?

Role: to index the files in the base backend (e.g. S3), storing details like location/filename, size, timestamp, etc. The index could be stored in .cache/rclone/s3.index

Usage: used like hasher.
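To make the idea concrete, a config entry analogous to hasher's might look like the sketch below. Note that the `index` backend type and its options are purely hypothetical; only the hasher-style wrapping pattern is real:

```ini
# Hypothetical rclone.conf entry -- the "index" backend does not exist.
[s3indexed]
type = index          # proposed backend type (assumption)
remote = s3:mybucket  # base backend whose listing gets indexed
# The index file would live under ~/.cache/rclone/s3.index
```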

Advantage: if two index remotes are created, e.g. one for S3 and another for Dropbox:
rclone sync -Pv s3indexed: dbindexed:
The above command would quickly read both index files in .cache/rclone/*.index and look up new files in S3 to transfer to Dropbox.
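The core of that lookup could be a plain diff of the two cached indexes. A minimal sketch in Python, assuming a made-up index format of path -> (size, mtime) tuples (rclone defines no such format; the function names here are invented for illustration):

```python
# Sketch of an index-based checker: find files to transfer by comparing
# two cached index files instead of listing both remotes over the network.
# The index format (path -> (size, mtime)) is an assumption.

def load_index(entries):
    """Build a lookup table from (path, size, mtime) tuples."""
    return {path: (size, mtime) for path, size, mtime in entries}

def files_to_transfer(src_index, dst_index):
    """Return paths that are new on the source or differ from the destination."""
    return [path for path, meta in src_index.items()
            if dst_index.get(path) != meta]

# Example: b.txt changed on the source, new.txt is missing on the destination.
src = load_index([("a.txt", 10, 100), ("b.txt", 20, 200), ("new.txt", 5, 300)])
dst = load_index([("a.txt", 10, 100), ("b.txt", 20, 150)])
print(files_to_transfer(src, dst))  # -> ['b.txt', 'new.txt']
```

With indexes already on disk, this comparison is O(number of files) in memory, with no per-file API calls to either remote, which is where the speedup over normal checkers would come from.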

Comments? :slight_smile:


Seems like a good idea for my use case: syncing many files between Google and Dropbox daily.

There's already a feature to cache only metadata:

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.