I’m trying to set up rclone as a simple backup mechanism. So far it’s working very well. However, I’m sometimes surprised at how long rclone takes, even when there is nothing to do.
For example, I have a directory called src with about 13,000 files in it. The first rclone copy (to encrypted s3 storage) was of course very slow. As expected, subsequent backups were much faster, since very little had changed.
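In case the setup matters: Backup: is a crypt remote wrapping an s3 remote. My rclone.conf looks roughly like this (the backend name, bucket, and region below are placeholders, not my real values):

    [s3backend]
    type = s3
    provider = AWS
    region = us-east-1

    [Backup]
    type = crypt
    remote = s3backend:my-bucket/backups
    password = <obscured password>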
What confuses me is that subsequent runs of rclone copy take the same amount of time (about 10 minutes) regardless of what filters I give. For example:
rclone copy src Backup:src --max-age 10d
should be nearly instant, since (at the moment) absolutely no files have changed in the last 10 days. [For comparison’s sake, the equivalent find src -type f -mtime -10d takes less than a second to complete.]
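To make the comparison concrete, here is roughly how I’ve been timing both (using the shell’s time builtin; the numbers are approximate):

    time rclone copy src Backup:src --max-age 10d   # ~10 minutes, even with nothing new to copy
    time find src -type f -mtime -10d               # finishes in well under a second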
Any ideas about how to make incremental backups more efficient?