is there a way to have rclone use a database to track file changes, instead of comparing the source and destination on every sync? when using rclone sync with large drives (400,000+ files), i keep getting throttled for exceeded activity. (OneDrive Business is my remote)
a large part of what causes the problem, actually, is rclone's method of dealing with an error. while rclone is checking the source/destination for differences, if at 300,000 files of progress there is one service error (usually it's serviceunavailable), rclone will finish checking all 400,000 files, and then check them all again because of that ONE error. can't it just re-check the errored file? i quite frequently end up with 1,600,000/1,600,000 files checked. that's where i get throttled.
the error does retry; rclone will retry the sync 3 times. but that means it ends up re-checking all several hundred thousand files 3 times. that's what i can't have.
i'm using rclone 1.53.0. i'm not sure how to extract the error log; if you tell me the flag to use, i'd be more than happy to.
The --max-age 1h flag means that only files modified within the last hour will be considered for copying, and --no-traverse means rclone does not list the entire destination file tree.
Note that this uses rclone copy, not rclone sync, so it only copies new files; it doesn't delete any files on the destination.
This works very well for backing up recently changed files quickly. You still need to do a sync to propagate deletions but you can do it less often.
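Putting that together, a sketch of the frequent-copy / occasional-sync schedule might look like this (the local path and remote name here are just placeholders, not from the original thread):

```shell
# Frequent job (e.g. hourly): copy only files changed in the last hour,
# without listing the whole destination tree. This keeps the number of
# API calls to OneDrive small, which helps avoid throttling.
rclone copy /data onedrive:backup --max-age 1h --no-traverse

# Occasional job (e.g. nightly): a full sync to propagate deletions.
# This does the expensive full source/destination comparison, so run
# it far less often than the copy job.
rclone sync /data onedrive:backup
```

The choice of 1h for --max-age should match (or slightly exceed) how often the frequent job actually runs, so that no changed files fall between two runs.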
alright man. really glad to hear it's being addressed in the next couple of releases. as simple a utility as rclone is, it's the perfect backup solution for me.
in the meantime, i'll use the workaround you mentioned.