I have a cron job that backs up a local folder to S3 using sync. Occasionally it fails because a file was being written at the time of the sync: "corrupted on transfer: sizes differ". This is expected from time to time during normal operation. What are my options for preventing rclone from failing in this scenario? Is there a way it can ignore errors on open files? Do I just bump up the retries?
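For context, the setup described might look something like this (the remote name, bucket, and paths are placeholders, not taken from the original post):

```shell
# Hypothetical crontab entry: sync /data to an S3 remote nightly at 02:00.
# The failure happens when a file under /data is still being written
# while the sync is in progress.
0 2 * * * /usr/bin/rclone sync /data s3:my-backup-bucket/data --log-file /var/log/rclone-backup.log
```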
You could try with:
`--min-age string   Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y`

Set it to 120m or more so it skips those files.
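A sketch of how that flag could be added to the sync command (bucket and paths are hypothetical):

```shell
# Skip any file modified within the last two hours, so files that are
# still being written are left for the next scheduled run.
rclone sync /data s3:my-backup-bucket/data --min-age 120m
```

The trade-off is that recently written files only get picked up once they are older than the threshold, so new data lags behind by up to that interval.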
I think this will become much less likely once I've merged #517, as rclone will sync the files much sooner after reading the directory.
At the moment it scans the whole directory before doing any transfers; with the above change it will scan and then copy in quick succession.
Have a go with the beta in that ticket if you like.
More retries is a reasonable solution too!
rclone could stat the file just before transferring it, but there would still be a window for things to go wrong.
Unfortunately it isn't easy to discover which files are open in a cross-platform way.
I assume `--min-age` checks the modification time?
I've merged the new sync method now, which should help if you try the latest beta.