Hello, I regularly back up a 500 GB folder using the sync option.
This has been working for many months now.
However, the folder contains files that are being written to while the sync runs, so those transfers constantly fail, but that does not matter in my case.
At each save these files get a different size and name, so I cannot filter them out.
Is it possible for rclone to detect files that change during the transfer and automatically cancel it? Some are more than 100 GB, so it takes almost an hour before rclone realizes that the source and destination differ.
I am asking just in case, in the hope of avoiding wasted resources.
Yes, in fact it would be necessary to detect a change in the CRC, for example, to avoid waiting until the end of the transfer to discover that there is an error.
If that were possible, I think it would prevent file corruption.
I imagine a loop every 10 seconds that computes a fast CRC on the large file; that would let rclone move on to the next file while notifying the user that this one could not be copied because it was modified.
What I’m imagining is that during a transfer in the local backend, rclone would check the size and modification time of the file it is transferring every 10s (say). If the size or mod time differs from when the transfer was started then it will abort the transfer.
I think checking any kind of CRC or hash would be too expensive (think of the disk IO to read a 1GB file every 10s).