I have been running into problems where my NAS crashes and shuts down.
I currently have an infinite loop running in an Ubuntu VM with the following command:
rclone -c -v sync /media/f secret:/Backups/VeeamZipBackups/ --backup-dir secret:/old/ --suffix="-date-$(date +"%Y-%m-%d-%H-%M-%S")" --copy-links --exclude-from /opt/rclone-exclude.txt --syslog
However, when the NAS crashes, the NFS mount on my VM cannot read the files, and rclone instead moves all files to the backup dir. I am wondering if there is any way to check access to a pre-specified file in the source and halt the move, since the files only appear to be missing because the source is not readable.
[quote=“mdhl, post:1, topic:7092”]
However, when NAS crashes nfs mount on my VM cannot read file and instead moves all files to backup-dir. [/quote]
If rclone received an error trying to read the file, then the correct thing would happen.
So are you saying that when the NAS crashes the nfs mount returns empty listings with no errors? That doesn’t sound right? Or is rclone just reading the empty mountpoint?
That is easy to do with a bit of shell scripting, something like
[ -e "/path/to/canaryfile" ] && rclone -c -v sync ...
When the NAS crashes, rclone is reading an empty mount point. The shell scripting would only work if the check is at the beginning of the sync. Will it work midway through a sync?
Appreciate your reply.
Hopefully reading from the directory mid-way through the sync will generate an error if it goes away - that is what should happen anyway.
Thank you Nick; between your suggestion and the --max-delete flag, I think I have solved my problem.
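For anyone landing here later, the two ideas from this thread combine into a wrapper loop along these lines. This is a sketch, not a tested setup: the canary filename `/media/f/.canary`, the `--max-delete 50` threshold, and the 300-second sleep are all assumptions you should adjust; the paths, remote name, and other flags are taken from the original command.

```shell
#!/bin/sh
# Loop forever, but only sync when the NFS mount looks healthy.
# /media/f/.canary is a hypothetical marker file created once on the NAS;
# if the mount drops or comes up empty, the file is absent and the sync
# is skipped, so nothing gets moved to --backup-dir.
# --max-delete 50 (threshold is an assumption) additionally aborts the
# sync if rclone would delete/move more than 50 files mid-run.
while true; do
    if [ -e /media/f/.canary ]; then
        rclone -c -v sync /media/f secret:/Backups/VeeamZipBackups/ \
            --backup-dir secret:/old/ \
            --suffix="-date-$(date +"%Y-%m-%d-%H-%M-%S")" \
            --copy-links \
            --exclude-from /opt/rclone-exclude.txt \
            --max-delete 50 \
            --syslog
    else
        echo "canary file missing - skipping sync" >&2
    fi
    sleep 300  # wait 5 minutes between passes
done
```

The `[ -e … ]` test guards against the empty-mountpoint case at the start of each pass, while `--max-delete` limits the damage if the mount vanishes partway through a sync.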