I'm trying to set up a cronjob to automate rclone sync, but I'm running into issues with the script appropriately checking whether it is already running. I found the following bash script that is supposed to handle this...
#!/bin/bash
if pidof -o %PPID -x "rclone-cron.sh"; then
exit 1
fi
rclone sync …
exit
In practice this starts the script and runs the sync as expected. However, the check for an already-running instance does not appear to work...
cronjob
12 10 * * * /home/bk/Documents/rclone-cron.sh
bk@bkp-VM:~/Documents$ date
Wed Dec 25 10:12:14 EST 2019
bk@bkp-VM:~/Documents$ ps aux | grep rclone
bk 1640 0.0 0.0 4628 816 ? Ss 10:12 0:00 /bin/sh -c /home/bk/Documents/rclone-cron.sh
bk 1641 0.0 0.0 19992 2968 ? S 10:12 0:00 /bin/bash /home/bk/Documents/rclone-cron.sh
bk 1643 18.0 0.2 135844 48544 ? Sl 10:12 0:00 rclone -q .....
bk 1651 0.0 0.0 21536 1060 pts/0 S+ 10:12 0:00 grep --color=auto rclone
From above, we can see that the cronjob has started. I can also clearly confirm from the container's CPU and network resources that data is being encrypted and sent to gdrive. However, I can start another instance of this script...
bk@bkp-VM:~/Documents$ ./rclone-cron.sh
2019/12/25 10:12:34 ERROR : nextcloud/#recycle: error reading source directory: failed to read directory entry: readdirent: permission denied
(FYI the error for nextcloud is expected at the start of a successful sync session in my environment).
Open a new terminal to test while script is running...
bk@bkp-VM:~/Documents$
bk@bkp-VM:~/Documents$ cat rclone-cron.sh
#!/bin/bash
if pidof -o %PPID -x "rclone-cron.sh"; then
exit 1
fi
rclone -q .......
exit
bk@bkp-VM:~/Documents$ pidof -o %PPID -x “rclone-cron.sh”
bk@bkp-VM:~/Documents$ pidof
bk@bkp-VM:~/Documents$
Unless I'm misunderstanding the expected behavior of pidof, it doesn't seem to return anything, even when no arguments or flags are passed to it.
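For reference, here is the kind of throwaway check I would expect to work: pidof -x also matches scripts by name, not just binaries, so it should print the PID of a running copy of the script. This is just a sketch; demo-cron.sh is a made-up name for illustration.

```shell
#!/bin/bash
# Create a throwaway script that stays alive briefly.
cat > /tmp/demo-cron.sh <<'EOF'
#!/bin/bash
sleep 3
EOF
chmod +x /tmp/demo-cron.sh

# Start it in the background, then look it up by name.
/tmp/demo-cron.sh &

# -x tells pidof to also match scripts run by an interpreter
# (the process shows up as "/bin/bash /tmp/demo-cron.sh" in ps).
pidof -x demo-cron.sh

wait
```

If pidof prints nothing here while the background copy is clearly running, the name argument is not matching the running process byte-for-byte.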
I continued on and found the following script, which I thought would handle my needs...
#!/bin/bash
dupe_script=$(ps -ef | grep "rclone-cron.sh" | grep -v grep | wc -l)
if [ ${dupe_script} -gt 2 ]; then
echo -e "rclone sync script was already running."
exit 0
fi
rclone -q ....
This appears to work when tested outside the cronjob, but not when the cronjob is actually called...
cronjob config
41 9 * * * /home/bk/Documents/rclone-cron.sh
bk@bkp-VM:~/Documents$ date
Wed Dec 25 09:42:00 EST 2019
bk@bkp-VM:~/Documents$ ps -ef | grep "rclone-cron.sh" | grep -v grep | wc -l
0
bk@bkp-VM:~$ ps aux | grep rsync
bk 1261 0.0 0.0 21536 1144 pts/0 S+ 10:27 0:00 grep --color=auto rsync
bk@bkp-VM:~/Documents$ ./rclone-cron.sh
2019/12/25 09:42:16 ERROR : nextcloud/#recycle: error reading source directory: failed to read directory entry: readdirent: permission denied
I see nothing in the ps aux output, and the check within the script comes back as zero. Again, looking at the container's resources, it's clear that rclone is not running. The last line above is me manually running the script without a problem. Below, I open another terminal to test while that script is running manually...
bk@bkp-VM:~/Documents$ ps aux | grep rclone-cron
bk 1270 0.0 0.0 19992 3084 pts/0 S+ 10:31 0:00 /bin/bash ./rclone-cron.sh
bk 1434 0.0 0.0 21536 1004 pts/1 S+ 10:31 0:00 grep --color=auto rclone-cron
bk@bkp-VM:~/Documents$ ps -ef | grep "rclone-cron.sh" | grep -v grep | wc -l
1
bk@bkp-VM:~/Documents$
bk@bkp-VM:~/Documents$ ./rclone-cron.sh
rclone sync script was already running.
So with this script, the behavior is exactly as expected outside of a cronjob, but when it's added to a cronjob it never starts...?
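One thing worth noting: counting ps | grep lines is fragile under cron, because cron launches the job through a /bin/sh -c wrapper (visible as PID 1640 in the earlier ps output), so the number of matching lines differs between a cron run and an interactive run. A lock-file approach with flock(1) avoids process-name matching entirely. This is only a sketch, assuming /tmp/rclone-cron.lock as the lock path; the rclone line is a placeholder for the real command.

```shell
#!/bin/bash
# Open (or create) the lock file on file descriptor 9.
exec 9>/tmp/rclone-cron.lock

# Try to take an exclusive lock without blocking; if another
# instance already holds it, bail out immediately.
if ! flock -n 9; then
    echo "rclone sync script was already running."
    exit 0
fi

# The lock is held for the lifetime of this shell and released
# automatically when the script exits, even on a crash.
rclone sync ...   # placeholder for the real rclone command
```

Because the lock is tied to the open file descriptor rather than a process name, it behaves the same whether the script is started by cron's /bin/sh wrapper or from an interactive shell.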
OS: Ubuntu 18.04.3 LTS
rclone v1.50.2
- os/arch: linux/amd64
- go version: go1.13.4