Bash script/cronjob for automating rclone sync

I'm trying to set up a cronjob to automate rclone sync, but I'm running into issues with it correctly checking whether the script is already running. I found the following bash script that is supposed to handle this...

#!/bin/bash
if pidof -o %PPID -x "rclone-cron.sh"; then
exit 1
fi
rclone sync …
exit

In practice, this starts the script and runs the sync as expected. However, the check for whether the script is already running does not behave as expected...

cronjob
12 10 * * * /home/bk/Documents/rclone-cron.sh

bk@bkp-VM:~/Documents$ date
Wed Dec 25 10:12:14 EST 2019
bk@bkp-VM:~/Documents$ ps aux | grep rclone
bk     1640  0.0  0.0   4628   816 ?        Ss   10:12   0:00 /bin/sh -c /home/bk/Documents/rclone-cron.sh
bk     1641  0.0  0.0  19992  2968 ?        S    10:12   0:00 /bin/bash /home/bk/Documents/rclone-cron.sh
bk     1643 18.0  0.2 135844 48544 ?        Sl   10:12   0:00 rclone -q .....
bk     1651  0.0  0.0  21536  1060 pts/0    S+   10:12   0:00 grep --color=auto rclone

From the above, we can see that the cronjob has started. I can also clearly confirm from the container's CPU and network usage that data is being encrypted and sent to gdrive. However, I can still start another instance of this script...

bk@bkp-VM:~/Documents$ ./rclone-cron.sh
2019/12/25 10:12:34 ERROR : nextcloud/#recycle: error reading source directory: failed to read directory entry: readdirent: permission denied

(FYI, the nextcloud error is expected at the start of a successful sync session in my environment.)

Open a new terminal to test while the script is running...

bk@bkp-VM:~/Documents$
bk@bkp-VM:~/Documents$ cat rclone-cron.sh
#!/bin/bash
if pidof -o %PPID -x "rclone-cron.sh"; then
exit 1
fi
rclone -q .......
exit
bk@bkp-VM:~/Documents$ pidof -o %PPID -x "rclone-cron.sh"
bk@bkp-VM:~/Documents$ pidof
bk@bkp-VM:~/Documents$

Unless I'm misunderstanding the expected behavior of pidof, it doesn't seem to return anything, even when no arguments or flags are passed to it at all.
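For what it's worth, that silence is normal for pidof: when nothing matches, it prints nothing and reports the result only through its exit status. A minimal sketch, using a made-up script name that certainly isn't running:

```shell
#!/bin/bash
# pidof prints nothing when no process matches; the result is carried
# in the exit status (0 = at least one PID found, 1 = none found).
pidof -x "no-such-script-xyz.sh" > /dev/null
echo "exit status: $?"
```

So in a guard clause, the exit status, not the printed output, is the thing to branch on.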

I continued onward and found the following script, which I thought would handle my needs...

#!/bin/bash
dupe_script=$(ps -ef | grep "rclone-cron.sh" | grep -v grep | wc -l)

if [ ${dupe_script} -gt 2 ]; then
    echo "rclone sync script was already running."
    exit 0
fi

rclone -q ....

This appears to work when tested outside the cronjob, but not when the cronjob actually fires...

cronjob config
41 9 * * * /home/bk/Documents/rclone-cron.sh

bk@bkp-VM:~/Documents$ date
Wed Dec 25 09:42:00 EST 2019
bk@bkp-VM:~/Documents$ ps -ef | grep "rclone-cron.sh" | grep -v grep | wc -l
0
bk@bkp-VM:~$ ps aux | grep rsync
bk     1261  0.0  0.0  21536  1144 pts/0    S+   10:27   0:00 grep --color=auto rsync
bk@bkp-VM:~/Documents$ ./rclone-cron.sh
2019/12/25 09:42:16 ERROR : nextcloud/#recycle: error reading source directory: failed to read directory entry: readdirent: permission denied

I see no rclone process in ps aux, and the check statement within the script comes back as zero. Again, looking at the container's resources, it's clear that rclone is not running. The last line above is me manually running the script without a problem. Below, I open another terminal to test while that script is running manually...

bk@bkp-VM:~/Documents$ ps aux | grep rclone-cron
bk     1270  0.0  0.0  19992  3084 pts/0    S+   10:31   0:00 /bin/bash ./rclone-cron.sh
bk     1434  0.0  0.0  21536  1004 pts/1    S+   10:31   0:00 grep --color=auto rclone-cron
bk@bkp-VM:~/Documents$ ps -ef | grep "rclone-cron.sh" | grep -v grep | wc -l
1
bk@bkp-VM:~/Documents$
bk@bkp-VM:~/Documents$ ./rclone-cron.sh
rclone sync script was already running.

So this script behaves exactly as expected outside of a cronjob, but when run from a cronjob it never starts...?
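One thing worth knowing about ps-based self-counts is that they are sensitive to how the count is taken: a command substitution forks a subshell with the same command line as the script, which can itself match the grep. A sketch with a throwaway script name (the /tmp path is arbitrary):

```shell
#!/bin/bash
# Write a tiny script that counts its own name in ps output twice:
# once in a direct pipeline, and once inside $(...), whose forked
# subshell shares the script's command line and may match as well.
cat > /tmp/self-count-demo.sh <<'EOF'
#!/bin/bash
echo -n "direct pipeline: "
ps -ef | grep "self-count-demo.sh" | grep -v grep | wc -l
echo -n "inside substitution: "
n=$(ps -ef | grep "self-count-demo.sh" | grep -v grep | wc -l)
echo "$n"
EOF
chmod +x /tmp/self-count-demo.sh
/tmp/self-count-demo.sh
```

Depending on how the shell forks, the two counts can disagree, which is why a threshold that works interactively can misfire under cron, where the `/bin/sh -c` wrapper adds yet another matching process.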

OS: Ubuntu 18.04.3 LTS
rclone v1.50.2

  • os/arch: linux/amd64
  • go version: go1.13.4

Can you try using this instead?

if [[ -n "$(pidof -x "$(basename "$0")" -o %PPID)" ]]; then exit; fi

Another option is to use file locks.

#!/bin/bash
LOCKDIR="${HOME}/.cache"

# Take an exclusive, non-blocking lock or exit
function exlock() {
    exec {lock_fd}>"${LOCKDIR}/$(basename "$0").lock"
    if ! flock -nx "$lock_fd"; then
        exit 1
    fi
}

# Clean up the lock file and exit
function unlock() {
    rm "${LOCKDIR}/$(basename "$0").lock"
    [[ -n $1 ]] && exit "$1"
    exit
}

Then call exlock at the start of the cron job and unlock at the end.
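For reference, the same flock idea as one self-contained sketch (the lock path is a placeholder): because the lock lives on an open file descriptor, the kernel releases it automatically if the script crashes or is killed, so stale locks are not a concern.

```shell
#!/bin/bash
# Single-instance guard: open a lock file on a dedicated file
# descriptor and try to take an exclusive, non-blocking lock on it.
LOCKFILE="${TMPDIR:-/tmp}/$(basename "$0").lock"

exec {lock_fd}>"$LOCKFILE"
if ! flock -nx "$lock_fd"; then
    echo "already running, exiting"
    exit 1
fi

echo "lock acquired"
# ... the rclone sync command would run here ...
```

A second copy started while the first holds the lock hits the flock failure branch immediately instead of queuing up behind it.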


Hmm okay, good to have another option, thanks. After I posted the above, I fleshed out the script below, which appears to be working as intended. It simply checks whether the actual rclone sync command is running.

#!/bin/bash
dupe_script=$(ps -ef | grep "rclone" | grep -v grep | wc -l)

if [ ${dupe_script} -gt 0 ]; then
    echo "$(date +%F_%T): rclone service was started but already running." >> /logs
    exit 0
fi

echo "$(date +%F_%T): rclone service started." >> /logs
rclone -v sync /local/source gdrive:/dest/folder --min-age 15m --log-file=/rclone-upload.log
echo "$(date +%F_%T): rclone service has completed." >> /logs
exit

As for pidof, I'm pretty sure that it, not the syntax of the script, is the root of the issue. I have zero familiarity with it, but wouldn't we expect it to output something? Based on my initial OP, you can see that's not the case...

bk@bkp-VM:~/Documents$ pidof -o %PPID -x "rclone-cron.sh"
bk@bkp-VM:~/Documents$ pidof
bk@bkp-VM:~/Documents$

File locks are nice because they can be used as semaphores to coordinate multiple scripts, but if you don't think your setup will become more complex in the future, then pidof should work just fine.

Since you don't really need the actual PID, you can simply check pidof's exit code instead of checking its output, for example:

pidof -x "$(basename "$0")" >/dev/null 2>&1
if [[ $? == 0 ]]; then
    echo "$(basename "$0") is already running"
    exit 1
else
    echo "$(basename "$0") is not running"
fi

The danger of dupe_script=$(ps -ef | grep "rclone" | grep -v grep | wc -l) is that it will bail out if any process whose command line contains "rclone" is running, related or not.
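To make that concrete, here is a sketch using two canned ps-style lines (the process names are made up): a substring grep counts the guard script itself, while matching the command word exactly does not.

```shell
#!/bin/bash
# Two fake process listings: the guard script and a real transfer.
lines="/bin/bash ./rclone-cron.sh
rclone sync /local/source gdrive:/dest"

echo -n "substring matches: "
printf '%s\n' "$lines" | grep -c "rclone"       # counts both lines

echo -n "exact matches: "
printf '%s\n' "$lines" | awk '$1 == "rclone" { n++ } END { print n+0 }'
```

In a live check, `pgrep -x rclone` gives the same exact-name semantics against real processes without the grep pipeline.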

You also might want to consider adding a check to make sure that your script doesn't hang, for example, https://healthchecks.io/, if you are running it as a cron job.

After playing around a bit, this worked for me:

if [[ $(pidof -x "$0" | wc -w) -gt 2 ]]; then
    echo "$0 already running"
    exit
fi

The aforementioned exit-code solution always died because it found its own PID :upside_down_face:
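That self-match is easy to reproduce. In this sketch (throwaway /tmp script name), a bare exit-code check on pidof -x "$0" reports a running copy even though only one instance exists:

```shell
#!/bin/bash
# A script whose only job is to ask pidof about itself: the check
# succeeds because pidof -x sees the very shell running this script.
cat > /tmp/pidof-self-demo.sh <<'EOF'
#!/bin/bash
if pidof -x "$0" > /dev/null; then
    echo "match found (this script itself)"
else
    echo "no match"
fi
EOF
chmod +x /tmp/pidof-self-demo.sh
/tmp/pidof-self-demo.sh
```

Hence the -gt 2 threshold above: one PID for the script itself, one for the subshell doing the counting, and anything beyond that is a genuine second copy.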

Thanks for posting a working solution, @chezmojo. I'd just like to add the equivalent for those of us who run our copies/syncs inside screen sessions in order to capture transfer progress on stdout (piping to tee, in addition to rclone --log-file).

SCREEN_NAME="rclone-sync123"
export SCREEN_NAME

# exit if a screen session with this name is already running
if ! [[ $(screen -S "$SCREEN_NAME" -Q select .) ]]; then
    echo "screen is running, exiting..."
    exit
fi

Entire script:

#!/usr/bin/env bash
# RClone Config file
RCLONE_CONFIG=/root/.config/rclone/rclone.conf
SCREEN_NAME=$(basename "$0" | cut -d '.' -f 1)
export RCLONE_CONFIG
export SCREEN_NAME

# exit if already running
if ! [[ $(screen -S "$SCREEN_NAME" -Q select .) ]]; then
    echo "$SCREEN_NAME is running, exiting..."
    exit 1
fi

usage()
{
    echo "usage: rclone-sync-p0ds0smb.sh [-b | --bandwidth specify bandwidth as an integer | -h | --help shows this message]"
}

if [ "$1" != "" ]; then

    BANDWIDTH=

    while [ "$1" != "" ]; do
        case $1 in
            -b | --bandwidth )  shift
                                BANDWIDTH=$1
                                export BANDWIDTH
                                ;;
            -h | --help )       usage
                                exit 0
                                ;;
            * )                 usage
                                exit 1
        esac
        shift
    done

screen -dmS $SCREEN_NAME bash -c 'rclone sync --bwlimit "$BANDWIDTH"M --progress --transfers 8 --checkers 10 --tpslimit 10 --update --filter-from $HOME/.config/rclone/filter-p0ds0smb.txt --drive-acknowledge-abuse --drive-use-trash=true --log-level INFO --delete-during --log-file $HOME/.config/rclone/log/upload-gcrypt-gavilan.log /mnt/pool0/p0ds0smb gcrypt-gavilan:/p0ds0smb 2>&1 | tee $HOME/.config/rclone/log/gcrypt-gavilan.log'
else
screen -dmS $SCREEN_NAME bash -c 'rclone sync --bwlimit 4M --progress --transfers 8 --checkers 10 --tpslimit 10 --update --filter-from $HOME/.config/rclone/filter-p0ds0smb.txt --drive-acknowledge-abuse --drive-use-trash=true --log-level INFO --delete-during --log-file $HOME/.config/rclone/log/upload-gcrypt-gavilan.log /mnt/pool0/p0ds0smb gcrypt-gavilan:/p0ds0smb 2>&1 | tee $HOME/.config/rclone/log/gcrypt-gavilan.log'
fi