Solved: Trying to create Rclone cron but it doesn't delete itself

Okay, so I want to sync files on a cron job every 2 minutes, but make it not sync if it’s already running.

I followed the advice on this thread: rClone Scheduling

However, unlike theirs, mine doesn’t seem to kill the already running processes, resulting in huge system loads and multiple instances of rclone sync…

My script is:

#!/bin/bash
if pidof -o %PPID -x "rclone-cron.sh"; then
    exit 1
fi
rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
exit

Anyone have experience with this?

I really haven’t used pidof before so not familiar with it.

I check for my script name and if running, I drop out.

#!/bin/bash
# RClone Config file
RCLONE_CONFIG=/opt/rclone/rclone.conf
export RCLONE_CONFIG

if pgrep -x "upload_cloud" > /dev/null
then
    # Move older local files to the cloud
    /usr/bin/rclone move /data/local/ gcrypt: --log-file /opt/rclone/logs/upload.log -v --drive-chunk-size 32M --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs
fi

When it runs, it doesn’t show up in the process list under the script name, just the rclone command itself for some reason…

23127 zcleaver   20   0  192M 89284 15440 S 28.3  1.1  3:07.42 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
20735 zcleaver   20   0  192M 88868 15352 S 21.9  1.1  3:18.59 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
21776 zcleaver   20   0  192M 90120 15448 S 21.2  1.1  3:13.46 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
23134 zcleaver   20   0  192M 89284 15440 D  7.1  1.1  0:11.57 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
23175 zcleaver   20   0  192M 89284 15440 D  6.4  1.1  0:12.80 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
23128 zcleaver   20   0  192M 89284 15440 S  5.7  1.1  0:30.26 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
20842 zcleaver   20   0  192M 88868 15352 S  5.0  1.1  0:11.35 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
21779 zcleaver   20   0  192M 90120 15448 D  5.0  1.1  0:13.67 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
22037 zcleaver   20   0  192M 90120 15448 S  5.0  1.1  0:10.26 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
23133 zcleaver   20   0  192M 89284 15440 S  5.0  1.1  0:10.12 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
20744 zcleaver   20   0  192M 88868 15352 S  4.2  1.1  0:31.67 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
20753 zcleaver   20   0  192M 88868 15352 D  4.2  1.1  0:10.90 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
20825 zcleaver   20   0  192M 88868 15352 D  4.2  1.1  0:09.45 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
21777 zcleaver   20   0  192M 90120 15448 S  4.2  1.1  0:30.97 rclone sync oldseed:files/sync /opt/syncfolder/FTPSync

With such a script there will be two processes running:

/bin/bash rclone-cron.sh

and

rclone sync oldseed:files/sync /opt/syncfolder/FTPSync

pidof may be ignoring the script itself (that is what -x is supposed to address: it makes pidof also match scripts run by an interpreter, not just binaries)?

I usually do things a bit more complex here:

I put the PID of the script process in /var/run/myscript.pid.
On entry, if I find this file, I read the PID from it and check whether that process is still running in /proc/. If the file is missing or the process is gone, I move on. Otherwise it’s already running, so I exit.

I find this approach more reliable.
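As a rough sketch, the PID-file approach described above might look like this. The rclone command is taken from earlier in the thread; the PID-file location is an assumption (use /var/run/ or /run/ in practice), and the payload is guarded so it is skipped when rclone is not installed. `kill -0` is used as a portable stand-in for poking around in /proc/ directly:

```shell
#!/bin/bash
# PID-file guard sketch -- the pid file path here is an assumption, adjust to taste.
PIDFILE="${TMPDIR:-/tmp}/rclone-cron.pid"

# If the file exists and the recorded process is still alive, another
# instance is running: bail out.
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "Already running (PID $(cat "$PIDFILE")), exiting."
    exit 1
fi

# Record our own PID and remove the file again when the script exits.
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT

# The actual work (skipped here if rclone is not installed).
if command -v rclone >/dev/null; then
    rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
fi
```

The `trap ... EXIT` is what keeps a stale PID file from blocking future runs after a normal exit; the `kill -0` check covers the case where the script was killed hard and the file was left behind.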

Oof, I had a bit of a logic error in mine.

pgrep checking for the script name always matches, because the script doing the check is itself running under that name.

I’ll fix that in mine as well, but you could check for the rclone sync process, create a lock file, or create a PID file. I’ll fix it and write something up once I get home if someone doesn’t beat me to it.
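For the lock-file option mentioned above, one common way to sketch it is with flock(1) from util-linux (the lock file path here is an assumption, and the rclone payload is skipped if rclone isn’t installed):

```shell
#!/bin/bash
# flock(1) guard sketch -- the lock file path is an assumption, adjust to taste.
LOCKFILE="${TMPDIR:-/tmp}/rclone-cron.lock"

# Open the lock file on fd 9 and try to take an exclusive lock without blocking.
exec 9>"$LOCKFILE"
if ! flock -n 9; then
    echo "Another instance is already running, exiting."
    exit 1
fi

# The lock is held for the lifetime of this shell and released
# automatically by the kernel when the script exits, however it exits.
if command -v rclone >/dev/null; then
    rclone sync oldseed:files/sync /opt/syncfolder/FTPSync
fi
```

The nice property over a hand-rolled PID file is that the kernel drops the lock even if the script is killed, so there is no stale-lock cleanup to worry about.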

Sorry, I’m very new to all of this scripting stuff… So what would I need in my script to ensure these don’t run more than once at a time?

While your solution was slightly helpful, and likely much more helpful to novices in these forums, I am a total noob (no shame lol) so I was unable to fully understand the processes behind it… I need some extra short bus help… :frowning:

How would I go about doing this…? (Noob explanation with help on commands and such please! <3 )

Is that the name of your script?

I finally got to testing more of what you were doing, and that looks like it should work.

Yes this is the correct name for the script. Here is where I am calling it from…
/opt/syncfolder/rclone-cron.sh

The script works manually, but doesn’t stop itself from executing if it’s already running… :frowning:

Is it running now? Can you check whether the process is running via a ps command, test a pidof command, and share the output?

[felix@gemini scripts]$ pidof -x testme.sh
11440
[felix@gemini scripts]$ pidof -o %PPID -x testme.sh
11440
[felix@gemini scripts]$ pidof -o %PPID -x "testme.sh"
11440
[felix@gemini scripts]$ ps -ef | grep testme.sh
felix    11440  8046  0 18:53 pts/1    00:00:00 /usr/bin/bash ./testme.sh
felix    11527  9121  0 18:54 pts/2    00:00:00 grep testme.sh

I tested what you have in your script and it matches on my system for my test script.

Here is the output…

zcleaver@ubuntu:~$ pidof -x /opt/syncfolder/rclone-cron.sh
5039
zcleaver@ubuntu:~$ pidof -o %PPID -x /opt/syncfolder/rclone-cron.sh
5039
zcleaver@ubuntu:~$ pidof -o %PPID -x /opt/syncfolder/rclone-cron.sh
5039
zcleaver@ubuntu:~$ ps -ef | grep /opt/syncfolder/rclone-cron.sh
zcleaver  5039 23962  0 17:57 pts/3    00:00:00 /bin/bash /opt/syncfolder/rclone-cron.sh
zcleaver  5121 24019  0 17:58 pts/5    00:00:00 grep --color=auto /opt/syncfolder/rclone-cron.sh

So if you run the script again, does it exit out properly?

Yes… Seems to be working properly now… No idea where I went wrong lol…

I know you got it working @coolcat97, but another way to do this would be to use systemd units (if you’re on a systemd system). For example, I have a script that generates temporary firefox profiles, and it uses systemd-run to start the process. Because systemd tracks the running unit, trying to activate it again while it’s still running simply fails, which simplifies this whole problem a lot.

The process would likely look similar with launchd on macOS and the service manager on Windows. That’s kind of what service managers are meant to do, and recurring tasks that must not run in parallel are a prime example of where a service manager can help.
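For the every-2-minutes case from the original post, the timer-based variant of this might look roughly like the following pair of units. The unit names, paths, and schedule are assumptions for illustration, not anything from the thread; only the rclone command comes from the original script:

```ini
# rclone-sync.service -- hypothetical unit name and paths
[Unit]
Description=Sync files with rclone

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone sync oldseed:files/sync /opt/syncfolder/FTPSync

# rclone-sync.timer -- fires every 2 minutes
[Unit]
Description=Run rclone-sync every 2 minutes

[Timer]
OnCalendar=*:0/2

[Install]
WantedBy=timers.target
```

Enabled with `systemctl enable --now rclone-sync.timer`, this gets the overlap protection for free: because the service is `Type=oneshot`, systemd considers it active for the whole duration of the sync and the timer will not start a second instance while one is still running.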

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.