A solution that does not require additional tools would be preferred.
15 Answers
Almost like nsg's answer: use a lock directory. Directory creation is atomic on Linux, Unix, *BSD, and many other OSes.
LOCKDIR=/tmp/myscript.lock   # example path; pick one suitable for your script

if mkdir -- "$LOCKDIR"
then
    # Do important, exclusive stuff
    if rmdir -- "$LOCKDIR"
    then
        echo "Victory is mine"
    else
        echo "Could not remove lock dir" >&2
    fi
else
    # Handle error condition
    ...
fi
You can put the PID of the locking sh into a file in the lock directory for debugging purposes, but don't fall into the trap of thinking you can check that PID to see if the locking process still executes. Lots of race conditions lie down that path.
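As a sketch of that debugging aid (the lock path here is a made-up example), recording the PID for post-mortem inspection only might look like:

```shell
#!/bin/sh
# Example only: LOCKDIR is a hypothetical path for this sketch
LOCKDIR=/tmp/pid_debug_demo.lock

if mkdir -- "$LOCKDIR"; then
    # Record our PID for debugging only;
    # never use it to decide whether to "steal" the lock
    echo "$$" > "$LOCKDIR/pid"
    echo "working under lock as PID $(cat "$LOCKDIR/pid")"
    rm -f -- "$LOCKDIR/pid"   # rmdir requires the directory to be empty
    rmdir -- "$LOCKDIR"
else
    echo "already locked by PID $(cat "$LOCKDIR/pid" 2>/dev/null)" >&2
    exit 1
fi
```

The PID file is strictly informational: a human can inspect it when a stale lock needs investigating, but the script never acts on it.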

- I'd consider using the stored PID to check whether the locking instance is still alive. However, here's a claim that mkdir is not atomic on NFS (which is not the case for me, but I guess one should mention that, if true) – Tobias Kienzler Sep 18 '12 at 13:08
- Yes, by all means use the stored PID to see if the locking process still executes, but don't attempt to do anything other than log a message. The work of checking the stored PID, creating a new PID file, etc. leaves a big window for races. – Sep 18 '12 at 13:37
- Ok, as lhunath stated, the lockdir would most likely be in /tmp, which is usually not NFS-shared, so that should be fine. – Tobias Kienzler Sep 19 '12 at 08:33
- I would use rm -rf to remove the lock directory. rmdir will fail if someone (not necessarily you) managed to add a file to the directory. – chepner Sep 22 '12 at 04:32
To add to Bruce Ediger's answer, and inspired by this answer, you should also add more smarts to the cleanup to guard against script termination:
# Remove the lock directory
cleanup() {
    if rmdir -- "$LOCKDIR"; then
        echo "Finished"
    else
        echo >&2 "Failed to remove lock directory '$LOCKDIR'"
        exit 1
    fi
}

if mkdir -- "$LOCKDIR"; then
    # Ensure that if we "grabbed a lock", we release it.
    # Works for SIGTERM and SIGINT (Ctrl-C) as well in some shells,
    # including bash.
    trap "cleanup" EXIT
    echo "Acquired lock, running"
    # Processing starts here
else
    echo >&2 "Could not create lock directory '$LOCKDIR'"
    exit 1
fi
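To see the guard in action (using a made-up lock path and a small bash demo harness), two concurrent workers can be raced against each other; exactly one of them wins the lock:

```shell
#!/bin/bash
# Demo harness; /tmp/trap_demo.lock is a hypothetical path
LOCKDIR=/tmp/trap_demo.lock

worker() {
    if mkdir -- "$LOCKDIR" 2>/dev/null; then
        # The EXIT trap releases the lock even if the worker is interrupted
        trap 'rmdir -- "$LOCKDIR"' EXIT
        echo "[$1] acquired lock"
        sleep 2
    else
        echo "[$1] lock busy"
    fi
}

( worker A ) &
( worker B ) &
wait
```

One worker prints "acquired lock" and the other "lock busy" (the order is nondeterministic); after both finish, the lock directory is gone because the winner's EXIT trap removed it.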

- Alternatively: if ! mkdir "$LOCKDIR"; then handle failure to lock and exit; fi, then set the trap and do the processing after the if-statement. – Kusalananda Feb 22 '18 at 13:19
- It's worth pointing out that the trap definition must remain at the global scope of the script. Moving that mkdir block inside a function will result in "cleanup: command not found". (I learned this the hard way) – Dale C. Anderson Dec 02 '20 at 20:13
One other way to make sure a single instance of a bash script runs:
#! /bin/bash -

# Check if another instance of the script is running
if pidof -o %PPID -x -- "$0" >/dev/null; then
    printf >&2 '%s\n' "ERROR: Script $0 already running"
    exit 1
fi
...
pidof -o %PPID -x -- "$0" gets the PID of the existing script¹ if it's already running, or exits with error code 1 if no other instance is running.
¹ Well, any process with the same name...

- I prefer the simplicity of this solution. “Simplicity is the ultimate sophistication.” -- Leonardo da Vinci – harleygolfguy Nov 22 '20 at 23:34
- That doesn't work if the script is run as ./thatscript the first time and /path/to/thatscript the second time. It's generally a bad idea to rely on process names, as those can be arbitrarily set to any value by anyone. – Stéphane Chazelas Jul 16 '22 at 08:52
- @StéphaneChazelas Regarding process name changing - noted! It could be avoided by using basename $0 instead of $0. Still never safe, as you mentioned, but at least the script can be called from different paths. – lonix Jul 16 '22 at 09:10
- @lonix Rather "$(basename -- "$0")" (assuming $0 doesn't end in newline characters) or "${0##*/}" (or $0:t in zsh). Remember expansions must be quoted in sh/bash. – Stéphane Chazelas Jul 16 '22 at 09:13
- I'm not sure whether this is atomic or not... If two instances are run simultaneously, both of them could execute, or neither of them. This could potentially be a race condition. – ShellCode Feb 26 '23 at 22:09
Although you've asked for a solution without additional tools, this is my favourite way, using flock:
#!/bin/sh
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :
echo "servus!"
sleep 10
This comes from the examples section of man flock, which further explains:
This is useful boilerplate code for shell scripts. Put it at the top of the shell script you want to lock and it'll automatically lock itself on the first run. If the env var $FLOCKER is not set to the shell script that is being run, then execute flock and grab an exclusive non-blocking lock (using the script itself as the lock file) before re-execing itself with the right arguments. It also sets the FLOCKER env var to the right value so it doesn't run again.
Points to consider:
- Requires flock; the example script terminates with an error if it can't be found
- Needs no extra lock file
- May not work if the script is on NFS (see https://serverfault.com/questions/66919/file-locks-on-an-nfs)
Update: If your script may get called through different paths (e.g. through its absolute or its relative path), or in other words if $0 differs between parallel invocations, then the above doesn't work properly. Use a unique environment variable (FLOCK_HAFBX in the example) instead:
[ -z "$FLOCK_HAFBX" ] && exec env FLOCK_HAFBX=1 flock -en "$0" "$0" "$@" || :
The environment variable should be unique so nested flocked scripts work as expected.

This may be too simplistic; please correct me if I'm wrong. Isn't a simple ps enough?
#!/bin/bash
me="$(basename "$0")"
running=$(ps h -C "$me" | grep -wv "$$" | wc -l)
[[ $running -gt 1 ]] && exit
# do stuff below this comment

- I've used this condition for a week, and on 2 occasions it didn't prevent a new process from starting. I figured out what the problem is: the new pid is a substring of the old one and gets hidden by grep -v $$. Real examples: old 14532, new 1453; old 28858, new 858. – Naktibalda Feb 22 '18 at 11:30
- @Naktibalda good catch, thanks! You could also fix it with grep -wv "^$$" (see edit). – terdon Feb 22 '18 at 12:38
- Thanks for that update. My pattern occasionally failed because shorter pids were left-padded with spaces. – Naktibalda Mar 08 '18 at 16:50
- For the busybox reduced ash shell, another ps | awk | grep solution is available at https://stackoverflow.com/questions/52141287/how-to-ensure-that-only-one-instance-of-a-busybox-shell-script-is-running-at-a-t – Pro Backup Sep 03 '18 at 01:01
- With this solution, if two instances of the same script are started at the same time, there's a chance that they will "see" each other and both will terminate. It may not be a problem, but it also may be; just be aware of it. – flagg19 Jan 25 '20 at 11:19
This is a modified version of Anselmo's answer. The idea is to create a read-only file descriptor on the bash script itself and use flock to handle the lock.
script=$(realpath -- "$0") # get absolute path to the script itself
exec 6< "$script"          # open the script on file descriptor 6
flock -n 6 || { echo "ERROR: script is already running" >&2; exit 1; } # lock fd 6, or fail if already locked
echo "Run your single instance code here"
The main difference to all the other answers is that this code doesn't modify the filesystem, has a very low footprint, and doesn't need any cleanup, since the file descriptor is closed as soon as the script finishes, regardless of the exit state. Thus it doesn't matter whether the script fails or succeeds.

- You should always quote all shell variable references unless you have a good reason not to, and you’re sure you know what you’re doing. So you should be doing exec 6< "$SCRIPT". – Scott - Слава Україні Nov 02 '18 at 06:01
- @Scott I've changed the code according to your suggestions. Many thanks. – John Doe Nov 02 '18 at 06:38
- I suggest using lower-case variable names here, e.g. script=.... It reduces the risk of colliding with built-in shell variables such as $PATH. Which I've never done... (cough) – EdwardTeach Jun 28 '20 at 20:05
I would use a lock file, as mentioned by Marco
#!/bin/bash
# Exit if /tmp/lock.file exists
[ -f /tmp/lock.file ] && exit
# Create lock file, sleep 1 sec and verify lock
echo $$ > /tmp/lock.file
sleep 1
[ "x$(cat /tmp/lock.file)" == "x$$" ] || exit
# Do stuff
sleep 60
# Remove lock file
rm /tmp/lock.file

- (I think you forgot to create the lock file) What about race conditions? – Tobias Kienzler Sep 18 '12 at 11:28
- Oops :) Yes, race conditions are a problem in my example. I usually write hourly or daily cron jobs, and race conditions are rare there. – nsg Sep 18 '12 at 11:32
- They shouldn't be relevant in my case either, but it's something one should keep in mind. Maybe using lsof $0 isn't bad, either? – Tobias Kienzler Sep 18 '12 at 11:34
- You can diminish the race condition by writing your $$ into the lock file. Then sleep for a short interval and read it back. If the PID is still yours, you successfully acquired the lock. Needs absolutely no additional tools. – manatwork Sep 18 '12 at 11:41
- I have never used lsof for this purpose, but I think it should work. Note that lsof is really slow on my system (1-2 sec), so most likely there is a lot of time for race conditions. – nsg Sep 18 '12 at 11:45
- Here's a lockfile answer at SO. Maybe in combination with the mkdir answer, race conditions and staleness checks can be included... – Tobias Kienzler Sep 18 '12 at 11:49
- Does it make sense to add a trap for SIGINT, SIGTERM etc. to make sure your lock file gets cleaned up in case somebody kills the current run (explicitly with Ctrl-C or during shutdown)? – Axel Knauf Sep 18 '12 at 13:30
- @Axel: Sure, sounds like a good idea. I also recommend Bruce Ediger's mkdir-based answer. – nsg Sep 19 '12 at 08:13
- I've always wondered why people add the "x" in string comparisons like this, when omitting it, as in [ "$(cat /tmp/lock.file)" == "$$" ] || exit, should work fine. – Brent212 Oct 18 '22 at 18:13
If you want to make sure that only one instance of your script is running, take a look at:
Lock your script (against parallel run)
Otherwise you can check ps or invoke lsof <full-path-of-your-script>, since I wouldn't call them additional tools.
Supplement:
Actually I thought of doing it like this:
for LINE in `lsof -c <your_script> -F p`; do
    if [ $$ -gt ${LINE#?} ]; then
        echo "'$0' is already running" 1>&2
        exit 1
    fi
done
This ensures that only the process with the lowest PID keeps on running, even if you fork-and-exec several instances of <your_script> simultaneously.

- Thanks for the link, but could you include the essential parts in your answer? It's common policy at SE to prevent link rot... But something like [[ $(lsof $0 | wc -l) > 2 ]] && exit might actually be enough, or is this also prone to race conditions? – Tobias Kienzler Sep 18 '12 at 11:30
- You are right, the essential part of my answer was missing, and only posting links is pretty lame. I added my own suggestion to the answer. – user1146332 Sep 18 '12 at 12:52
I am using cksum to check that my script is truly running as a single instance, even if I change the filename & file path.
I am not using trap & a lock file, because if my server suddenly goes down, I would need to remove the lock file manually after the server comes back up.
Note: the #!/bin/bash on the first line is required, because the code greps ps for /bin/bash.
#!/bin/bash
checkinstance(){
    nprog=0
    mysum=$(cksum "$0" | awk '{print $1}')
    for i in $(ps -ef | grep /bin/bash | awk '{print $2}'); do
        proc=$(ls -lha /proc/$i/exe 2>/dev/null | grep bash)
        if [[ $? -eq 0 ]]; then
            cmd=$(strings /proc/$i/cmdline | grep -v bash)
            if [[ $? -eq 0 ]]; then
                fsum=$(cksum /proc/$i/cwd/$cmd | awk '{print $1}')
                if [[ $mysum -eq $fsum ]]; then
                    nprog=$(($nprog+1))
                fi
            fi
        fi
    done
    if [[ $nprog -gt 1 ]]; then
        echo "$0 is already running."
        exit
    fi
}

checkinstance

#--- run your script below
echo pass
while true; do sleep 1000; done
Or you can hardcode the cksum inside your script, so you need not worry if you want to change the filename, path, or content of your script.
#!/bin/bash
mysum=1174212411
checkinstance(){
    nprog=0
    for i in $(ps -ef | grep /bin/bash | awk '{print $2}'); do
        proc=$(ls -lha /proc/$i/exe 2>/dev/null | grep bash)
        if [[ $? -eq 0 ]]; then
            cmd=$(strings /proc/$i/cmdline | grep -v bash)
            if [[ $? -eq 0 ]]; then
                fsum=$(grep mysum /proc/$i/cwd/$cmd | head -1 | awk -F= '{print $2}')
                if [[ $mysum -eq $fsum ]]; then
                    nprog=$(($nprog+1))
                fi
            fi
        fi
    done
    if [[ $nprog -gt 1 ]]; then
        echo "$0 is already running."
        exit
    fi
}

checkinstance

#--- run your script below
echo pass
while true; do sleep 1000; done

- Please explain exactly how hardcoding the checksum is a good idea. – Scott - Слава Україні May 24 '19 at 00:14
- It's not hardcoding a checksum; it only creates an identity key for your script. When another instance runs, it checks the other shell script processes and cats the file first; if your identity key is in that file, it means your instance is already running. – arputra May 24 '19 at 06:35
- OK; please [edit] your answer to explain that. And, in the future, please don’t post multiple 30-line long blocks of code that look like they’re (almost) identical without *saying* and *explaining* how they’re different. And don’t say things like “you can hardcoded [sic] cksum inside your script”, and don’t continue to use the variable names mysum and fsum when you’re not talking about a checksum any more. – Scott - Слава Україні May 24 '19 at 07:08
- Looks interesting, thanks! And welcome to unix.stackexchange :) – Tobias Kienzler May 24 '19 at 08:08
This handy package does what you're looking for:
https://github.com/krezreb/singleton
Once installed, just prefix your command with singleton LOCKNAME, e.g.:
singleton LOCKNAME PROGRAM ARGS...
Another approach not mentioned here that does not use flock is to rely on the fact that creating a hard link is atomic.
Given this script:
#!/bin/bash
touch /var/tmp/singleton.$$.lock
if ( link /var/tmp/singleton.$$.lock /var/tmp/singleton.lock 2>/dev/null ); then
    echo 'Lock Acquired By: ' $$
    sleep 2
    echo 'Lock Released By: ' $$
    rm /var/tmp/singleton.lock /var/tmp/singleton.$$.lock
else
    echo 'Failed Lock Acquisition Attempt By: ' $$
    rm /var/tmp/singleton.$$.lock
fi
When you try to run multiple instances simultaneously, only one of them acquires the "lock":
§ for i in {1..3}; do (./singleton.sh &) ; done ; sleep 3
Lock Acquired By: 3816
Failed Lock Acquisition Attempt By: 3820
Failed Lock Acquisition Attempt By: 3818
Lock Released By: 3816
You can modify this script to add the appropriate trap statements to make it robust in the event that your script does not exit gracefully.
The basic idea is that when two instances of the script run, both try to create a link at the same path (i.e., /var/tmp/singleton.lock) that points to something, but only one of them will succeed; the other gets the error:
ln: failed to create hard link '/var/tmp/singleton.lock': File exists
In this script, the something happens to be an empty file whose name contains the PID of the executing script. You could use other schemes for what the something could be, but the important bit is that the two instances of the script try to create a link at the same path.

- It is good to release the lock in a trap (as in https://unix.stackexchange.com/a/180028/18674), to catch a stop caused by an external signal. – mmv-ru Jun 11 '22 at 23:41
You can use this: https://github.com/sayanarijit/pidlock
sudo pip install -U pidlock
pidlock -n sleepy_script -c 'sleep 10'

My code to you
#!/bin/bash

script_file="$(/bin/readlink -f -- "$0")"
lock_file=${script_file////_}

function executing {
    echo "'${script_file}' already executing"
    exit 1
}

(
    flock -n 9 || executing
    sleep 10
) 9> "/var/lock/${lock_file}"
Based on man flock, improving only:
- the name of the lock file, so it is based on the full name of the script
- the message shown when already executing
Where I put the sleep 10, you can put all of the main script.
The easiest one-liner, rather than writing complex PID or lock logic:
flock -xn LOCKFILE.lck -c SCRIPT.SH
where -x denotes an exclusive lock, -n denotes non-blocking (fail rather than wait for the lock to be released), and -c runs the given command.
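A quick way to watch this work (the lock-file path is made up for the demo): hold the lock in the background, then try to grab it again before it is released.

```shell
# Hypothetical lock path; flock creates the file if it does not exist
flock -xn /tmp/oneliner_demo.lck -c 'sleep 3' &
sleep 1
# This branch runs while the first instance still holds the lock
flock -xn /tmp/oneliner_demo.lck -c 'echo got lock' || echo 'lock busy'
wait
```

The second invocation prints "lock busy" immediately instead of waiting, because -n makes flock exit with a nonzero status when the lock is held.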

I did it like this:
#!/bin/sh
GET_BASENAME=$( basename "$0" )
GET_PIDS=$( pgrep -f "$GET_BASENAME" )
GET_MY_PATH=$( realpath "$0" )

for X1 in $( seq 1 1 $( echo -n "$GET_PIDS" | wc -l ) ); do
    GET_PID=$( echo -n "$GET_PIDS" | head -n "$X1" | tail -n 1 )
    GET_PID_PATH=$( readlink "/proc/$GET_PID/fd/10" )
    if [ "$GET_MY_PATH" = "$GET_PID_PATH" ]; then
        echo "The service is already running in process $GET_PID!"
        exit
    fi
done
My example script:
while true; do
    echo -n "."
    sleep 1
done

- This looks to be more error-prone than the other answers provided (I personally use something similar to the procedures involving flock), and it also assumes that you have write access to /run/. The /run/ access can be worked around, but even assuming the existence of a unique and universally accessible directory, the method does not always work. Unique & accessible is not workable for packages under systemd with service-defined tmp directories. – doneal24 Feb 17 '24 at 20:23
- ln -s my.pid .lock will claim the lock (followed by echo $$ > my.pid), and on failure one can check whether the PID stored in .lock is really an active instance of the script. – Tobias Kienzler Sep 18 '12 at 15:22