inotifywait -q -m -e close_write,create --recursive ../orgmode-parse-print | 
while read -r filename event; do
    echo "$filename"
    echo "$event"
    sleep infinity
done

The problem with the above is that it 'sleeps' forever and never terminates. How can I terminate or restart the process (essentially the contents of the while loop, including the sleep) if another event occurs?

In other words, run the command, but terminate it (interrupt it, I suppose) and start it again if a file has been modified.

I'm using sleep only as an example here; the actual process being run is long-running.

2 Answers


This works for me:

$ inotifywait -q -m -e close_write,create --recursive dir1/ | \
  ( 
    CNT=0; 
    while read -r filename event; do 
       echo "count: $CNT filename: $filename  event: $event"; ((CNT++)); 
       [ "$CNT" -eq 1 ] && exit; 
    done 
  )

Example

To start I made a sample directory structure to work with:

$ mkdir -p dir1/dir2/dir{3..5}

$ tree dir1/
dir1/
└── dir2
    ├── afile
    ├── dir3
    ├── dir4
    └── dir5

4 directories, 1 file

I then ran this to start watching the directory:

$ inotifywait -q -m -e close_write,create --recursive dir1/ | ( CNT=0; while read -r filename event; do echo "count: $CNT filename: $filename  event: $event"; ((CNT++)); [ "$CNT" -eq 1 ] && exit; done )

I then ran touch afile commands in the dir1/dir2 directory:

$ cd dir1/dir2
$ touch afile
$ touch afile

These resulted in this output from inotifywait:

count: 0 filename: dir1/dir2/  event: CLOSE_WRITE,CLOSE afile

Once it gets to the 2nd 'event' the whole pipeline exits: the subshell already quit after the 1st event (when CNT reached 1), and inotifywait only dies when it tries to write the 2nd event into the now-broken pipe.
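
A quick way to see the same broken-pipe behaviour in isolation (a toy example of my own, not part of the original inotifywait setup): a producer piped into a reader that quits early is only killed on its next write.

$ ( while :; do echo tick; sleep 1; done ) | head -n 1
tick
$

head exits after the first line, so the producer's next echo hits the broken pipe (SIGPIPE) and the pipeline returns to the prompt.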

Problems and a better solution

One issue with piping into the subshell ( ... while ... ) is that we never see the 2nd message from echo when the 2nd event occurs. No problem, we can simply restructure things like this instead:

$ CNT=0; while read -r filename event; do echo "count: $CNT filename: $filename  event: $event"; ((CNT++)); [ "$CNT" -eq 2 ] && break; done < <(inotifywait -q -m -e close_write,create --recursive dir1/)
count: 0 filename: dir1/dir2/  event: CLOSE_WRITE,CLOSE afile
count: 1 filename: dir1/dir2/  event: CLOSE_WRITE,CLOSE afile
$

Expanded:

$ CNT=0; \
  while read -r filename event; do \
    echo "count: $CNT filename: $filename  event: $event"; ((CNT++)); \
    [ "$CNT" -eq 2 ] && break; \
  done < <(inotifywait -q -m -e close_write,create --recursive dir1/)
count: 0 filename: dir1/dir2/  event: CLOSE_WRITE,CLOSE afile
count: 1 filename: dir1/dir2/  event: CLOSE_WRITE,CLOSE afile
$

With a backgrounded task

If you have a task that's going to block inside the while ... loop, background it so the loop can keep processing input from inotifywait, and introduce a trap to kill it when the script exits.

Example:

$ cat ./notifier.bash
#!/bin/bash

trap 'kill $(jobs -p)' EXIT;   # on exit, kill any jobs we backgrounded

CNT=0
while read -r filename event; do
  sleep 1000 &   # stand-in for a long-running, blocking task
  echo "count: $CNT filename: $filename  event: $event"; ((CNT++))
  [ "$CNT" -eq 2 ] && break
done < <(inotifywait -q -m -e close_write,create --recursive dir1/)

In action:

$ ./notifier.bash
count: 0 filename: dir1/dir2/  event: CLOSE_WRITE,CLOSE afile
count: 1 filename: dir1/dir2/  event: CLOSE_WRITE,CLOSE afile
./notifier.bash: line 1: kill: (30301) - No such process

And there are no remnants of the backgrounded sleep procs:

$ ps -eaf|grep [s]leep
$

One last adjustment regarding the trap, which you may have noticed: when we do that kill $(jobs -p), it sometimes throws garbage like this to the screen:

./notifier.bash: line 1: kill: (30301) - No such process

We can clean this up like this:

 trap 'kill $(jobs -p) > /dev/null 2>&1' EXIT;

Now when we run it:

$ ./notifier.bash
count: 0 filename: dir1/dir2/  event: CLOSE_WRITE,CLOSE afile
count: 1 filename: dir1/dir2/  event: CLOSE_WRITE,CLOSE afile
$


slm
  • I'm sorry I don't think I explained it well. The problem with this solution is it won't terminate if the 'command' (sleep for example) is a long running command. Unless I have misunderstood? – Chris Stryczynski Jul 15 '18 at 17:23
  • @ChrisStryczynski - If you background your long running task this should do it, but yes please update your question, it's not very clear with what you really want then. – slm Jul 15 '18 at 17:25
  • @ChrisStryczynski - see latest updates. – slm Jul 15 '18 at 17:35

Supposing that you just want to terminate whatever your code is doing when a new event is received, the following bash script works fine for that:

#!/bin/bash

my_function () {
   sleep infinity    # stand-in for the real long-running command
}

declare -A cache
inotifywait -q -m -e close_write,create --recursive ../orgmode-parse-print |
while read -r filename event; do
    if [ "${cache[pid_my_function]}" ]; then kill "${cache[pid_my_function]}"; fi
    echo $filename
    echo $event
    my_function &
    cache[pid_my_function]=$!
done

Basically, the script puts the long-running process (represented here by sleep infinity) inside a function, so it runs as an independent process when the function is called with &. The special parameter $! holds the PID of that background process; storing it in a variable lets us kill it later when a new event arrives...


Note: This script kills whatever your code is doing when a new event is received, but I'm not sure you really want that. You could instead run each event's handler as a separate process (calling the function with &) without killing anything, so you'd be sure your script runs for every event...
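
For instance, a minimal sketch of that alternative (my own variation on the script above, reusing my_function; it assumes overlapping runs of the long-running command are acceptable):

#!/bin/bash

my_function () {
   sleep infinity    # stand-in for the real long-running command
}

inotifywait -q -m -e close_write,create --recursive ../orgmode-parse-print |
while read -r filename event; do
    echo "$filename"
    echo "$event"
    my_function &    # handle this event in the background; earlier runs keep going
done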