Bash 4.4.19(1)-release
I have below a simple script which is the basis for a logging app. For various reasons I had to use process substitution. The runner function is the heart of the app, and since process substitution is asynchronous, I have managed to get it to a good degree of coherence with the while loop. It works perfectly.
Unfortunately I found a case where it will not work: when I execute 'bash <filename> <function>' directly through runner. So we need two files to reproduce it.
Questions:
- Why does this happen?
- How can I modify my while loop to accommodate similar cases?
The simplified scripts are:
test.sh
#!/bin/bash

2sub() {
    local in=$(cat); echo -e "$in";
}

runner () {
    "${@}" 1> >(2sub)
    while [ -e /proc/$! ]; do sleep 0.1; done   # <<< LOOP WAIT FOR $!
}

remotesub() {
    bash ./test2.sh remotesub2
}

echo -e "running\n";
runner bash ./test2.sh remotesub2   # LOOPS
# runner remotesub                  # A POSSIBLE BYPASS/SOLUTION? But why?
echo -e "done!\n"
test2.sh
remotesub2() {
    echo -e "'${BASH_VERSION}'"
    return 0
}

"$@"
Bypass:
As you can see from the script, there is a bypass for the problem: wrap 'bash <filename> <function>' inside a function and pass that function to runner. Why this works while the direct way does not, I am sure somebody here knows.
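To make the contrast explicit, the two invocations in test.sh differ only in that wrapping:

runner bash ./test2.sh remotesub2   # direct external command: the wait loop never finishes
runner remotesub                    # same command wrapped in a function: finishes normally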
Please shed some light on this issue, and if there is a better way to write the waiting loop so it covers cases like this, please suggest it.
Solution:
The best solution is what mosvy suggested. Thank you.
Using { "${@}"; } instead of a bare "${@}" removes the need to package the commands in separate small functions, which is a pain. Also, after many hours of testing with my larger code, I came to the conclusion that careful killing of sub-processes makes the while [ -e /proc/$! ]; do sleep 0.1; done loop unnecessary. That line was replaced with wait $!;
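For reference, this is a minimal sketch of how the revised runner might look with both changes applied (the rest of test.sh is assumed unchanged):

runner () {
    # Grouping the command in { ...; } lets it behave the same whether it is
    # a shell function or an external command such as 'bash ./test2.sh remotesub2'.
    { "${@}"; } 1> >(2sub)
    # Wait for (and reap) the process substitution instead of polling /proc.
    wait $!;
}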