
Most shells provide operators like && and ; to chain the execution of commands in certain ways. But what if a command is already running? Can I still somehow add another command to be executed depending on the result of the first one?

Say I ran

$ /bin/myprog
some output...

but I really wanted /bin/myprog && /usr/bin/mycleanup. I can't kill myprog and restart everything because too much time would be lost. I can Ctrl+Z it and fg/bg if necessary. Does this allow me to chain in another command?

I'm mostly interested in bash, but answers for all common shells are welcome!

– Timo, us2012

4 Answers


You should be able to do this in the same shell you're in with the wait command:

$ sleep 30 &
[1] 17440

$ wait 17440 && echo hi

...30 seconds later...
[1]+  Done                    sleep 30
hi

An excerpt from the Bash man page:

wait [n ...]
     Wait for each specified process and return its termination status. Each n 
     may be a process ID or a job specification; if a job spec is given,  all 
     processes  in that job's pipeline are waited for.  If n is not given, all 
     currently active child processes are waited for, and the return status is 
     zero.  If n specifies a non-existent process or job, the return status is 
     127.  Otherwise, the return status is the exit status of the last process 
     or job waited for.
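Applied to the question's scenario, a rough sketch (assuming bash job control; %1 is a job spec for the suspended job, and /usr/bin/mycleanup is the cleanup command from the question):

$ /bin/myprog
some output...
^Z
[1]+  Stopped                 /bin/myprog
$ bg
[1]+ /bin/myprog &
$ wait %1 && /usr/bin/mycleanup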
– slm
  • Waiting on the command should do the job. – BillThor Nov 13 '13 at 00:41
  • There are a number of short forms for entering the PID of the backgrounded process such as %, %1 and $!. It is important to supply the PID or the second command will always run. – BillThor Nov 13 '13 at 01:06
  • @BillThor - are you just qualifying the answer or telling me this? – slm Nov 13 '13 at 01:07
  • I am qualifying the answer. A plain wait will fail to provide the desired result. It is common to use the short forms as they are less prone to typos. – BillThor Nov 13 '13 at 01:18
  • 2
    Let me be the first to congratulate you on your shiny new badge :) – terdon Jul 02 '14 at 18:00

fg returns with the exit code from the program it resumes. You can therefore suspend your program with ^Z and then use fg && ... to resume it.

$ /bin/myprog
some output...
^Z
[1]+ Stopped              /bin/myprog
$ fg && /usr/bin/mycleanup
– John Kugelman
  • if you suspend again before it ends and do the same thing with a different command, does the initial chaining of mycleanup get replaced? – Burhan Ali Dec 08 '13 at 10:44
  • 3
    @BurhanAli Suspending myprog for the second time causes fg to terminate with the exit code 20 -- which is non-zero, so the chained mycleanup command isn't executed. – n.st Jan 04 '14 at 04:00
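Building on that, a sketch of recovering from a second suspend: fg returns non-zero, mycleanup is skipped, and you simply chain again when you resume:

$ fg && /usr/bin/mycleanup
/bin/myprog
^Z
[1]+  Stopped                 /bin/myprog
$ fg && /usr/bin/mycleanup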

Not sure if what you're asking for is possible, but if you still have the shell you started the program from, you can always check $? for the last command's exit status:

$ /bin/myprog
some output...
$ if [ $? -ne 0 ]; then echo "non-zero exit status"; else echo "0 exit status"; fi
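Since every subsequent command overwrites $?, a sketch of saving it first if you need it more than once (status is just an illustrative variable name):

$ /bin/myprog
some output...
$ status=$?
$ [ "$status" -eq 0 ] && /usr/bin/mycleanup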
– Joseph R.

If the job is running in the foreground, either of these commands will give the behavior you expect:

[ $? -eq 0 ] && prog2
(( $? )) || prog2

NOTE: $? will contain the return status of the running program when it exits.

This explicitly states what the shell would do if you had originally entered the command:

prog1 && prog2

If the first command is running in the foreground and is not reading from stdin, the new command can be typed while the first command is still running. The shell will read and execute it once the first command finishes. A command running in the background is unlikely to be reading stdin.
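For example, a sketch of typing ahead while the first command still holds the foreground (the shell only reads the second line after myprog exits):

$ /bin/myprog
some output...
[ $? -eq 0 ] && /usr/bin/mycleanup    # typed while myprog is still running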

EDIT: Putting the job in the background and using the wait builtin also works. This must be done with care if other jobs have also been run in the background: a job specification is required for wait to return the status of the job waited on.
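A sketch of the difference (assuming the backgrounded job is job 1):

$ wait %1 && /usr/bin/mycleanup    # runs mycleanup only if job 1 succeeded
$ wait && /usr/bin/mycleanup       # plain wait returns zero, so mycleanup always runs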

– BillThor
  • This will work, but he'll have to run this manually afterwards; he wants it to run automatically when the command finishes. – slm Nov 11 '13 at 19:17
  • 4
    @slm As long as the command is not reading stdin, the new command can be entered while the first command is running. The shell will read it as soon as the first command is done. – BillThor Nov 11 '13 at 19:19
  • If he does anything in the shell after it's been backgrounded it's likely hosed. At least according to the prelim. testing I've been doing thus far! – slm Nov 11 '13 at 19:22
  • Don't get me wrong, I want this to work too, but I think there has to be a better way than the 3 solutions you guys have posted thus far! – slm Nov 11 '13 at 19:24
  • 2
    @BillThor I think you should add that comment to your answer. – Joseph R. Nov 11 '13 at 19:25
  • @slm From his request it seems the command is running in the foreground. If it had been backgrounded, then waiting on the process and checking its return status could be used. It would be easier to foreground it. – BillThor Nov 11 '13 at 19:26
  • Check out the wait command, I think that's what we want here! – slm Nov 11 '13 at 19:30
  • @slm return status on the backgrounded command will only be available if no other command is run in the background after it. The variable containing the return status is not $?. – BillThor Nov 11 '13 at 19:33
  • Bill which thing is your comment directed to? – slm Nov 11 '13 at 19:35
  • @slm using wait as a solution. – BillThor Nov 11 '13 at 19:37
  • 2
    Yes, using wait isn't ideal either. The advantage, IMO, is that it gives us a clearer API that we're dealing with. Typing more commands on the command line after something is running seemed a little hacky to me. wait still suffers from not getting a direct link to the running PID, wrt the return status. This seems to be an issue more w/ how Bash implements things though: http://stackoverflow.com/questions/356100/how-to-wait-in-bash-for-several-subprocesses-to-finish-and-return-exit-code-0. So this might be as good as it gets. – slm Nov 11 '13 at 19:43