54

I would like to run something like this:

bash -c "some_program with its arguments"

but to have an interactive bash keep running after some_program ends.

I'm sure -c is not a good way, as man bash says:

An interactive shell is one started without non-option arguments and without the -c option

So how can I do this?

The main goal is described here

NOTE

  • I need to terminate some_program from time to time
  • I don't want to put it in the background
  • I want to stay in the bash shell afterwards to do something else
  • I want to be able to run the program again
pawel7318
  • if the goal is that complex you should also explain it here. but that's just advice – Kiwy Apr 04 '14 at 13:32
  • I tried to describe it as briefly and precisely as possible here. But I didn't expect that most people would not focus on the details and would try to propose something else. I'll add some notes to make it clear. – pawel7318 Apr 04 '14 at 13:37
  • What are the terminal escapes for in that other question? The goal you describe is doable, but it will require handling i/o. Your average subshell is not going to easily handle terminal escaped i/o through a regular file. You should look into pty. – mikeserv Apr 04 '14 at 13:43
  • The only reason it works in my answer below, by the way, is because I steal the terminal. Eventually, though, the parent process is likely to take it back - and then you're looking at 4kb buffers. – mikeserv Apr 04 '14 at 13:52
  • Why don't you want to put the program in the background? Start it in the background, do some bash, put it in the foreground with fg – Bernhard Apr 04 '14 at 13:54
  • I need to restart it. – pawel7318 Apr 04 '14 at 13:58
  • possibly the same as http://stackoverflow.com/questions/7120426/invoke-bash-run-commands-inside-new-shell-then-give-control-back-to-user – Ciro Santilli OurBigBook.com Mar 22 '16 at 10:36

8 Answers

37

Here's a shorter solution which accomplishes what you want, but might not make sense unless you understand the problem and how bash works:

bash -i <<< 'some_program with its arguments; exec </dev/tty'

This will launch a bash shell, start some_program, and after some_program exits, you'll be dropped to an interactive shell.

Basically what we're doing is feeding bash a string on its STDIN. The string tells bash to launch some_program, and then run exec </dev/tty afterwards. The exec </dev/tty tells bash to switch its STDIN from that string we gave it to /dev/tty instead, which makes the shell interactive.

The -i is there because when bash starts up, it checks whether STDIN is a tty, and at startup it isn't. But later it will be, so we force bash into interactive mode.
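
If your shell lacks <<< (a bash here-string), the same idea works by piping the string in, as noted in the comments below; this is just a sketch of that variant:

echo 'some_program with its arguments; exec </dev/tty' | bash -i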


Another solution

Another idea I had that would be very portable is to add the following at the very end of your ~/.bashrc file.

if [[ -n "START_COMMAND" ]]; then
  start_command="$START_COMMAND"
  unset START_COMMAND
  eval "$start_command"
fi

Then when you want to launch a shell with a command first, just do:

START_COMMAND='some_program with its arguments' bash

Explanation:

Most of this should be obvious, but the reason for the variable renaming is so that we can localize the variable. Since $START_COMMAND is an exported variable, it will be inherited by any children of the shell, and if another bash shell is one of those children, it'll run the command again. So we assign the value to a new unexported variable ($start_command) and delete the old one.
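
A quick way to see the behavior (a sketch; echo stands in for the real program):

START_COMMAND='echo started' bash   # prints "started", then gives a prompt
echo "$START_COMMAND"               # in that shell: prints nothing, ~/.bashrc unset it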

phemmer
  • Blow up if there's more than one what? One command? no, should still work. You are right about the portability though. There are 2 factors there, the <<< isn't POSIX, but you could echo "string here" | bash -i instead. Then there is /dev/tty which is a linux thing. But you could dup the FD before launching bash, and then reopen STDIN, which looks similar to what you're doing, but I opted to keep things simple. – phemmer Apr 04 '14 at 21:15
  • ok ok, just don't fight. This just works for me and does all I need. I don't care too much about POSIX and portability; I need it on my box and only there. I'll check mikeserv's answer as well but can't do this now. – pawel7318 Apr 04 '14 at 21:21
  • Patrick - please put this answer here and I'll be happy to accept it, as I already tested it and it works like a charm. I'll accept mikeserv's answer here then, as you both agree that it's the same approach but in the POSIX spirit. – pawel7318 Apr 04 '14 at 21:29
  • Not fighting :-). I respect mikeserv's answer. He went for compatibility, I went for simplicity. Both are completely valid. Sometimes I'll go for compatibility if it's not overly complex. In this case I didn't think it was worth it. – phemmer Apr 04 '14 at 21:50
  • @pawel7318 I'm not fighting - I'm curious. Internet people can't hurt my feelings - but they sure can teach. Patrick - But I meant more than one shell - in his other question he was talking about many different shells that he could jump to at will. How do you reconcile that with the single tty? Admittedly though, yours is simpler, but an initial comment the asker made on Kiwy's answer lead me to opt for portability. Ok - that's a lie - I'm also a POSIX snob so I wouldn't have anyway. Sorry. – mikeserv Apr 04 '14 at 22:00
  • @Patrick, /dev/tty is not a Linux thing but definitely POSIX. – jlliagre Apr 06 '14 at 01:54
14
( exec sh -i 3<<SCRIPT 4<&0 <&3
    echo "do this thing"
    echo "do that thing"
  exec  3>&- <&4
SCRIPT
)

This is better done from a script, though, with exec $0. Or it will help if one of those file descriptors points to a terminal device that is not currently being used - you have to remember that other processes will want to check that terminal, too.

And by the way, if your goal is, as I assume it is, to preserve the script's environment after executing it, you'd probably be a lot better served with:

. ./script

The shell's .dot and bash's source are not one and the same - the shell's .dot is POSIX-specified as a special shell builtin and is therefore about as close to guaranteed as you can get, though this is by no means a guarantee it will be there...
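
For instance (a minimal sketch; FOO is a made-up variable), anything a sourced script sets survives in the current shell:

% printf 'FOO=bar\n' >|./script
% . ./script
% echo "$FOO"
bar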

Still, the heredoc approach above should do as you expect with little issue. For instance, you can:

 ( exec sh -i 3<<SCRIPT 4<&0 <&3
    echo "do this thing"
    echo "do that thing"
    $(cat /path/to/script)
    exec  3>&- <&4
SCRIPT
 )

The shell will run your script and return you to the interactive prompt - so long as you avoid exiting the shell from your script, that is, or backgrounding your process - that'll link your i/o to /dev/null.

DEMO:

% printf 'echo "%s"\n' "These lines will print out as echo" \
    "statements run from my interactive shell." \
    "This will occur before I'm given the prompt." >|/tmp/script
% ( exec sh -i 3<<SCRIPT 4<&0 <&3
    echo "do this thing"
    echo "do that thing"
    $(cat /tmp/script)
    exec  3>&- <&4
SCRIPT
)
sh-4.3$ echo "do this thing"
    do this thing
sh-4.3$ echo "do that thing"
    do that thing
sh-4.3$ echo "These lines will print out as echo"
    These lines will print out as echo
sh-4.3$ echo "statements run from my interactive shell."
    statements run from my interactive shell.
sh-4.3$ echo "This will occur before I'm given the prompt."
    This will occur before I'm given the prompt.
sh-4.3$ exec  3>&- <&4
sh-4.3$

MANY JOBS

It's my opinion that you should get a little more familiar with the shell's built-in task management options. @Kiwy and @jlliagre have both already touched on this in their answers, but it might warrant further detail. I've already mentioned one POSIX-specified special shell built-in, but set, jobs, fg, and bg are a few more, and, as another answer demonstrates, trap and kill are two more still.

If you're not already receiving instant notifications on the status of concurrently running backgrounded processes, it's because your current shell options are set to the POSIX-specified default of -m, but you can get these asynchronously with set -b instead:

% man set
    −b This option shall be supported if the implementation supports the
         User  Portability  Utilities  option. It shall cause the shell to
         notify the user asynchronously of background job completions. The
         following message is written to standard error:
             "[%d]%c %s%s\n", <job-number>, <current>, <status>, <job-name>

         where the fields shall be as follows:

         <current> The  character  '+' identifies the job that would be
                     used as a default for the fg or  bg  utilities;  this
                     job  can  also  be specified using the job_id "%+" or
                     "%%".  The character  '−'  identifies  the  job  that
                     would  become  the default if the current default job
                     were to exit; this job can also  be  specified  using
                     the  job_id  "%−".   For  other jobs, this field is a
                     <space>.  At most one job can be identified with  '+'
                     and  at  most one job can be identified with '−'.  If
                     there is any suspended  job,  then  the  current  job
                     shall  be  a suspended job. If there are at least two
                      suspended  jobs, then the previous job also shall be a
                      suspended job.

   −m  This option shall be supported if the implementation supports the
         User Portability Utilities option. All jobs shall be run in their
         own  process groups. Immediately before the shell issues a prompt
         after completion of the background job, a message  reporting  the
         exit  status  of  the background job shall be written to standard
         error. If a foreground job stops, the shell shall write a message
         to  standard  error to that effect, formatted as described by the
         jobs utility. In addition, if a job  changes  status  other  than
         exiting  (for  example,  if  it  stops  for input or output or is
         stopped by a SIGSTOP signal), the shell  shall  write  a  similar
         message immediately prior to writing the next prompt. This option
         is enabled by default for interactive shells.
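
Roughly, the difference looks like this (a sketch; the pid and exact message format vary by shell). With set -b the completion notice arrives as soon as the job finishes, rather than with the next prompt:

% set -b
% sleep 2 &
[1] 12345
% # about two seconds later, asynchronously, before any new prompt:
[1]+  Done                 sleep 2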

A very fundamental feature of Unix-based systems is their method of handling process signals. I once read an enlightening article on the subject that likens this process to Douglas Adams' description of the planet NowWhat:

"In The Hitchhiker's Guide to the Galaxy, Douglas Adams mentions an extremely dull planet, inhabited by a bunch of depressed humans and a certain breed of animals with sharp teeth which communicate with the humans by biting them very hard in the thighs. This is strikingly similar to UNIX, in which the kernel communicates with processes by sending paralyzing or deadly signals to them. Processes may intercept some of the signals, and try to adapt to the situation, but most of them don't."

This is referring to kill signals.

% kill -l 
> HUP INT QUIT ILL TRAP ABRT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLT CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH POLL PWR SYS

At least for me, the above quote answered a lot of questions. For instance, I'd always considered it very strange and not at all intuitive that if I wanted to monitor a dd process I had to kill it (dd reports its progress when sent a USR1 signal on Linux). After reading that, it made sense.

I would say most of them don't try to adapt for good reason - it can be a far greater annoyance than it would be a boon to have a bunch of processes spamming your terminal with whatever information their developers thought might have been important to you.

Depending on your terminal configuration (which you can check with stty -a), CTRL+Z is likely set to forward a SIGTSTP to the current foreground process group leader, which is likely your shell, and which should also be configured by default to trap that signal and suspend your last command. Again, as the answers of @jlliagre and @Kiwy together show, there's no stopping you from tailoring this functionality to your purpose as you prefer.
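
You can check that mapping yourself, and a shell or script can intercept the signal (a sketch; the grep pattern assumes typical stty -a output):

% stty -a | grep -o 'susp = [^;]*'
susp = ^Z
% trap 'echo got TSTP' TSTP    # intercept the suspend signal instead of suspending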

SCREEN JOBS

So to take advantage of these features, it's expected that you first understand them and customize their handling to your own needs. For example, I've just found this screenrc on GitHub that includes screen key-bindings for SIGTSTP:

# hitting 'C-z C-z' will run Ctrl+Z (SIGTSTP, suspend as usual)
bind ^Z stuff ^Z

# hitting 'C-z z' will suspend the screen client
bind z suspend

That would make it a simple matter to suspend either a process running as a child of screen or the screen client itself, as you wished.

And immediately afterward:

% fg  

OR:

% bg

Would foreground or background the process as you preferred. The jobs built-in can provide you a list of these at any time. Adding the -l option will include pid details.
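
For example (the pid is made up), with a job running in the background:

% sleep 100 &
[1] 4321
% jobs -l
[1]+  4321 Running                 sleep 100 &
% fg %1
sleep 100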

mikeserv
9

This should do the trick:

bash -c "some_program with its arguments;bash"

Edit:

Here is a new attempt following your update:

bash -c "
trap 'select wtd in bash restart exit; do [ \$wtd = restart ] && break || \$wtd ; done' 2
while true; do
    some_program with its arguments
done
"
  • I need to terminate some_program from time to time

Use Ctrl+C; you'll be presented with this small menu:

1) bash
2) restart
3) exit
  • I don't want to put it in the background

That's the case: the program stays in the foreground.

  • I want to stay in the bash shell afterwards to do something else

Select the "bash" choice.

  • I want to be able to run the program again

Select the "restart" choice.

jlliagre
  • not bad, not bad.. but it would be good as well to be able to go back to that last command. Any ideas for that? Please check my other question here to see what I need it for. So maybe you'll suggest another approach – pawel7318 Apr 04 '14 at 11:21
  • This executes a subshell. I think the asker is trying to preserve script environment. – mikeserv Apr 04 '14 at 12:17
  • thank you jlliagre - a nice attempt, but not very useful for me. I usually press Ctrl+C and I expect it to just do what it's supposed to do. An additional menu is just too much. – pawel7318 Apr 04 '14 at 21:25
  • @pawel7318 the behavior of CTRL+C is just a side-effect of your terminal's default configuration to interpret it as a SIGINT. You can modify that as you will with stty. – mikeserv Apr 04 '14 at 22:15
  • @pawel7318 to further clarify, SIGINT is trapped above with 2. – mikeserv Apr 04 '14 at 22:23
  • @pawel7318 Both of my answers were answering your requirements at the time you expressed them. Ctrl+C stops "some_program" as you asked. You wrote that you want to be able to stop the program, run shell commands, then have "some_program" run again. As there should also be a way to leave both that shell and the program execution, you can't have this behavior without some sort of menu. – jlliagre Apr 04 '14 at 22:37
5

You can do it by passing your script as an initialization file:

bash --init-file foo.script

Or you can pass it on the command line:

bash --init-file <(echo "ls -al")

Note that --init-file was meant for reading system-wide initialization files like /etc/bash.bashrc, so you might want to source these in your script.
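
A minimal sketch of such a script (foo.script is hypothetical, and the init-file paths vary by distribution):

# foo.script
[ -f /etc/bash.bashrc ] && . /etc/bash.bashrc   # keep the system-wide setup
[ -f ~/.bashrc ] && . ~/.bashrc                 # and your personal setup
some_program with its arguments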

jeroent
2
$ bash --init-file <(echo 'some_program with its arguments')
$ bash --rcfile <(echo 'some_program with its arguments')

In case you can't use process substitution:

$ cat script
some_program with its arguments
$ bash --init-file script

With sh (dash, busybox):

$ ENV=script sh

Or:

$ bash -c 'some_program with its arguments; exec bash'
$ sh -c 'some_program with its arguments; exec sh'
x-yuri
1

I don't really see the point of doing this, since you go back to a shell anyway after running this program. However, you could do this:

bash -c "some_program with its arguments; bash"

This will launch an interactive bash after the program ran.

mtak
0

You could put the command in the background to keep your current bash open:

some_program with its arguments &

To return to the running program you can then use the fg command, and Ctrl+Z followed by bg to put it in the background again.
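
In practice that cycle looks something like this (job numbers are illustrative):

some_program with its arguments &   # [1] started in the background
fg %1                               # bring it to the foreground
# press Ctrl+Z to suspend it, then:
bg %1                               # let it keep running in the background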

Kiwy
  • not this time. You assume I want to run this bash... from bash, which is not the case. – pawel7318 Apr 04 '14 at 11:32
  • @pawel7318 if you explain a bit more, maybe we could also give you a better answer? – Kiwy Apr 04 '14 at 11:38
  • Please check my other question here. – pawel7318 Apr 04 '14 at 11:51
  • @pawel7318 if your questions are related, please add the link to the other question in your own question. – Kiwy Apr 04 '14 at 11:57
  • @pawel7318 bash or not, you're using a very alien shell to me if it can't handle process backgrounding. And I agree with Kiwy - that information would serve you better if it were in your question. – mikeserv Apr 04 '14 at 13:24
0

You may want to use screen to run the command. You can then reattach to the session after the command completes.

Alternatively, just run the command in the background with some_program with its arguments &. This will leave you the ability to rerun the command and get the status of the command once it is done.
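
A sketch of that screen workflow (the session name "work" is made up):

screen -S work                    # start a named session
some_program with its arguments   # run it there; detach with C-a d
screen -r work                    # reattach when you come back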

BillThor
  • I'm running it from screen exactly, but putting it in the background is not useful for me - sometimes I need to terminate the program, do something else, and run it again. And the main goal is to do this fast. – pawel7318 Apr 04 '14 at 13:31
  • @pawel7318 You can kill the background program with the command kill % or kill %1. – BillThor Apr 04 '14 at 23:10