A shell is interactive when it's not interpreting a script (possibly an inline one, with `-c`) and its stdin is a terminal (but see below for POSIX shells).
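From within a shell, a quick way to check (a minimal sketch; in POSIX-style shells `$-` contains `i` when the shell is interactive, and `[ -t 0 ]` tests whether stdin is a terminal):

```sh
case $- in
  (*i*) echo 'this shell is interactive' ;;
  (*)   echo 'this shell is non-interactive' ;;
esac
[ -t 0 ] && echo 'stdin is a terminal'
```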
In that case, what we want is for the prompt (and the echo of what you type, for shells that have their own line editor) to be displayed on the same terminal. The problem is that stdin is not guaranteed to be open in read+write mode, so outputting the prompt on stdin would not be reliable. Also, midway through the interactive session, one may want to do `exec < other-file`, which would also break the prompt display.
A sensible thing to do, and what `zsh` does, would be to determine which terminal that is (using `ttyname()`) and reopen it for read+write (on a dedicated, separate fd above 10) for user interaction. Using a file descriptor open on `/dev/tty` (the controlling terminal, which in the great majority of cases will be referring to the same terminal as the one on stdin, if stdin is a terminal) would make sense to me, but it doesn't seem any shell does it (possibly to take care of the cases where there's no controlling terminal, which can happen in some cases such as recovery shells on the console).
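A rough sketch of that approach, expressed in shell syntax rather than the C code `zsh` actually uses (here with `/dev/tty` standing in for the `ttyname()` result; `bash` and `zsh` accept fds above 9 in redirections, though POSIX only guarantees single-digit ones):

```sh
# open the terminal read+write on a dedicated fd, so later
# redirections of stdin/stdout/stderr don't affect user interaction
exec 11<> /dev/tty || exit
printf 'myprompt> ' >&11   # prompt written to the terminal
IFS= read -r cmd <&11      # user input read from the terminal
```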
However, the POSIX specification does say that a shell is only interactive if, on top of the requirements described above, stderr is also a terminal, and POSIX requires the prompts to be written on stderr.
`bash` aims at POSIX conformance, so it has to follow those requirements.
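On a Linux system you can watch that happen with `strace` (assuming it's installed): in an interactive `bash`, the prompt shows up as a `write()` on file descriptor 2:

```sh
strace -f -o bash.trace -e trace=write bash
# type a command or two, exit, then look for the prompt:
grep 'write(2,' bash.trace | head
```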
I suppose the reason is historical. The POSIX `sh` specification is based on `ksh88`, and that's how `ksh88` behaved, and the Bourne shell before it (even though `ttyname()` already existed in Unix V7, where the Bourne shell was first released).
It's common for user interaction in terminal applications (like the prompts of `rm`/`mv`/`find -ok`...) to be on stdin+stderr: stderr, when open, would be in write mode, and in terminal sessions would point to the terminal. stdout is for the command's normal output, and it's important to keep it separate from user-interaction messages so one can use redirection to store or post-process that normal output. Having those messages on stderr (a specific fd known in advance) instead of an internal fd open on `/dev/tty` makes it easier to remove them (by running the command with `2> /dev/null` for instance) or to use the command non-interactively.
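A quick demo with `rm` (the prompt goes to stderr, the answer is read from stdin, and both can be redirected independently):

```sh
touch file
# discard the prompt and supply the answer non-interactively:
echo y | rm -i file 2> /dev/null
```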
In the case of a shell, though, it is not particularly useful. Running `sh 2> /dev/null` makes the shell non-interactive (which means prompts are not displayed, but it has many other side effects). That means one can disable the prompts with `exec 2> /dev/null`, but that would have the side effect of also discarding all the commands' errors, unless you run each command with `cmd 2> something`. Emptying `PS1`, `PS2`, `PS3` and `PS4` would be much better.
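That is, in the running interactive shell:

```sh
# disable all prompts without discarding error output:
PS1= PS2= PS3= PS4=
```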
> That allows user input to come from one terminal and output to go to a different terminal, but I don't see why anyone would want to do that.
A possible reason is that it would be more foolproof:

- `/dev/tty`, as seen above, may not work in corner cases where there's no controlling terminal (see the demo after this list);
- `ttyname()` may not work when `/dev` is not properly populated or when using `chroot`s.
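The first corner case is easy to reproduce with `setsid` (from util-linux on Linux), which runs a command in a new session, with no controlling terminal:

```sh
# opening /dev/tty fails without a controlling terminal:
setsid sh -c 'cat /dev/tty'
# cat: /dev/tty: No such device or address
```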
Note that it's worse in some other shells. `csh`, `tcsh`, `rc`, `es`, `akanga` and `fish` display the prompt and the echo of what you type on stdout (though for `fish`, not the prompt if stdout is not a terminal, and `csh`, which doesn't have a line editor, doesn't output any echo (that is taken care of by the tty line discipline of the terminal device)).
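That is easy to verify where those shells are installed: with the prompt on stdout, redirecting stdout makes it disappear even though the session is still interactive (a sketch with `csh`):

```sh
# no prompt is displayed, yet commands are still read and executed:
csh -i > /dev/null
```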
> `exec > out.txt`, and then enter any commands you like with their output redirected, but still seeing the errors and the prompt. Maybe there's some remotely sensible use-case for this. – ilkkachu
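With the prompt on stderr, that does work in `bash`: after redirecting stdout, the prompt and the commands' errors keep going to the terminal (a sketch):

```sh
exec > out.txt    # commands' stdout now goes to out.txt
echo hello        # lands in out.txt; the prompt is still visible
exec > /dev/tty   # restore stdout (assuming a controlling terminal)
```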