
I have a bash script that wants to preserve the original /dev/stdout location before replacing file descriptor 1 with another location.

So, naturally, I wrote something like

old_stdout=$(readlink -f /dev/stdout)

And it didn't work. I quickly understood what the problem was:

test@ubuntu:~$ echo $(readlink -f /dev/stdout)
/proc/5175/fd/pipe:[31764]
test@ubuntu:~$ readlink -f /dev/stdout
/dev/pts/18

Obviously, $() runs in a subshell, which is piped to the parent shell.

So the question is: is there a reliable (scoped to portability between Linux distributions) way to save /dev/stdout location as a string in a bash script?

Rahul
  • This sounds a little bit like an XY problem. What is the underlying issue? – Kusalananda Jul 29 '16 at 17:09
  • Underlying issue is a certain installation script which runs in two modes - silent, in which it logs all output to file and verbose, in which it not only logs to file, but also prints everything to terminal. But in both modes the script wants to interact with user, i.e. print to terminal and read user response. So I thought that saving /dev/stdout would solve the problem with printing messages in silent mode. Alternative is redirect every other action that produces output, and there's quite a number of them. Roughly 100 times more than user interaction messages. – alexey.e.egorov Jul 30 '16 at 12:47
  • The standard way of interacting with the user is to print to stderr. This is, for example, why prompts are going to stderr by default. – Kusalananda Jul 30 '16 at 12:49
  • Unfortunately, stderr must also be redirected and saved, since the script calls a number of external programs, and all possible error messages are to be collected and logged. – alexey.e.egorov Jul 30 '16 at 12:58

3 Answers


To save a file descriptor, you duplicate it on another fd. Saving a path to the corresponding file is not enough, you'd need to save the opening mode, the opening flags, the current position within the file and so on. And of course, for anonymous pipes, or sockets, that wouldn't work as those have no path. What you want to save is the open file description that the fd refers to, and duplicating an fd is actually returning a new fd to the same open file description.
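The difference can be seen with a short sketch: re-opening a file by its path creates a new, independent open file description, so none of the state held by the original fd (position, truncation behaviour, flags) carries over. The path /tmp/demo.$$ is just an assumption for illustration:

```shell
exec 3> "/tmp/demo.$$"     # fd 3: new open file description, file truncated
echo one >&3               # fd 3's position advances past "one"
exec 4> "/tmp/demo.$$"     # same path, but a NEW open file description:
                           # position 0, and > truncates the file again
echo TWO >&4               # the write through fd 3 is gone
cat "/tmp/demo.$$"         # → TWO
exec 3>&- 4>&-             # close both fds
```

This is why saving the string returned by readlink is not a substitute for duplicating the descriptor itself.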

To duplicate a file descriptor onto another, with a Bourne-like shell, the syntax is:

exec 3>&1

Above, fd 1 is duplicated onto fd 3.

Whatever fd 3 was already open to before would be closed, but note that fds 3 to 9 (usually more, up to 99 with yash) are reserved for that purpose (and have no special meaning, contrary to 0, 1, or 2); the shell knows not to use them for its own internal business. The only reason fd 3 would have been open beforehand is because you did it in the script [1], or it was leaked by the caller.

Then, you can change stdout to something else:

exec > /dev/null

And later, to restore stdout:

exec >&3 3>&-

(3>&- closes that file descriptor, which we no longer need).
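Put together, here is a minimal sketch of the whole save/redirect/restore cycle (the log path /tmp/out.log is an assumption):

```shell
exec 3>&1                  # save the current stdout on fd 3
exec > /tmp/out.log        # redirect stdout to a log file
echo "this goes to the log"
echo "this goes to the original stdout" >&3   # fd 3 still refers to it
exec >&3 3>&-              # restore stdout and close fd 3
echo "stdout is restored"
```

Note that even while stdout is redirected, writing through fd 3 still reaches the original destination, which is exactly the question's use case.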

Now, the problem with that is that, except in ksh, every command you run after that exec 3>&1 will inherit that fd 3. That's an fd leak. Generally not a big deal, but it can cause problems.

ksh sets the close-on-exec flag on those fds (for fds over 2), but other shells do not, and they have no way to set that flag manually.

The workaround for other shells is to close fd 3 for each and every command, like:

exec 3>&1

exec > file.log

ls 3>&-
uname 3>&-

exec >&3 3>&-

Cumbersome. Here, the best way would be to not use exec at all, but redirect command groups:

{
  ls
  uname
} > file.log

There, it's the shell that takes care to save stdout and restore it afterwards (and it does do it internally by duplicating it on a fd (above 9, above 99 for yash) with the close-on-exec flag set).

Note 1

Now, the management of those fds 3 to 9 can be cumbersome and problematic if you use them extensively or in functions, especially if your script uses some third-party code that may in turn use those fds.

Some shells (zsh, bash, ksh93, all added the feature (suggested by Oliver Kiddle of zsh) around the same time in 2005 after it was discussed among their developers) have an alternative syntax to assign the first free fd above 10 instead which helps in this case:

myfunction() {
  local fd
  exec {fd}>&1
  # stdout was duplicated onto a new fd above 10, whose actual value
  # is stored in the fd variable
  ...
  # it should even be safe to re-enter the function here
  ...
  exec >&"$fd" {fd}>&-
}
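For instance, a complete, runnable variant of the sketch above (the function name and log path are assumptions; this requires bash 4.1+, zsh, or ksh93 for the {fd} syntax):

```shell
log_quietly() {
  local fd
  exec {fd}>&1             # duplicate stdout onto the first free fd >= 10
  exec > /tmp/quiet.log    # redirect stdout to the log
  echo "logged line"                 # goes to the log
  echo "terminal line" >&"$fd"       # goes to the saved stdout
  exec >&"$fd" {fd}>&-     # restore stdout and close the saved fd
}

log_quietly
```

Because the fd number is chosen by the shell and stored in a local variable, nested or re-entrant calls don't clobber each other's saved descriptors.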
  • Also, your code is wrong in the sense that fd 3 might already be taken, as happens when a script is run from a rc.local service, e.g. So you really should have used something like exec {FD}>&1. But this is supported only in bash 4, which is really sad. So this isn't really portable. – alexey.e.egorov Jul 30 '16 at 01:34
  • @alexey.e.egorov, see edit. – Stéphane Chazelas Jul 30 '16 at 06:54
  • Bash 3.* doesn't support this feature, and this version is used in CentOS 5, which is still supported and still used. And finding a free descriptor and then eval "exec $i>&1" is a thing I would like to avoid, due to its cumbersomeness. Can I really rely on fds above 9 being free then? – alexey.e.egorov Jul 30 '16 at 12:52
  • @alexey.e.egorov, no, you're looking at it backward. fds 3 to 9 are free to use (and it's up to you to manage them as you want) and are intended for that purpose. fds above 9 may be used by the shell internally and closing them could have nasty consequences. Most shells won't let you use them. bash will let you shoot yourself in the foot. – Stéphane Chazelas Jul 30 '16 at 13:25
  • Oh, understood. Hmmm... It seems quite strange. I have only encountered the opposite case... Here, for example: http://unix.stackexchange.com/questions/295883. As you can see (in edit 2), rc.local leaves a bunch of open fds with lower numbers to the script. Is that a bug, in terms of a requirements violation? – alexey.e.egorov Jul 30 '16 at 14:06
  • @alexey.e.egorov, if upon start your script has some fds in (3..9) open, that's because your caller forgot to close them or set the close-on-exec flag on them. That's what I call an fd leak. Now, maybe the caller intended to pass those fds to you, so you can read and/or write data from/to them, but then you'd know about it. If you don't know about them, then you don't care, and you can close them freely (note that this just closes your script's process fd, not that of your caller). – Stéphane Chazelas Jul 30 '16 at 14:40
  • Could you tell me where I can read more about such fd guarantees and caller responsibilities? I searched but failed to find Linux-specific information. Thanks! – alexey.e.egorov Aug 01 '16 at 12:11
  • Any reason you didn't do exec >&"$fd"- at the end to move the FD? – Tom Hale Jul 01 '18 at 09:53

As you can see, bash scripting is not like a regular programming language where you can assign file descriptors.

The simplest solution is to use a sub-shell to run what you want redirected, so that processing can revert to the top-level shell, which has its standard I/O intact.

An alternate solution would be to use tty to identify the TTY device and control the I/O in your script. For example:

dev=$(tty)

and then you can..

echo message > "$dev"
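As a sketch of the installer scenario from the question (the log path and messages are assumptions, and tty(1) fails when there is no controlling terminal, so a /dev/null fallback is added):

```shell
dev=$(tty) || dev=/dev/null       # remember the terminal device, if any
exec > /tmp/install.log 2>&1      # silent mode: log everything, stderr included
echo "step 1: copying files"      # goes to the log
printf 'Continue? [y/n] ' > "$dev"   # still reaches the user's terminal
```

The ordinary output all lands in the log, while anything explicitly written to "$dev" bypasses the redirection.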

$$ gives you the PID of the current process; in the case of an interactive shell or a script, that is the PID of the relevant shell.

So you can use:

readlink -f /proc/$$/fd/1

Example:

% readlink -f /proc/$$/fd/1
/dev/pts/33

% var=$(readlink -f /proc/$$/fd/1)

% echo $var                       
/dev/pts/33
heemayl
  • While it's functional, relying on a specific /proc structure causes portability issues, as does using /dev/stdout as mentioned in the question. – Julie Pelletier Jul 29 '16 at 17:07
  • @JuliePelletier Relying on a specific /proc structure? It would work on any Linux that has procfs. – heemayl Jul 29 '16 at 17:08
  • Right, so we can generalize for Linux, as procfs is almost always present, but we often see portability questions, and a good development methodology includes considering portability to other systems. bash can run on a multitude of operating systems. – Julie Pelletier Jul 29 '16 at 17:16
  • @JuliePelletier The OP clearly states that the solution is to be "scoped to portability between Linux distributions". – clapas Oct 27 '21 at 10:17