
I prefer to launch GUI applications from a terminal window rather than by using a graphical desktop. A frequent annoyance is that often the developers haven't anticipated this type of use, so the app prints lots of useless, cryptic, or uninformative messages to stdout or stderr. Further clutter on the terminal occurs because running the program in the background, with an &, generates reports of the creation and termination of the job.

What is a workaround for these problems that will accept command line arguments and handle autocompletion?

Related: https://stackoverflow.com/questions/7131670/make-bash-alias-that-takes-parameter

3 Answers


Redirecting the standard error to /dev/null immediately is a bad idea, as it hides early error messages, and failures may be hard to diagnose. I suggest something like the following start-app zsh script:

#!/usr/bin/env zsh
# Run the command as a coprocess, with stderr merged into stdout.
coproc "$@" 2>&1
# Relay the coprocess output for at most 5 seconds and at most 10 lines,
# timestamping each line; "read -p" reads from the coprocess in zsh.
quit=$(($(date +%s)+5))
nlines=0
while [[ $((nlines++)) -lt 10 ]] && read -p -t 5 line
do
  [[ $(date +%s) -ge $quit ]] && break
  printf "[%s] %s\n" "$(date +%T)" "$line"
done &

Just run it with: start-app your_command argument ...

This script will output at most 10 lines of messages, for at most 5 seconds. Note, however, that if the application crashes immediately (e.g. due to a segmentation fault), you won't see any error message. Of course, you can modify this script in various ways to do what you want...
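For bash users, the same idea can be sketched without a zsh coprocess, following the simpler pipeline variant the author gives in the comments below (a sketch, not the original script; it assumes a coreutils-style date):

```shell
#!/usr/bin/env bash
# start-app, bash variant: relay the command's merged stdout/stderr,
# timestamping each line, for at most 5 seconds, and cap the total
# number of relayed lines at 10 with head. Everything runs in the
# background so the terminal is immediately free.
"$@" 2>&1 | {
  quit=$(($(date +%s)+5))
  while read -r line && [ "$(date +%s)" -lt "$quit" ]; do
    printf '[%s] %s\n' "$(date +%T)" "$line"
  done
} | head -n 10 &
```

It is invoked the same way: start-app your_command argument ...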

Note: To make completions work with start-app in zsh, it suffices to do:

compdef _precommand start-app

and in bash:

complete -F _command start-app

(copied from the one for exec and time in /usr/share/bash-completion/bash_completion).

vinc17
    Cute idea, +1. But I disagree that it's a bad idea in general to redirect stderr from a GUI app. 99% of all users will invoke it from a graphical desktop, so they would never see anything that went to stderr. The software is designed to report errors through the GUI. What you see on stdout and stderr is typically debugging messages that the developers didn't bother to take out because they didn't think anyone would see them. –  Aug 18 '14 at 00:56
  • @BenCrowell I agree that GUI apps should report errors via the GUI, but in some cases, it may happen that the application fails before it starts the GUI. This occurs in particular when the application is invoked via a wrapper script that parses the arguments (in general, this is not a problem for users who start the app from the desktop since in this case, the arguments should be correct). – vinc17 Aug 18 '14 at 01:08
  • @BenCrowell I also think of the case where $DISPLAY is not set (e.g. if the user forgot a -X for ssh) or an X authorization problem like here: http://unix.stackexchange.com/questions/108679/x-client-forwarded-over-ssh-cannot-open-display-localhost11-0 – vinc17 Aug 18 '14 at 01:18
  • @mikeserv I think that various users may be interested in this question (not just the OP), and they may use either bash or zsh. I've just added a note for completions in zsh and bash. As you can see, this is simple. – vinc17 Aug 18 '14 at 14:11
  • @mikeserv Note that there's a test on the date. Simpler and more portable, but less flexible if one wants to add features: "$@" 2>&1 | { quit=$(($(date +%s)+5)); while read line && [ $(date +%s) -lt $quit ]; do printf "[%s] %s\n" "$(date +%T)" "$line"; done; } | head -n 10 & (the most important point was the idea, not the actual implementation). – vinc17 Aug 18 '14 at 14:54
  • @mikeserv "$@" works with bash, dash, ksh, and zsh, at least. And $DISPLAY may be set but non-working. And some X apps report errors to stderr, e.g. xterm. – vinc17 Aug 18 '14 at 15:49
  • @mikeserv Being able to see an initial error is important. If I type evince --presntation, I want to see the error instead of waiting and wondering why nothing shows up. I don't see the point in hiding such errors. – vinc17 Aug 18 '14 at 16:01
  • But my point is that they're hidden by default - it's the intended behavior. At least, it is kind of. Hang on... I'm updating my answer to show it... – mikeserv Aug 18 '14 at 16:02
  • @mikeserv It's true that error messages are sometimes hidden depending on how an application is started. But that is not a reason to hide them when one can avoid it. FYI, such a problem just happened to me, showing that hiding all error messages is bad: after a recent Debian upgrade, some process (started in a similar way, in the background from a terminal) was no longer working. The cause was an immediate segfault, so it didn't have time to signal the error in any form. I could find out only by starting the process in the foreground, where I got a report from the shell. – vinc17 Aug 20 '14 at 15:35
  • Why didn't you look at the console - was it an X app? – mikeserv Aug 23 '14 at 04:30
  • @mikeserv No messages in the console (for segfaults, there are never messages in it). It wasn't an X app, but it doesn't matter: an X app may crash very early in its init phase (like here), or at least before X operations are involved. – vinc17 Aug 23 '14 at 08:12
  • @mikeserv Yes, the early errors are reported to the tty... unless they are hidden in some way. For instance, a crash can only be reported by the shell that started the app. But it would be rather complex to support that here without any drawback. Note: I've just made 2 corrections in my script to avoid leaving zsh processes. – vinc17 Aug 23 '14 at 10:21
  • You're right about that. Like I said in my own answer - I don't even know why I can't write to XDG_VTNR anymore. Certainly it is not a bad thing to take a firm hand in managing your environment. And the least you can do is learn, right? Maybe sometimes I just argue for the sake of arguing. My apologies, and my thanks for humoring me. About the edit - I'm happy you did, but I can't upvote it twice, you know. – mikeserv Aug 23 '14 at 10:24

This answer is for bash. As an example, here's what I do in my .bashrc to make a convenience command ev to start up the PDF viewer Evince.

ev() { (evince "$1" 1>/dev/null 2>/dev/null &) }
complete -f -o default -X '!*.pdf' ev

The first line defines a function ev. The name of a function will be recognized when you use it on the command line like this:

ev foo.pdf

(This is a different mechanism than aliases, and has lower priority.) Evince's output to stdout and stderr is sent to the bit bucket (/dev/null). The ampersand puts the job in the background. Surrounding the command in parentheses causes it to be run in a subshell, so that the interactive shell doesn't print messages about the creation of the background job or its completion.
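The suppression effect of the parentheses can be checked with a throwaway wrapper built on the same pattern (the name quiet is hypothetical, chosen for this sketch):

```shell
# Hypothetical wrapper following the same pattern as ev: run any
# command silently in the background, inside a subshell, so that an
# interactive shell prints no "[1] 12345" / "[1]+ Done" job notices.
quiet() { ( "$@" >/dev/null 2>&1 & ) ; }

quiet sleep 5   # returns to the prompt immediately, printing nothing
```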

The second line from my .bashrc uses bash's complete builtin to tell bash that the argument of the ev command is expected to be a file with the extension .pdf. This means that if I also have files foo.tex, foo.aux, etc., sitting in my directory, I can type ev foo and hit the tab key, and bash will know to complete the filename as foo.pdf.
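The -X filter can be previewed outside of interactive completion with compgen, bash's completion-inspection builtin (the file names here are just examples):

```shell
# compgen -f generates filename completions for the prefix "foo";
# -X '!*.pdf' then removes every name NOT matching *.pdf, exactly as
# it would during interactive completion of the ev command.
cd "$(mktemp -d)"
touch foo.pdf foo.tex foo.aux
compgen -f -X '!*.pdf' -- foo
# prints: foo.pdf
```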


Another possibility is to use command to demote exec from a special builtin to a plain old builtin, like this:

alias shh='command exec >/dev/null 2>&1'

So now you can do:

(shh; call some process &)

I've just noticed that command does not work this way in zsh (though it does in most other shells), but where it doesn't work you can do this instead:

alias shh='eval "exec >/dev/null 2>&1"'

...which should work everywhere.
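A quick way to convince yourself that the demoted exec still performs a persistent redirection (the log file name is mine; shown under bash, since the zsh caveat above applies):

```shell
# After "command exec", the redirection still applies to the rest of
# the (sub)shell, so the later echo lands in the file, not on the tty.
bash -c 'command exec >./shh.log 2>&1; echo can anyone hear'
cat ./shh.log
# prints: can anyone hear
```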

In fact, you might even do:

alias shh='command exec >"${O:-/dev/null}" 2>&1'

So you could do:

O=./logfile; (shh;echo can anyone hear &)
O=; (shh; echo this\? &)
cat ./logfile

OUTPUT

can anyone hear

Following up a comment discussion with @vinc17, it's worth noting that almost all of a GUI app's console output is generally intended for X's tty - its console. When you run an X app from an X .desktop file, the output it generates is routed to X's virtual terminal - whichever tty you launched X from in the first place. I can address this tty number with $XDG_VTNR.

Strangely though - and maybe because I just started using startx - I can no longer seem to just write to /dev/tty$XDG_VTNR. This may also (as I think is more likely) have something to do with the very recent and drastic change implemented with Xorg v1.16 that allows it to run under a systemd user session rather than requiring root privileges.

Still, I can do:

alias gui='command exec >/dev/tty$((1+$XDG_VTNR)) 2>&1'

(gui; some x app &)

Now all of some x app's console output is being routed to /dev/tty$((1+$XDG_VTNR)) rather than my xterm's pty. I can get the last page of this at any time like:

fmt </dev/vcs$((1+$XDG_VTNR))

It is probably best practice to dedicate some virtual terminal to log output anyway. /dev/console is generally already reserved for this, though you may prefer not to do the chown that is likely required before you can blithely write to it. You may have some function that enables you to do a printk - which is basically printing to /dev/console - and so could use it that way, I suppose.

Another way to do this would be to dedicate a pty to such purposes. You could, for instance, keep an xterm window open, save the output of tty when run from there in an environment variable, and use that value as the destination for gui's output. In that way all of the logs would be routed to a separate log window, that you could then scroll through if you liked.
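Here is how that log-window idea might look in practice; GUILOG and gui (reusing the alias pattern from above) are names I've made up for this sketch:

```shell
# In the xterm you want to act as the log window, record its pty once:
#   export GUILOG="$(tty)"          # e.g. /dev/pts/3
# In any other shell, aim the alias at that pty instead of a vt:
alias gui='command exec >"${GUILOG:-/dev/null}" 2>&1'
# (gui; some_x_app &)               # chatter shows up in the log xterm
```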

I once wrote an answer about how a similar thing could be done with bash history, if you're interested.

mikeserv
    I suggest that you remove your remark on the output of echo $? as it adds useless information and it is based on a bug in bash, which I've just reported here: http://lists.gnu.org/archive/html/bug-bash/2014-08/msg00081.html and in the Debian BTS: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=758969 – vinc17 Aug 23 '14 at 11:27
  • @vinc17 Yup - I guess I must have done that in bash, which is weird, because I never use that shell. Guess I just did for this answer. – mikeserv Aug 23 '14 at 11:42