16

You hear it a lot: eval is evil, whether in the shell/POSIX world or in other languages like Python.

But I'm wondering: is it actually useless, or are there some arcane, undocumented, interesting, or just plain useful use-cases for it?

I'd prefer a sh/bash-centric answer, but answers about other shells are fine too.

PS: I'm well aware why eval is considered evil.

  • 5
    In my experience, lazy devs start concatenating strings and then call eval on them - I can't come up with a good reason off the top of my head why you would want to do this, though. – Panki Mar 23 '21 at 15:07
  • Yeah, I know; but that wouldn't count as a "useful use-case" to me, unless it takes less code to write, is somehow faster, or has some other advantage @Panki – Nordine Lotfi Mar 23 '21 at 15:11
  • 1
    The often-linked-to BashFAQ page (some answers to https://unix.stackexchange.com/q/23111/315749 point there too) includes an "Examples of good use of eval" section. – fra-san Mar 23 '21 at 15:23
  • 5
    eval is only evil if not called properly, but not much more so than sh, [, [[..]], sed, printf..., all those commands that can introduce ACE vulnerabilities when not used properly. – Stéphane Chazelas Mar 23 '21 at 16:48
  • 1
    I guess that echo "something$(somecommandoutput)" | bash could be considered evil too? (since you mentioned sh,sed etc) @StéphaneChazelas – Nordine Lotfi Mar 23 '21 at 16:49
  • 1
    Yes, any interpreter of any not-too-basic language including sh/bash/gsed/awk/perl... can interpret code that can run arbitrary command. eval is just another way to invoke your shell's interpreter. It's less obvious for commands like [, read or printf which could be seen as more evil for that reason. – Stéphane Chazelas Mar 23 '21 at 16:53
  • 2
    A search here will give you some examples. For instance a search of eval in my own posts returns 343 results ATM. – Stéphane Chazelas Mar 23 '21 at 16:58
  • 1
    The general form of "eval is evil" is because the program running in the interpreter can invoke the interpreter; therefore the program cannot be compiled. Mechanized processing cannot determine what the code does. – Joshua Mar 24 '21 at 19:06
  • 1
    I recently reasoned with my students (not demonstrated!) how you could program with a Lego EV3 robot running pybricks. You need a colour sensor, some black and white construction paper, and a non-colour background. Have a loop to check colour sensor for black or white in blocks of 8, then convert from bin to dec to ASCII and concatenate to a string. Use 00000000 to terminate input. Then eval the string. It takes 12 * 8 = 96 colour sensings to enter print(':)'). Oh, wait... you said "useful"... – Luke Sawczak Mar 25 '21 at 17:54
  • 1
  • Well, I'll admit the way I used the term "useful" is a bit broad but, in your case, it could be considered that, given this is in the context of teaching... Besides that, this probably wouldn't apply here since it uses Python (and this post is more or less shell-centric) @LukeSawczak. I appreciate that you talked about your experience though! It is an interesting use of eval. – Nordine Lotfi Mar 25 '21 at 17:59

10 Answers

12

I know of two ... common ... use cases for eval:

  1. Argument processing with getopt:

    [T]his implementation can generate quoted output which must once again be interpreted by the shell (usually by using the eval command).

  2. Setting up an SSH agent:

    [T]he agent prints the needed shell commands (either sh(1) or csh(1) syntax can be generated) which can be evaluated in the calling shell, eg eval `ssh-agent -s` for Bourne-type shells such as sh(1) or ksh(1) and eval `ssh-agent -c` for csh(1) and derivatives.

Both uses might have alternatives, but I wouldn't bat an eyelid at seeing either of them.
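Both patterns boil down to the same idea: a helper prints properly quoted shell code, and the caller evals it so the assignments land in the current shell (a bare command substitution can't set variables in the caller). A minimal self-contained sketch — print_agent_env is an invented stand-in for ssh-agent-style output:

```shell
# Hypothetical helper standing in for ssh-agent -s: it prints shell code.
print_agent_env() {
    # Single quotes let the value carry whitespace safely through eval.
    printf "MY_AUTH_SOCK='/tmp/agent sock.123'; export MY_AUTH_SOCK;\n"
    printf "MY_AGENT_PID=4242; export MY_AGENT_PID;\n"
}

# A plain $(print_agent_env) would just word-split the output into arguments;
# eval re-parses it as code, so the assignments happen in *this* shell.
eval "$(print_agent_env)"
echo "$MY_AUTH_SOCK"   # -> /tmp/agent sock.123
```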

muru
  • 72,889
  • Didn't know about the first use-case. Although, for the second one, one could just use ssh-agent -s in backticks instead of eval... so the second one wouldn't really be useful if it already uses backticks, thus not needing eval AFAIK – Nordine Lotfi Mar 23 '21 at 15:27
  • 1
    That depends - eval should still result in the output being parsed like a command line, without it you're relying on word splitting (which might be disabled, or you might have a different IFS set for whatever reason, etc.). Anyway, as I said, both might have alternatives - but they're good enough uses that the respective manpages mention them. – muru Mar 23 '21 at 15:32
  • Didn't think about the possibility of word splitting; guess that's useful in those cases, yeah :) – Nordine Lotfi Mar 23 '21 at 16:02
  • 6
    @NordineLotfi, ssh-agent prints stuff like SSH_AGENT_PID=32442; export SSH_AGENT_PID; echo Agent pid 32442;, and the variable assignments don't work without the eval. Try e.g. $(echo FOO=123). Though if it did output export FOO=123 BAR=456, that would work, but it couldn't then also have the shell run the echo (it can't print that itself, since output is caught by the command substitution). Also, without eval, it couldn't use quotes to have values with whitespace, not that it usually needs to do it. – ilkkachu Mar 23 '21 at 18:06
  • 2
    Note that you should use eval "$(ssh-agent -s)" if IFS can be an issue. Also, I think getopts exists as the eval-free alternative to getopt. – ilkkachu Mar 23 '21 at 18:12
  • I see, didn't think of that either :D @ilkkachu – Nordine Lotfi Mar 23 '21 at 18:16
  • 1
    Long before ssh-agent there was tset, which was used similarly with eval. – Barmar Mar 24 '21 at 15:15
  • getopt(1) cannot deal with spaces in arguments and has thus been obsolete for 35 years. So why do you mention examples with historic and dead software? – schily May 24 '21 at 05:55
9

To get the last argument in a POSIX shell, without extensions like the slicing of Bash & Co. (i.e. ${@: -1}), one can use

eval "v=\${$#}" 

$# is not subject to nasty tricks, since it is internal to the shell and can only contain the number of arguments to the script/function.

I did not come up with that; it was Stéphane Chazelas in a comment. It is also mentioned in this answer to "Why and when should eval use be avoided?".
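A quick demonstration (the argument values here are illustrative):

```shell
# POSIX sh: fetch the last positional parameter via eval
set -- one two "three four"
eval "v=\${$#}"    # with three arguments, eval sees: v=${3}
echo "$v"          # -> three four
```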

Quasímodo
  • 18,865
6

With bash, because Brace Expansion happens before Shell Parameter Expansion:

$ char="F"
$ range=( {A.."$char"} )
$ declare -p range
declare -a range=([0]="{A..F}")
$ eval "range=( {A..$char} )"
$ declare -p range
declare -a range=([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]="F")
glenn jackman
  • 85,964
6

Here are some examples of my own "real-world" use-cases, where I couldn't come up with better alternatives and eval just gets the job done neatly.


A "conditional-expansion" use-case. Here I want to use a redirection only if $rmsg_pfx has some value:

eval 'printf -- %s%s\\n "$rmsg_pfx" "$line" '"${rmsg_pfx:+>&2}"

I couldn't do it without eval, because the >&2 bit would otherwise be expanded as an argument to printf instead of acting as a redirection.

I could instead duplicate that line to account for $rmsg_pfx being empty or not, but that would be... well... code duplication.
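A runnable sketch of that conditional-redirection idea, wrapped in a function for demonstration (print_line and its arguments are invented here, not from the original script):

```shell
print_line() {
    rmsg_pfx=$1 line=$2
    # The trailing ${rmsg_pfx:+>&2} only becomes a redirection after eval
    # re-parses the assembled command line.
    eval 'printf -- %s%s\\n "$rmsg_pfx" "$line" '"${rmsg_pfx:+>&2}"
}

print_line ""       "plain message"   # no prefix: goes to stdout
print_line "WARN: " "problem found"   # prefix set: goes to stderr
```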


Speaking of redirections, and as an "indirection" use-case, I like relying on the {varname}>&... redirection syntax, which I emulate POSIXly as below:

# equivalent of bash/ksh `exec {rses_fd0}>&- {rses_fd1}<&-` redirection syntax
eval "exec $rses_fd0>&- $rses_fd1<&-"

The above closes fds; likewise, I do an analogous indirection to emulate opening them. Obviously $rses_fd0 and $rses_fd1 are the script's internal variables, completely under its control from start to end.
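The opening side of that emulation might look like this (a sketch; the fd number and target file are illustrative). POSIX sh only accepts a literal digit before > in n> redirections, hence the eval:

```shell
fd=9                           # fd number held in a variable
tmp=$(mktemp)

eval "exec $fd>\"\$tmp\""      # eval sees: exec 9>"$tmp"  (open)
eval "printf 'hello\n' >&$fd"  # eval sees: printf 'hello\n' >&9
eval "exec $fd>&-"             # close, as in the snippet above

cat "$tmp"                     # -> hello
```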


Sometimes I have had to use eval simply to "protect" snippets of shell code meant to target specific shells, while not disrupting others.

For instance, the piece of code below is from a script that is meant to be portable (POSIX) while also embedding a few shell-specific optimizations:

sochars='][ (){}:,!'"'\\"
# NOTE: wrapped in an eval to protect it from dash which croaks over the regex
eval 'o=; while [[ "$s" =~ ([^$sochars]*)([$sochars])(.*) ]]; do
    ...
done'

dash simply chokes on unknown (but direct) syntax at the lexical level, even when such syntax never lies on the actual code-path.


Another "protection" use-case, in a different sense. Sometimes I just can't be bothered to invent "unlikely" names for save-and-restore purposes, such as in the case below, where I just want $r's value to be preserved:

# wrapped in eval just to make sure that $r is not overwritten by (the call chain of) coolf
eval '
    coolf "$tmp" || return "$lerrno"'"
    return $r
"

I actually often use the trick above to preserve exit statuses from loop-suites while also doing cleanup operations, as in:

    done <&3
    eval "unset ret vals; exec 3<&-; return $?"
}

Or in cases similar to the above, as a "deferred execution":

    done
    # return boolean set by loop while also unsetting it
    eval "unset ok; ${ok:-false}"
}

Note that one implied intention of both snippets above is to not leave "artifacts" behind after the function's execution, especially when the function is meant to be run interactively. For the latter case I could instead do:

[ "${ok:-false}" = false ] && { unset ok; return 1; } || { unset ok; return 0; }

but that looks quite rough to me.


Finally, I have had occasional use-cases where I wanted or needed to amend or extend a function slightly, on a per-call basis, perhaps for small behavioral changes or to support some hooking from the caller. Like a callback, but in an "inline" fashion, which seemed much less cumbersome, particularly when the hook snippet needs access to the function's own $@ arguments. Naturally such snippets, fed through variables to be subsequently eval-ed by the function, are either entirely static/handmade themselves or heavily pre-controlled/sanitized.
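A hypothetical sketch of such an inline hook (all names here are invented; the hook string is static, caller-controlled code, per the sanitization caveat above):

```shell
process() {
    # Run an optional caller-supplied snippet; ':' is a no-op default.
    # Because it runs inside this function, the snippet sees our own "$@".
    eval "${process_hook:-:}"
    printf 'processed: %s\n' "$*"
}

process a b                                  # no hook set
process_hook='echo "hook saw $# args: $*"'
process a b                                  # hook runs inside process
```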

LL3
  • 5,418
  • 2
    +1; Half of those make me wish all the shells had local variables with static scoping. The first one reminds me of this: https://unix.stackexchange.com/questions/38310/conditional-pipeline BTW, I can't get Dash to error with that invalid expansion, dash -c 'if false; then unset "${!trap_@}"; fi' runs fine. Doesn't mean it wouldn't burn sooner with some other syntax extension. – ilkkachu Mar 24 '21 at 13:51
  • 2
    @ilkkachu Thank you for pointing that out. Yes, the syntax element has to be lexical or else the error is not triggered. Admittedly I've become accustomed to wrapping shell-specific syntaxes in eval as a safety default even when it wouldn't be required. I've reported a better example now, valid as of dash v0.5.11.3 – LL3 Mar 24 '21 at 14:45
4

One of the real-life usages of eval that I have come across is in fluxbox.startfluxbox.dbus.diff.gz on Slackware. It looks like this:

# Start DBUS session bus:
if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
   eval $(dbus-launch --sh-syntax --exit-with-session)
fi

And even though this usage of eval hasn't been tested by millions (Slackware is not very common), it does the job. Still, I would do my best to avoid eval in my shell scripts. If I had a feeling I needed it, for example to implement arrays or perform variable indirection, I'd switch to Bash, and if I still felt I needed it, I'd rethink the script design or switch to a completely different language.
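For reference, the Bash facilities alluded to: indirect expansion and (in bash 4.3+) namerefs cover many variable-indirection needs without eval. A minimal sketch, run via bash -c so it works regardless of the invoking shell:

```shell
bash -c '
    greeting="hello"
    var=greeting
    echo "${!var}"         # indirect expansion: prints the value of $greeting
    declare -n ref=$var    # nameref (bash 4.3+): ref now aliases greeting
    ref="hi there"         # assignment goes through the reference
    echo "$greeting"       # -> hi there
'
```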

3

I've only ever used it, years ago, to make a goto function.

When I moved from Windows to Linux on my desktop, I had a lot of pre-existing .BAT and .CMD files to convert, and I wasn't going to rewrite their logic, so I found a way to do a goto in bash: the goto function runs sed on the script itself to strip out any parts that shouldn't run, and then evals the result.

The below source is slightly modified from the original to make it more robust:

#!/bin/bash

# BAT / CMD goto function

function goto {
    label=$1
    cmd=$(sed -n "/^:[[:blank:]][[:blank:]]*${label}/{:a;n;p;ba};" $0 | grep -v ':$')
    eval "$cmd"
    exit
}

apt update

Just for the heck of it, how to create a variable to jump to:

start=${1:-"start"}
goto "$start"

: start
goto_msg="Starting..."
echo $goto_msg

Just jump to the label:

goto "continue"

: skipped
goto_msg="This is skipped!"
echo "$goto_msg"

: continue
goto_msg="Ended..."
echo "$goto_msg"

The following doesn't jump to apt update, whereas the original does:

goto update

and I do not feel guilty at all, as Linus Torvalds famously said:

From: Linus Torvalds
Subject: Re: any chance of 2.6.0-test*?
Date: Sun, 12 Jan 2003 11:38:35 -0800 (PST)

I think goto's are fine, and they are often more readable than large amounts of indentation. That's especially true if the code flow isn't actually naturally indented (in this case it is, so I don't think using goto is in any way clearer than not, but in general goto's can be quite good for readability).

Of course, in stupid languages like Pascal, where labels cannot be descriptive, goto's can be bad. But that's not the fault of the goto, that's the braindamage of the language designer.

Original source for the code (modified to make it less error prone)
The source for the quote

It's been a while since I've needed a goto though, as most of these old scripts have been revamped completely by now.

Fabby
  • 5,384
  • 2
    This is quirky... I like it! Always wanted to do this kind of batch <-> bash tinkering, but never did... guess I can change that now – Nordine Lotfi Mar 24 '21 at 00:04
  • 3
    goto has its uses in C code. (Some of those can be avoided by splitting code to smaller functions and using return. Some can't.) But you know, C is a language where goto actually exists in the language definition and is supported by the compiler. :) But emulating it by runtime source code tricks is... well, a bit insane... – ilkkachu Mar 24 '21 at 08:55
  • 1
    @ilkkachu That's why I put the disclaimers at the end. I haven't used any goto in bash in the last 8 years, but if you're migrating from Window$ to Linux and you do have a lot of scripts to convert in a short time, it can be useful. In the meantime new scripts get developed in bash, so it's not an issue any more.. – Fabby Mar 25 '21 at 12:12
  • 2
    Why grep -v ':$'? – Nick Matteo Mar 25 '21 at 14:44
  • 2
    To answer my own comment after following the link in the answer: it was because an older version used "label:", which needed to be stripped out since it would attempt to execute the program label:, but the new version with initial : can be left in (since : is conveniently the null command), so the grep is superfluous. – Nick Matteo Mar 25 '21 at 18:13
  • @NickMatteo I haven't had a look at the code in a while. I'll review it this week-end and update. – Fabby Mar 26 '21 at 10:44
3

Apart from what others have said, eval can also be used to parse the output of a command more easily, given that the command can output an eval-able string. For example, instead of trying to parse the output of

xdotool getwindowfocus getwindowgeometry

you can pass xdotool the --shell option and just eval the output.

eval "$( xdotool getwindowfocus getwindowgeometry --shell )"

In this case xdotool will output something similar to

WINDOW=46137350
X=1290
Y=559
WIDTH=1258
HEIGHT=509
SCREEN=0

and evaling this output will declare these variables in your shell.

XPhyro
  • 146
  • 1
    stty is another command that produces shell-executable output - useful when your script needs to restore terminal state as it found it. – Toby Speight Mar 24 '21 at 16:54
  • 3
    @TobySpeight, hmm, does it? I've only seen stty -g which produces a string readable by stty itself: save_state=$(stty -g); ...; stty "$save_state" – ilkkachu Mar 24 '21 at 17:55
  • 2
    @ikkachu - you're right; I must have been mixing up with something else. I think it was tty-related, and probably in the 1990s when I was mostly using HP-UX - possibly some ttysize utility to set LINES and COLUMNS within the shell. – Toby Speight Mar 25 '21 at 07:47
  • 3
    @TobySpeight, most likely it was resize (shipped with xterm) – Stéphane Chazelas Mar 25 '21 at 17:45
3

One example where I have seen eval being used is in Environment Modules.

There, a bash function is created as a wrapper around the "real" program:

module() { eval `/usr/bin/modulecmd bash $*`; }

Then, when I want to load a module, say gcc/7.2.0, I type

module load gcc/7.2.0

and /usr/bin/modulecmd reads the environment and returns a new environment with extended paths:

CPP_INCLUDE_PATH=/nfs/modules/gcc/7.2.0/include:... ;export CPP_INCLUDE_PATH;
C_INCLUDE_PATH=/nfs/modules/gcc/7.2.0/include:... ;export C_INCLUDE_PATH;
LD_LIBRARY_PATH=/nfs/modules/gcc/7.2.0/lib:... ;export LD_LIBRARY_PATH;
LIBRARY_PATH=/nfs/modules/gcc/7.2.0/lib:... ;export LIBRARY_PATH;
...

These are then evaled by the module function so that my environment changes.

pyenv also uses eval in a similar way: a program produces code for modifying the environment, and this is evaled by a bash function so that the environment changes.

md2perpe
  • 151
  • 2
    That eval `/usr/bin/modulecmd bash $*` doesn't make any sense whatsoever. $* is the positional parameters joined (with variations among shells) and subject to split+glob. The `...` (using the ancient form of cmdsubst) is also subject to split+glob. Maybe you meant eval "$(/usr/bin/modulecmd bash "$@")" – Stéphane Chazelas Mar 25 '21 at 17:48
  • 1
    @StéphaneChazelas. The eval lines looks exactly like that. If I run module load gcc/7.2.0 then /usr/bin/modulecmd bash load gcc/7.2.0 will be called and its output will be evaled. – md2perpe Mar 25 '21 at 18:16
  • 1
    @StéphaneChazelas. The ... just represents that I shortened the full paths, which are of no interest for readers here. – md2perpe Mar 25 '21 at 18:18
  • 1
    I meant `...` is the ancient form of command substitution. $(...) is the modern one, but the main issue here is that it's not quoted, so subject to split+glob. Using $* and $@ unquoted also never makes sense. – Stéphane Chazelas Mar 25 '21 at 18:45
  • 2
    @StéphaneChazelas. I agree that $(...) and "$@" is better, but I didn't write the code. You can find it here. On the other hand, I don't think there is any command of module where arguments might contain spaces. – md2perpe Mar 25 '21 at 18:55
  • 1
    split+glob is about characters in $IFS, and globbing characters, not necessarily spaces (though the space character happens to be in the default value of $IFS). – Stéphane Chazelas Mar 25 '21 at 19:18
  • 1
    @StéphaneChazelas: This was changed to "$@" a few years ago; the code pointed to here is from the compatibility version of Environment Modules. `...` instead of $(...) does not change a thing in the situation where it is used, as far as I know. What is important to know is that the syntax used should work with any sh shell on any Unix OS. – Xavier Delaruelle Mar 26 '21 at 06:35
1

If you were to make a command that takes shell code from the user, like e.g. watch, then evaling such code seems acceptable.
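A toy illustration of that contract (repeat_n is invented here; watch itself uses sh -c rather than eval, as noted in the comments below). The caller explicitly passes shell code, so running it is the documented behavior rather than an injection bug:

```shell
# Run the user-supplied shell code n times, like a bounded watch(1).
repeat_n() {
    n=$1; shift
    i=0
    while [ "$i" -lt "$n" ]; do
        eval "$*"        # the argument *is* code, by contract
        i=$((i + 1))
    done
}

repeat_n 3 'echo hi'     # prints "hi" three times
```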

JoL
  • 4,735
  • 1
    watch (at least some implementations) concatenates its arguments with spaces and asks sh to interpret it. eval does the same but with the current shell (which may or may not understand sh syntax). – Stéphane Chazelas Mar 25 '21 at 17:51
  • 1
    @StéphaneChazelas Yes, that's kind of what I meant by acceptable. I know that watch in particular doesn't use eval, but it's a relatively minor technical difference if it did. If it did use eval instead of sh -c, we wouldn't suddenly worry about shell code injection because if that was a concern it would also apply to sh -c. Both are the same that way, so I'm treating them the same for this answer, the point of which is that shell code injection is not a concern when you're documented to take code as opposed to other data that's then not validated to not be code. – JoL Mar 25 '21 at 22:45
1

I end up with eval in a lot of my install scripts and build scripts when I'm doing checks against the file system. This covers things like bad umasks, mkdir implementations that don't conform to POSIX, or the odd race condition. I end up making trees and checking for these conditions as I go along. Here's a chunk from one of my scripts:

if test -n "$prefixes"; then
    # Don't fail if two instances are running concurrently.
    (umask $mkdir_umask &&
     eval "\$doit_exec \$mkdirprog $prefixes") ||
      test -d "$dstdir" || exit 1
    mkdir_used=true
fi

There are likely better ways to handle this, but people who are better at scripts than I am haven't scolded me for it yet.

b degnan
  • 111