
So I started using zsh. I like it well enough so far. It seems very cool and slick, and having the current working directory and the actual command line on different lines is nice. At the same time, I'm noticing that zsh can be a bit slower than bash, especially when printing text to the screen.

The thing I liked best was the fact that zsh was 'backward compatible' with all of the functions I defined in my .bashrc.

One gripe, though. The functions all work perfectly, but I can't figure out how the exporting system works.

I had exported some of those .bashrc functions with export -f so that I could use them elsewhere, such as in scripts and external programs.

In zsh, exporting functions doesn't even seem to be mentioned. Is autoloading the equivalent? Are those two things the same? I'm having a seriously hard time figuring that out.

ixtmixilix
  • This is a very old question, but I want to say that "the current working directory and actual command line are on different lines" has nothing at all to do with zsh. It depends on how you set up your prompt, that's all. – 4ae1e1 May 02 '14 at 17:21

3 Answers


Environment variables containing functions are a bash hack, and zsh doesn't have anything similar, but you can get close with a few lines of code. Environment variables contain strings; older versions of bash, before Shellshock was discovered, stored a function's code in a variable whose name is that of the function and whose value is () { followed by the function's code followed by }. You can use the following code to import variables with this encoding and attempt to run them with bash-like settings. Note that zsh cannot emulate all bash features; all you can do is get a bit closer (e.g. make $foo split the value and expand wildcards, and make arrays 0-based).

# Preamble prepended to each imported function: run it with ksh-like
# settings, the closest zsh comes to bash semantics.
bash_function_preamble='
    emulate -LR ksh
'
# Walk all shell parameters, looking for exported variables whose value
# has the shape of a pre-Shellshock bash exported function: () { ... }
for name in ${(k)parameters}; do
  [[ "-$parameters[name]-" = *-export-* ]] || continue   # exported variables only
  [[ ${(P)name} = '() {'*'}' ]] || continue              # value must look like a function definition
  ((! $+builtins[$name])) || continue                    # don't shadow zsh builtins
  # Strip the '() {' prefix and '}' suffix and define a zsh function of the same name.
  functions[$name]=$bash_function_preamble${${${(P)name}#"() {"}%"}"}
done

(As Stéphane Chazelas, the original discoverer of Shellshock, noted, an earlier version of this answer could execute arbitrary code at this point if the function definition was malformed. This version doesn't, but of course as soon as you execute any command, it could be a function imported from the environment.)

Post-Shellshock versions of bash encode functions in the environment using invalid variable names (e.g. BASH_FUNC_myfunc%%). This makes them harder to parse reliably as zsh doesn't provide an interface to extract such variable names from the environment.
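You can see the post-Shellshock encoding from bash itself (the exact formatting of the value may vary slightly between bash versions):

```shell
# Export a function from bash and inspect how it lands in the environment.
bash -c 'myfunc() { echo hello; }; export -f myfunc; env' | grep BASH_FUNC
# The variable is named BASH_FUNC_myfunc%% — not a valid shell identifier,
# which is why zsh offers no way to read it back.
```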

I don't recommend doing this. Relying on exported functions in scripts is a bad idea: it creates an invisible dependency in your script. If you ever run your script in an environment that doesn't have your function (on another machine, in a cron job, after changing your shell initialization files, …), your script won't work anymore. Instead, store all your functions in one or more separate files (something like ~/lib/shell/foo.sh) and start your scripts by importing the functions they use (. ~/lib/shell/foo.sh). This way, if you modify foo.sh, you can easily find which scripts rely on it, and if you copy a script, you can easily find out which auxiliary files it needs.
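As a concrete sketch of that layout (the path and the greet function are just illustrative):

```shell
# Create the function library (normally you would just edit the file once).
mkdir -p "$HOME/lib/shell"
cat > "$HOME/lib/shell/foo.sh" <<'EOF'
greet() {
  printf 'Hello, %s\n' "$1"
}
EOF

# Any script that needs the function declares the dependency explicitly:
. "$HOME/lib/shell/foo.sh"
greet world    # prints: Hello, world
```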

Zsh (and ksh before it) makes this more convenient by providing a way to automatically load functions in scripts where they are used. The constraint is that you can only put one function per file. Declare the function as autoloaded, and put the function definition in a file whose name is the name of the function. Put this file in a directory listed in $fpath (which you may configure through the FPATH environment variable). In your script, declare autoloaded functions with autoload -U foo.
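A minimal sketch of that arrangement (the directory name and the hello function are illustrative):

```shell
#!/usr/bin/env zsh
# Assume ~/lib/zsh-functions/hello exists and contains only the function
# body, e.g. the single line:    print "Hello, $1"
fpath=(~/lib/zsh-functions $fpath)  # make the directory searchable
autoload -U hello                   # mark `hello` for lazy loading
hello world                         # first call loads the file, then runs it
```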

Furthermore, zsh can compile scripts to save parsing time. Call zcompile to compile a script; this creates a file with the .zwc extension. If this file is present, autoload will load the compiled file instead of the source code. You can use the zrecompile function to (re)compile all the function definitions in a directory.
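For example (assuming a function file as above; zrecompile ships with zsh but must itself be autoloaded first):

```shell
#!/usr/bin/env zsh
zcompile ~/lib/zsh-functions/hello   # writes ~/lib/zsh-functions/hello.zwc

# Later, refresh any .zwc files whose sources have changed:
autoload -U zrecompile
zrecompile
```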

  • Funny how your code has the same shellshock vulnerability bash had (doesn't verify that the content of the variable is only a function definition and processes any variable name like HTTP_HOST or LC_X). Good answer otherwise. – Stéphane Chazelas Jun 03 '17 at 21:30
  • @StéphaneChazelas If you're going to run commands with functions imported from the environment, you've pretty much lost. But I've updated the import code to not execute arbitrary code. It isn't very useful though, since post-shellshock bash doesn't encode its exported functions in the same way. – Gilles 'SO- stop being evil' Jun 03 '17 at 22:03
  • You've now fixed the equivalent of CVE-2014-6271, but are still probably exposed to many vulnerabilities of the type of CVE-2014-6277/6278... as you're still exposing the zsh parser to code in any variable including some that are potentially under control of attackers in some contexts (as the code in zsh -c 'functions[f]=$VAR' is parsed even if the f function is never called). The solution is to consider only variables whose name follow a reserved template like those $BASH_FUNC_x%%, but as you say, zsh has no API to list or retrieve those. You'd need to call perl for instance. – Stéphane Chazelas Jun 04 '17 at 09:38
  • https://superuser.com/questions/1515680/export-f-equivalent-in-zsh/1515756#1515756 – 8c6b5df0d16ade6c Aug 11 '22 at 21:05

If you put your function declaration in .zshenv, your function will be usable from a script without any effort.
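For example (a hypothetical greet function; unlike .zshrc, .zshenv is read by non-interactive shells too):

```shell
# ~/.zshenv — read by every zsh invocation, including scripts
greet() {
  print "Hello, $1"
}
```

Any #!/usr/bin/env zsh script can then call greet directly, at the cost of every zsh invocation parsing this file on startup.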

rools
  • Why did you downvote my answer? Please explain. – rools Dec 01 '17 at 10:24
  • Still waiting for an answer and still working! – rools Dec 20 '17 at 23:16
  • I just discovered this answer and it is an ideal solution. – AFH Jun 04 '18 at 14:23
  • I didn't downvote it. And TBH the OP was asking about exporting stuff from .bashrc, which is kind of a bad idea, better to put it in a script so you don't end up with a huge environment. But your solution is just a variant of that same bad idea, put all your scripts in .zshenv and it slows down every invocation of zsh by parsing a lot of code that never gets used. Further it is not the same as exporting a function, just like an exported variable, an exported function is only available to child processes. Whereas the stuff you put in .zshenv is available to every zsh. – Metamorphic Aug 29 '18 at 05:27
  • Finally if you rely on personal code you put in your .zshenv, then all your scripts will be totally non-portable. Normally scripts could depend on each other, which is fine, you distribute them together. But if they depend on having special functions in .zshenv, no one will want to use them, or they would have to be invoked with a special ZDOTDIR, preventing your own .zshenv from being executed. It would be a pain. – Metamorphic Aug 29 '18 at 05:34

Everywhere I've read about this suggests that automatically importing functions is a bad idea, so I decided to just extract my shared functions into a separate file that can easily be sourced wherever they're needed.

I called the shared file .zshared.

# ~/.zshared
function say-hello() {
  echo "Hello, $1!"
}

I sourced it in my .zshrc.

# ~/.zshrc
source ~/.zshared

In any scripts that need access to those functions, I just source .zshared.

#!/usr/bin/env zsh
source ~/.zshared

say-hello "David" # Hello, David!

It requires a little work initially to get your scripts updated, but you still get to define your functions in one place without much overhead.