
I have several different algorithms that I need to prototype.

So I make prototype programs and a script. The script, called time.sh, looks like this:

echo "Timing algorithm1:"
time algo1 >/dev/null

echo "Timing algorithm2:"
time algo2 >/dev/null

echo "Timing algorithm3:"
time algo3 >/dev/null

...

Now, for simplicity's sake, substitute ls for algo1 ... (I don't want to post code for each algorithm and force people to compile it ...)

echo " Timing ls"
time ls 2>&1 > /dev/null

call it time_ls.sh

then I do

sh time_ls.sh > ls_sh.meas

No matter what I do, whatever redirections I place in the script or on the command line, I get one of two results: either the output of echo, i.e. "Timing ls", appears in the terminal and the timing data in ls_sh.meas, or the opposite.

It's like the stdout and stderr don't want to get together and make one baby data file.

Can anyone explain this weird behaviour, and suggest a work around?

PS: This is done in bash.

3 Answers


In Korn-like shells, time is the keyword that introduces the:

time <pipeline>

construct. What time times is a pipe line, that is commands connected together with a pipe.

time for i in 1 2; do
       echo "$i"
     done 2>&1 | {
       read a
       read b
       ls /etc /x
     } 2> /dev/null
time ls 2>&1

Are examples of pipelines timed by time.

The time statistics are reported on the shell's stderr after the processes in the pipeline have returned, but in each case the 2> ... redirections are part of the pipeline being timed; they're not redirecting time's statistics output.
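You can see this with a quick sketch (using ls as a stand-in, as in the question; /nonexistent is just a hypothetical path chosen to make ls fail):

```shell
#!/bin/bash
# The 2>/dev/null applies to ls (the pipeline being timed),
# not to time's report, which is written to the shell's stderr:
time ls /nonexistent 2>/dev/null
# ls's error message is discarded, but the real/user/sys lines
# still appear on the terminal.
```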

You need stderr to have been redirected before the time ... construct is evaluated. For instance with:

{ time <pipeline>; } 2> ...
eval 'time <pipeline>' 2> ...
exec 2> ...; time <pipeline>
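As a concrete illustration of the first form (a minimal sketch; timing.txt is just a hypothetical output file):

```shell
#!/bin/bash
# The redirection is applied to the { ...; } group, so stderr is
# already redirected by the time the time keyword is evaluated:
{ time ls >/dev/null; } 2> timing.txt
# timing.txt now holds the real/user/sys lines
```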

To redirect only the time output and not the errors of the commands in the <pipeline>, you can save the previous stderr on a different fd and restore it within the pipeline being timed. For instance:

{
  time {
    <pipeline>
  } 2>&3 3>&-
} 3>&2 2> ...
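Filled in with concrete commands (a sketch: ls /etc /x produces both normal output and an error, and time-only.txt is a hypothetical file that ends up holding just the statistics):

```shell
#!/bin/bash
# fd 3 saves the original stderr; inside the braces the timed
# commands send their own errors back to it via 2>&3, so the
# outer 2> captures only time's report.
{
  time {
    ls /etc /x
  } 2>&3 3>&-
} 3>&2 2> time-only.txt
```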

To pipe that output to another command:

{
  {
    time {
      <pipeline>
    } 2>&3 3>&-
  } 3>&2 2>&1 >&4 4>&- | another command 4>&-
} 4>&-

Where we also need to restore stdout after having saved on fd 4.

To pass all of stdout, stderr and the time statistics to a command, it's enough to do:

{ time <pipeline>; } 2>&1 | a command
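For example, to filter the report down to the wall-clock line (a sketch; grep real stands in for whatever consumer you have in mind):

```shell
#!/bin/bash
# stderr (including time's report) is merged into stdout
# before the pipe, so grep sees the statistics:
{ time ls >/dev/null; } 2>&1 | grep real
```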

I think you want the output of time and the output of each prototype included in the same log file:

#!/bin/bash
# Name as "algon1"
(
    echo "Timing algorithm1:"
    time algo1 >/dev/null

    echo "Timing algorithm2:"
    time algo2 >/dev/null

    echo "Timing algorithm3:"
    time algo3 >/dev/null
) > algon1.log 2>&1

Then make the file executable and run it:

chmod a+x algon1
./algon1

Or if you don't want the output file name hardcoded into the script, but instead just have the output written to stdout:

#!/bin/bash
# Name as "algon2"
(
    echo "Timing algorithm1:"
    time algo1 >/dev/null

    echo "Timing algorithm2:"
    time algo2 >/dev/null

    echo "Timing algorithm3:"
    time algo3 >/dev/null
) 2>&1

And

chmod a+x algon2
./algon2 | tee algon2.log

In both scripts you can remove the >/dev/null from each time line if you want the algorithms' output interspersed with the timings.

Chris Davies

Define a redirect at the beginning of a script:

#!/bin/bash
exec 2>&1

echo "Timing ls 1"
time ls &>/dev/null

echo "Timing ls 2"
time ls /jabberwocks &>/dev/null

./time_ls.sh > ls_sh.meas
nezabudka