
I was wondering what the fastest way to run a script is. I've been reading that there is a difference in speed between showing the output of the script on the terminal, redirecting it to a file, or sending it to /dev/null.

So if the output is not important, what is the fastest way to make the script run faster, even if the gain is minimal?

bash ./myscript.sh 
-or-
bash ./myscript.sh > myfile.log
-or-
bash ./myscript.sh > /dev/null
Kingofkech

3 Answers


Terminals these days are slower than they used to be, mainly because graphics cards no longer care about 2D acceleration. So indeed, printing to a terminal can slow down a script, particularly when scrolling is involved.

Consequently ./script.sh is slower than ./script.sh >script.log, which in turn is slower than ./script.sh >/dev/null, because the latter two involve less work. However, whether this makes enough of a difference for any practical purpose depends on how much output your script produces, and how fast. If your script writes 3 lines and exits, or if it prints 3 pages every few hours, you probably don't need to bother with redirections.

Edit: Some quick (and completely broken) benchmarks:

  • In a Linux console, 240x75:

    $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)
    real    3m52.053s
    user    0m0.617s
    sys     3m51.442s
    
  • In an xterm, 260x78:

    $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)
    real    0m1.367s
    user    0m0.507s
    sys     0m0.104s
    
  • Redirect to a file, on a Samsung SSD 850 PRO 512GB disk:

     $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >file)
     real    0m0.532s
     user    0m0.464s
     sys     0m0.068s
    
  • Redirect to /dev/null:

     $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >/dev/null)
     real    0m0.448s
     user    0m0.432s
     sys     0m0.016s
    
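The four cases above can be reproduced with one small script. This is a sketch, not part of the original benchmark: the loop size and the /tmp path are arbitrary placeholders, and it uses date(1) for crude wall-clock timing so that it stays portable across shells.

```shell
#!/bin/sh
# Reproduce the benchmark: the same output stream sent to two
# destinations. Loop size and file path are placeholders.
n=10000

spew() {
    i=1
    while [ "$i" -le "$n" ]; do
        echo "$i 01234567890123456789012345678901234567890123456789"
        i=$((i + 1))
    done
}

# Crude wall-clock timing with date(1), to stay portable.
start=$(date +%s)
spew > /tmp/bench.out
echo "to a file: $(($(date +%s) - start))s"

start=$(date +%s)
spew > /dev/null
echo "to /dev/null: $(($(date +%s) - start))s"

rm -f /tmp/bench.out
```

Under bash you can use the `time` keyword instead, as in the runs above, for sub-second resolution.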
Satō Katsura
  • my script is printing 3 million lines over 5 to 7 hours, so is it better to run it with > /dev/null? – Kingofkech Oct 12 '17 at 10:28
  • @Kingofkech that's less than 200 lines/second. It wouldn't make much of a difference. (For comparison, timeout 1 yes "This is a looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong line" prints 100000+ lines in a single second on my MBP.) – muru Oct 12 '17 at 10:38
  • @Kingofkech If this is a script, then edit the file and comment out the line which prints unnecessary output. You will gain a lot, especially if this is an external command (not a shell built-in) executed 3 million times... – jimmij Oct 12 '17 at 10:52
  • For a narrow interpretation of "used to be". A vt220 terminal is way slower than today's terminal emulators. Also, SUN Sparc workstations (the ones I used) had a really slow console, so just redirecting the output to a file when compiling a larger project would speed the compile time up immensely. – Kusalananda Oct 12 '17 at 10:55
  • @Kusalananda you said that "just redirecting the output to a file when compiling a larger project would speed the compile time up immensely". How much is "immensely"? 10%, 20% faster, maybe? – Kingofkech Oct 12 '17 at 11:00
  • @Kusalananda That's true, xterms on HP Apollo from 30 years ago used to crawl compared to xterms on HP-UX from 20 years ago. However, a Linux console with a Matrox video card from 15 years ago was slower than the same Linux console with an S3 card from 20 years ago. And a Linux console with high-resolution frame buffer on a modern card is simply unusable. :) – Satō Katsura Oct 12 '17 at 11:02
  • @Kingofkech I don't recall any longer, only that it was noticeable. I made no measurements. On the Sparc, you could almost see the screen redraw, so I think the speedup was quite large. This was in the 90s though. With the vt220 (which I sadly got rid of five years ago), the speedup would be more than 50%, easily and possibly >100% for large amounts of output. The serial speed to the terminal was really limited. – Kusalananda Oct 12 '17 at 11:03
  • I'm not used to old computers, but a quick search on YouTube turned up a video showing how the screen drew the lines one by one. – Kingofkech Oct 12 '17 at 11:05
  • @Kingofkech That's about 2400 bps. Some of us actually lived for years with those speeds. :) – Satō Katsura Oct 12 '17 at 11:08
  • @Kingofkech Even at the maximum 115200 baud, it would be noticeably slower than what we're used to today. (did not watch the video) – Kusalananda Oct 12 '17 at 11:23
  • @Kusalananda A notable difference is that serial terminals had to redraw the entire line when they had something to print, while modern terminals can cheat and redraw only the dirty pixels. – Satō Katsura Oct 12 '17 at 12:35
  • I wonder how telling it would be to take the 3million lines @Kingofkech says are output, and time simply echoing them to a terminal. Could you then subtract that from the 5-7 hours to get a rough estimate of the savings? hrm... would printfing them be different? How about just cating the piped output? SIGH, so many experiments, so little time... – A C Oct 12 '17 at 20:18
  • I've been trying the command given by @muru and it turns out quite well. The command is timeout 1 yes "This is a looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong line", and the results are: in just 1 sec, using an SSD, I got over 1GB worth of lines when redirecting the output with > file.txt, but when printing to the screen I got more than 10000 lines for sure – Kingofkech Oct 13 '17 at 08:17
  • @Kingofkech and I imagine anything that outputs so much less than 100k lines/s won't be affected much. – muru Oct 13 '17 at 08:40
  • @muru exactly :p Processors are so much faster now; they can do a lot more, a lot faster. – Kingofkech Oct 13 '17 at 09:01

I would have instinctively agreed with Satō Katsura's answer; it makes sense. However, it's easy enough to test.

I tested writing a million lines to the screen, writing (appending) to a file, and redirecting to /dev/null. I tested each of these in turn, then did five replicates. These are the commands I used.

$ time (for i in {1..1000000}; do echo foo; done)
$ time (for i in {1..1000000}; do echo foo; done > /tmp/file.log) 
$ time (for i in {1..1000000}; do echo foo; done > /dev/null)

I then plotted the total times below.

plot of time vs. output

As you can see, Satō Katsura's presumptions were correct. As per Satō Katsura's answer, I also doubt that the limiting factor will be the output, so it's unlikely that the choice of output will have a substantial effect on the overall speed of the script.


FWIW, my original answer had different code, which had the file appending and /dev/null redirect inside the loop.

$ rm /tmp/file.log; touch /tmp/file.log; time (for i in {1..1000000}; do echo foo >> /tmp/file.log; done) 
$ time (for i in {1..1000000}; do echo foo > /dev/null; done)

As John Kugelman pointed out in the comments, this adds a lot of overhead. As the question stands, this is not really the right way to test it, but I'll leave it here as it clearly shows the cost of repeatedly re-opening a file from within the script itself.

plot of time vs. output

In this case, the results are reversed.
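If you do need to write from inside the loop, you can avoid the per-iteration reopen cost by opening the file once on a dedicated file descriptor with exec and writing to that descriptor. This is a sketch (the descriptor number 3, the loop size, and the /tmp path are arbitrary choices):

```shell
#!/bin/sh
# Open the log file once on file descriptor 3, write to it in the
# loop, then close it -- the file is never reopened per iteration.
log=/tmp/file.log
: > "$log"        # start from an empty file
exec 3>> "$log"   # open once, in append mode

i=1
while [ "$i" -le 1000 ]; do
    echo foo >&3
    i=$((i + 1))
done

exec 3>&-         # close descriptor 3
```

This keeps the redirection visible at each echo while paying the open/close cost only once, much like moving the redirection outside the loop.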

Sparhawk
  • FWIW I added a quick benchmark to my answer. Notably, Linux console is >200 times slower than an xterm, which in turn is ~3 times slower than /dev/null. – Satō Katsura Oct 12 '17 at 12:26
  • You should also test with some rate limiting. OP's output is about 200 lines/s. – muru Oct 12 '17 at 14:20
  • @muru Do you mean print a line, wait 1/200 seconds, then repeat? I can try, but I figure it'll be similar results, but just take much longer for the signal to overcome the noise. Although maybe I can subtract the waiting time before analysis. – Sparhawk Oct 13 '17 at 02:57
  • @Sparhawk something like that. I think that at that level of output, the CPU will have plenty of time to update the display without slowing down output rate. When the program is doing nothing but spewing lines without pause, the terminal buffers will get filled up faster than display can be updated and creates a bottleneck. – muru Oct 13 '17 at 03:00

Another way to speed up a script is to use a faster shell interpreter. Compare the speeds of a POSIX busy loop, run under bash v4.4, ksh v93u+20120801, and dash v0.5.8.

  1. bash:

    time echo 'n=0;while [ $n -lt 1000000 ] ; do \
                      echo $((n*n*n*n*n*n*n)) ; n=$((n+1)); 
                   done' | bash -s > /dev/null
    

    Output:

    real    0m25.146s
    user    0m24.814s
    sys     0m0.272s
    
  2. ksh:

    time echo 'n=0;while [ $n -lt 1000000 ] ; do \
                      echo $((n*n*n*n*n*n*n)) ; n=$((n+1)); 
                   done' | ksh -s > /dev/null
    

    Output:

    real    0m11.767s
    user    0m11.615s
    sys 0m0.010s
    
  3. dash:

    time echo 'n=0;while [ $n -lt 1000000 ] ; do \
                      echo $((n*n*n*n*n*n*n)) ; n=$((n+1)); 
                   done' | dash -s > /dev/null
    

    Output:

    real    0m4.886s
    user    0m4.690s
    sys 0m0.184s
    

The commands dash supports are a subset of those available in bash and ksh. A bash script that uses only commands from that subset should work unchanged with dash.
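To illustrate the kind of subset involved (my own example, not from the benchmark above): common bashisms such as `[[ … ]]` pattern tests and C-style `for ((…))` loops have POSIX equivalents that dash accepts.

```shell
#!/bin/sh
# POSIX equivalents of two common bashisms, runnable under dash.

s="hello world"

# bash: if [[ $s == hello* ]]; then ...
case $s in
    hello*) echo "prefix matches" ;;
esac

# bash: for ((i = 0; i < 3; i++)); do ...
i=0
while [ "$i" -lt 3 ]; do
    echo "iteration $i"
    i=$((i + 1))
done
```

Tools like checkbashisms (from Debian's devscripts) can help spot the constructs that would need this kind of translation.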

Some bash scripts that use newer features can be converted to another interpreter, but if a script relies heavily on those features, it may not be worth the bother. Some new bash features are improvements that are both easier to code and more efficient (despite bash being generally slower), so the dash equivalent, which might involve running several extra commands, could end up slower.

When in doubt, run a test...
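A minimal harness for such a test might look like this. It is only a sketch: the interpreter list is an assumption (adjust it to what you have installed), and it uses date(1) for coarse, portable timing; with bash you could use the time keyword instead for sub-second resolution.

```shell
#!/bin/sh
# Time the same POSIX busy loop under whichever of these shells
# happen to be installed, skipping any that are missing.
script='n=0; while [ $n -lt 100000 ]; do echo $((n*n)); n=$((n+1)); done'

for interp in bash ksh dash; do
    if command -v "$interp" > /dev/null 2>&1; then
        start=$(date +%s)
        "$interp" -c "$script" > /dev/null
        echo "$interp: $(($(date +%s) - start))s"
    fi
done
```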

agc
  • so in order to speed up my bash scripts, I need to rewrite them in dash or ksh? – Kingofkech Oct 18 '17 at 09:10
  • @Kingofkech It may be that the code you wrote for bash is already correct ksh or dash code. Try just changing the interpreter. – bli Oct 18 '17 at 10:50