44
echo -e 'one two three\nfour five six\nseven eight nine'
one two three
four five six
seven eight nine

how can I do some "MAGIC" to get this output?:

three
six
nine

UPDATE: I don't need it in this specific way; I need a general solution so that, no matter how many columns a row has, awk always displays the last column.

LanceBaynes
Lance, please research your questions before asking. Searching Google for the subject line of your posts shows the answer in the snippets. Searching "awk last column" gives several great answers starting with result 1. Also, this 5-minute awk primer is worth reading all the way through so you know what's possible in the future. – Caleb Jul 20 '11 at 18:11
  • @Caleb The link is not working. – Idonknow May 15 '23 at 16:55

9 Answers

86

Try:

echo -e 'one two three\nfour five six\nseven eight nine' | awk '{print $NF}'
kenorb
Sean C.
  • I updated the Q – LanceBaynes Jul 20 '11 at 17:28
  • please note that awk is limited to 99 fields... :/ This just bit me in recent days ( ps -ef | awk '{ print $NF }' had some lines truncated...). Perl doesn't have that limitation. ( http://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Limitations-of-Usual-Tools.html : "Traditional Awk has a limit of 99 fields in a record. Since some Awk implementations, like Tru64's, split the input even if you don't refer to any field in the script, to circumvent this problem, set ‘FS’ to an unusual character and use split." ) – Olivier Dulac Jan 29 '14 at 08:48
  • @OlivierDulac what awk implementations have that limitation? I've never seen it. My mawk will choke on 32768 but my gawk and igawk can deal with millions happily. Even my busybox's awk can deal with millions. I've never come across an awk that can't deal with 100 fields, that's a tiny number, after all. Are you sure that information is still relevant? Even on Solaris? – terdon Oct 16 '15 at 09:15
  • @terdon, see the links in my comment ^^ (and believe me, some "legacy" systems can survive a looooong time in some environments. On some, tar happily extracts to "/", bash doesn't have some of the useful builtins (nor $BASH_SOURCE, for example), awk chokes on NF>99, etc ... :( ) – Olivier Dulac Oct 16 '15 at 13:10
  • @OlivierDulac fair enough. I just haven't come across it. I hope it's vanishingly rare today since 99 is a tiny number. – terdon Oct 16 '15 at 22:28
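If you ever do hit the 99-field limit of a traditional awk discussed above, the autoconf manual's suggested workaround (set FS so awk never splits the record itself, then split() it manually) can be sketched like this, assuming a POSIX awk:

```shell
# Sketch of the autoconf-manual workaround: set FS to a character that
# never occurs in the data (here a newline), so awk treats each record
# as a single field, then split() on whitespace manually. With a third
# argument of " ", split() uses awk's default whitespace splitting.
echo 'one two three' | awk 'BEGIN { FS = "\n" } { n = split($0, a, " "); print a[n] }'
```

On a modern awk this is unnecessary, but it behaves identically, printing `three` here.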
25

It's easier than you think.

$ echo one two three | awk '{print $NF}'
three
bahamat
17

Try grep (shorter/simpler, but 3x slower than awk because of regex usage):

grep -o '\S\+$' <(echo -e '... seven eight nine')

Or ex (even slower, but it prints the whole buffer when finished, which is more useful when the result needs to be sorted or edited in-place):

ex -s +'%s/^.*\s//g' -c'%p|q!' <(echo -e '... seven eight nine')
ex +'%norm $Bd0' -sc'%p|q!' infile

To change in-place, replace -sc'%p|q!' with -scwq.

Or bash:

while read line; do arr=($line); echo ${arr[-1]}; done < someinput

Performance

Given a 1 GB file generated via:

$ hexdump -C /dev/urandom | rev | head -c1G | pv > datafile

I measured the parsing times (each command run ~3 times, taking the lowest; tested on a MacBook Pro running OS X):

  • using awk:

    $ time awk '{print $NF}' datafile > /dev/null
    real    0m12.124s
    user    0m10.704s
    sys 0m0.709s
    
  • using grep:

    $ time grep -o '\S\+$' datafile > /dev/null
    real    0m36.731s
    user    0m36.244s
    sys 0m0.401s
    
    $ time grep -o '\S*$' datafile > /dev/null
    real    0m40.865s
    user    0m39.756s
    sys 0m0.415s
    
  • using perl:

    $ time perl -lane 'print $F[-1]' datafile > /dev/null
    real    0m48.292s
    user    0m47.601s
    sys 0m0.396s
    
  • using rev + cut:

    $ time (rev|cut -d' ' -f1|rev) < datafile > /dev/null
    $ time rev datafile | cut -d' ' -f1 | rev > /dev/null
    real    1m10.342s
    user    1m19.940s
    sys 0m1.263s
    
  • using ex:

    $ time ex +'%norm $Bd0_' -sc'%p|q!' datafile > /dev/null
    real    3m47.332s
    user    3m42.037s
    sys 0m2.617s
    $ time ex +'%norm $Bd0' -sc'%p|q!' datafile > /dev/null
    real    4m1.527s
    user    3m44.219s
    sys 0m6.164s
    $ time ex +'%s/^.*\s//g' -sc'%p|q!' datafile > /dev/null
    real    4m16.717s
    user    4m5.334s
    sys 0m5.076s
    
  • using bash:

    $ time while read line; do arr=($line); echo ${arr[-1]}; done < datafile > /dev/null
    real    9m42.807s
    user    8m12.553s
    sys 1m1.955s
    
kenorb
8

It can even be done with bash alone, without 'sed', 'awk' or 'perl':

echo -e 'one two three\nfour five six\nseven eight nine' |
  while IFS=" " read -r -a line; do
    nb=${#line[@]}
    echo ${line[$((nb - 1))]}
  done
jfg956
  • Hmm, or also, assuming your input is actually space-separated: ... | while read -r line; do echo ${line##* }; done – glenn jackman Jul 20 '11 at 19:21
  • @glenn: This was my first idea, but when I read the 'read' manual, I saw this array function which I found useful. It can also be easily modified to give any field indexed from the right. – jfg956 Jul 21 '11 at 07:54
    bash array indexes are subject to arithmetic evaluation, so echo ${line[nb - 1]} is enough. And speaking of bash, you can just skip the "nb" business: echo ${line[-1]}. A more portable alternative to the latter: echo ${line[@]: -1}. (See Stephane Chazelas' comment on negative indexes elsewhere.) – manatwork Mar 18 '13 at 12:04
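The variants from this comment, as a quick sketch (negative indices like `${line[-1]}` need bash 4.3+; the `${line[@]: -1}` slice also works in older bash — note the space before `-1`, which keeps it from being parsed as a `${var:-default}` expansion):

```shell
line=(seven eight nine)
nb=${#line[@]}
echo "${line[nb - 1]}"    # arithmetic evaluation inside the index
echo "${line[-1]}"        # negative index, bash 4.3+
echo "${line[@]: -1}"     # substring expansion, works in older bash too
```

All three print `nine`.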
7
... | perl -lane 'print $F[-1]'
glenn jackman
  • Key points: -a autosplits fields into the @F array; -l chomps $/ (the input record separator) from input and sets $\ (the output record separator). Since no octal number is given with -l, the original $/ is applied by print (restoring line endings); -n loops over the input; -e executes the code that immediately follows. See man perlrun. – Jonathan Komar Sep 24 '18 at 07:51
5

It can also be done using 'sed':

echo -e 'one two three\nfour five six\nseven eight nine' | sed -e 's/^.* \([^ ]*\)$/\1/'

Update:

or more simply:

echo -e 'one two three\nfour five six\nseven eight nine' | sed -e 's/^.* //'
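The shorter form works because `.*` is greedy: it consumes everything up to and including the last space, so only the final field survives; a line containing no space is left untouched. For example:

```shell
# Greedy .* eats through the last space; a space-free line passes through as-is.
printf '%s\n' 'one two three' 'single' | sed -e 's/^.* //'
```

which prints `three` and then `single`.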
jfg956
5

Or using cut:

echo -e 'one two three\nfour five six\nseven eight nine' | cut -f 3 -d' '

although this does not satisfy the 'general solution' requirement. Using rev twice we can solve this as well:

echo -e 'one two three\nfour five six\nseven eight nine' | rev | cut -f 1 -d' ' | rev
Tim
  • I do not think 'rev' is found on every Unix (AIX, Solaris, ...) or installed on every Linux, but a nice alternative solution. – jfg956 Jul 21 '11 at 08:34
    +1 for double rev, but as a side note, rev does not work with 'wide' characters, only single byte ones, as far as I know. – Marcin Jul 19 '12 at 17:47
2

In Perl this can be done as follows:

#!/usr/bin/perl

#create a line of arbitrary data
$line = "1 2 3 4 5";

# split the line into an array (we call the array 'array', for lolz)
@array = split(' ', $line);

# print the last element in the array, followed by a newline character;
print "$array[-1]\n";

output:

$ perl last.pl
5
$

You could also loop through a file. Here's an example script I wrote to parse a file called budget.dat.

example data in budget.dat:

Rent              500
Food              250
Car               300
Tax               100
Car Tax           120
Mag Subscription  15

(you can see I needed to capture only the "last" column, not just column 2)

The script:

#!/usr/bin/perl
$budgetfile = "budget.dat";
open($bf, '<', $budgetfile)
        or die "Could not open file: $budgetfile: $!";


print "-" x 50, "\n";
while ( $row = <$bf> ) {
        chomp $row;
        @r = split (' ', $row);
        print "$row ";
        $subtotal += $r[-1];
        print "\t$subtotal\n";
}
print "-" x 50, "\n";
print "\t\t\t Total:\t$subtotal\n\n";
  • I realised someone else already commented the same, sorry about that, at least I have a few examples too, hopefully it adds to the discussion anyway. – urbansumo Apr 29 '15 at 16:01
2

Using awk you can first check that the record has at least one field:

echo | awk '{if (NF >= 1) print $NF}'

echo 1 2 3 | awk '{if (NF >= 1) print $NF}'
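The difference shows up on blank lines: without the guard, `print $NF` emits an empty line for an empty record, while `NF >= 1` skips it entirely. A quick check:

```shell
# The blank middle line has NF == 0, so the guard skips it.
printf 'a b\n\nc d e\n' | awk '{ if (NF >= 1) print $NF }'
```

This prints `b` and `e`, with no empty line in between.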
gezu