I have a file with a lot of lines like this
0 file:/home/knappen/somefilename.txt 7 0.2838356973995272 19 0.21823286052009455 18 0.10121158392434988 15 0.07816193853427897 11 0.07284278959810875 6 0.056885342789598115 8 0.03738179669030733 22 0.032062647754137114 23 0.01610520094562648 12 0.01610520094562648 16 0.010786052009456266 0 0.010786052009456266 13 0.009013002364066195 5 0.009013002364066195 10 0.007239952718676124 9 0.007239952718676124 14 0.005466903073286052 4 0.005466903073286052 21 0.003693853427895981 20 0.003693853427895981 17 0.003693853427895981 3 0.003693853427895981 2 0.003693853427895981 1 0.003693853427895981
and I want to select all rows where the entry in the third column equals some given number.
I know how to write a grep -E pattern for this purpose, or a small Python or Perl script to the same effect, but I wonder whether there is an elegant solution using GNU coreutils.
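To make the intent concrete, here is roughly what I have in mind (the value 7 and the filename data.txt are just placeholders):

```sh
# grep -E: skip the first two space-separated fields, then require the value
# as the third field, followed by a space or the end of the line
grep -E '^[^ ]+ [^ ]+ 7( |$)' data.txt

# a small Perl one-liner with the same effect: -a autosplits each line into @F
perl -ane 'print if $F[2] == 7' data.txt
```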
P.S. I found some good suggestions in the question Selecting rows in a CSV file based on column value, but the tools used there go beyond GNU coreutils. Those answers are good enough to work for me, but for the sake of learning more about the power of the shell utilities I am asking this question anyhow.