cat file_1.txt

100 200 name
100 200
100 200
150 300
150 300
150 250
150 250
150 300 name


The final file should be:

150 300
150 300
150 250
150 250

I am using this command:

cat file_1.txt | grep -v "name" > file_2.txt; cat file_2.txt | while read line; do cat file_1.txt | grep "$line" | head -1 | grep -v "name"; done

but there is too much data for the while loop; it takes too long. Is there a faster way to do this, for example by passing a whole file of patterns at once, the way "grep -vf file_1 file_2" does? Please let me know.

The logic I am using is: every line of file_1.txt (with the "name" lines removed) is treated as a pattern, and for each pattern I take the topmost matching line of file_1.txt and keep it only if that first match does not contain "name". The intermediate file is shown below for reference.
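
For reference, using only the sample data above, this is what the intermediate file_2.txt produced by the first grep contains:

$ cat file_1.txt | grep -v "name" > file_2.txt
$ cat file_2.txt
100 200
100 200
150 300
150 300
150 250
150 250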

prince
  • Could you please use code tags when posting code. Also, it would be helpful if you could rephrase your question; because in its current state it is very hard to understand what it is you want to achieve. –  Sep 26 '16 at 06:17

1 Answer

It appears to me that your code prints every line of file_1.txt unless either (a) the line contains name or (b) the first two columns of the line are the same as those of a previous line that contained name. In that case, try:

$ awk '/name/{bad[$1,$2];next} !(($1,$2) in bad)' file_1.txt 
150 300
150 300
150 250
150 250
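
If the goal is to write these lines to a new file, as the original command does, the same awk output can simply be redirected (file names taken from the question):

$ awk '/name/{bad[$1,$2];next} !(($1,$2) in bad)' file_1.txt > file_2.txt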

How it works

  • /name/{bad[$1,$2];next}

    If the current line contains name, then we add an entry to the associative array bad under the key of the first two columns. We then skip the rest of the commands and jump to start over on the next line.

  • !(($1,$2) in bad)

    If the first two columns of the current line, $1,$2, are not among the keys of bad, then the line is printed.
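
A small standalone sketch of the key mechanism (not part of the answer's command; the values 150 and 300 are just taken from the sample data): bad[$1,$2] stores an entry under a single key built from both fields joined by awk's SUBSEP, and the parenthesized ($1,$2) in bad form tests for that combined key:

$ awk 'BEGIN { bad["150","300"]; if (("150","300") in bad) print "key present" }'   # demo only
key present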

John1024