Let's compare all proposed solutions!
I have a text file test.txt
of size ~230 MB. I'm on a Mac Mini running OS X 10.10.
1) awk
solution by Hauke Laging (better avoid this one...):
$ time bash -c "awk '/a/ && /b/ && /c/' test.txt >> /dev/null"
19.51 real 19.23 user 0.20 sys
2) "bruteforced" grep
by Raghuraman R and Hauke Laging (faster, but still not great...):
$ time bash -c "grep -e 'a.*b.*c' -e 'a.*c.*b' -e 'b.*a.*c' -e 'b.*c.*a' -e 'c.*a.*b' -e 'c.*b.*a' test.txt >> /dev/null"
10.02 real 9.93 user 0.07 sys
3) chained grep
by muru (ok!):
$ time bash -c "grep a test.txt | grep b | grep c >> /dev/null"
1.61 real 3.08 user 0.29 sys
4) perl
solution by terdon (even better!):
$ time bash -c "perl -ne 'print if /a/ && /b/ && /c/' test.txt >> /dev/null"
0.83 real 0.75 user 0.07 sys
So, chained grep is fine, but the Perl one-liner gives the best performance here.
I could not test the sed
approach, because the program provided by Costas does not work as-is in the macOS terminal.
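For the record, here is a portable sed sketch of my own (not Costas's original program) that should run on BSD sed as shipped with macOS; it simply deletes any line that is missing one of the three letters:

```shell
# Hypothetical portable sed variant (my sketch, not the one from the answer):
# /a/!d  = delete the line unless it matches "a", and so on for b and c.
printf 'abc\nab\ncab\nxyz\n' > sample.txt
sed '/a/!d;/b/!d;/c/!d' sample.txt
# prints:
# abc
# cab
```

Replace sample.txt with test.txt to benchmark it against the other solutions.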
BTW, I'm no expert at benchmarking, so apologies if I did something wrong.
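One thing worth doing if you repeat these numbers (a sketch, assuming the same test.txt as above): warm the page cache first so the first command measured isn't penalized for reading the file from disk, then take the best of a few runs.

```shell
# Warm the cache, then time the chained-grep pipeline three times
# and eyeball the fastest "real" value. bash's `time` keyword times
# the whole pipeline, not just the first command.
cat test.txt > /dev/null
for i in 1 2 3; do
  time bash -c "grep a test.txt | grep b | grep c > /dev/null"
done
```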