If you want to count every word across any number of files, you could use AWK, e.g.:
awk 'BEGIN{RS="[[:space:]]+"}
{counts[$0]++}
END{for(word in counts){print word " - " counts[word]}}
' file1 file2 file...
Setting the record separator to runs of whitespace (the BEGIN{RS="[[:space:]]+"} part) makes awk treat the input as if every word were on its own line; the script then counts each occurrence. Note that a regular expression as RS is an extension (GNU awk, mawk, BusyBox awk); strict POSIX awk only honors a single-character RS.
Removing the BEGIN
portion would count each normal line.
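A quick sanity check of the one-liner above (the file name words.txt is just an example; requires an awk with regex RS support, as noted):

```shell
# hypothetical sample input
printf 'the quick fox\nthe fox\n' > words.txt

# count every word in the file; output order is unspecified
awk 'BEGIN{RS="[[:space:]]+"}
{counts[$0]++}
END{for(word in counts){print word " - " counts[word]}}
' words.txt
```

This should report "the - 2", "fox - 2" and "quick - 1" in some order; pipe through sort if you want a stable ordering.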
If you're only interested in 1 specific word, you could change the END
block to look something like:
END{print counts["esr"]}
which would print only the number of times "esr" shows up, but remember that this is case-sensitive. To remove case-sensitivity, use counts[tolower($0)]++ or counts[toupper($0)]++ instead.
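Putting the two changes together (the file name sample.txt is illustrative; the +0 is a small extra so a word that never appears prints 0 rather than an empty string):

```shell
# hypothetical sample input
printf 'ESR esr Esr fetchmail\n' > sample.txt

# case-insensitive count of one specific word
awk 'BEGIN{RS="[[:space:]]+"}
{counts[tolower($0)]++}
END{print counts["esr"]+0}
' sample.txt
# prints 3
```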
Checks can also be added to print per-file data when the input moves from one file to the next.
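One way to sketch that per-file check (file names f1 and f2 are examples): FNR resets to 1 at the start of every file, so FNR==1 with NR>1 marks a file boundary. Emptying the array with a bare "delete counts" is an extension, but one supported by gawk, mawk, and BusyBox awk.

```shell
# hypothetical input files
printf 'alpha beta alpha\n' > f1
printf 'alpha\n' > f2

# flush and reset the counts whenever a new file begins
awk 'BEGIN{RS="[[:space:]]+"}
FNR==1 && NR>1 {for(w in counts) print prev ": " w " - " counts[w]; delete counts}
{counts[$0]++; prev=FILENAME}
END{for(w in counts) print prev ": " w " - " counts[w]}
' f1 f2
```

This should print "f1: alpha - 2" and "f1: beta - 1" (in some order), then "f2: alpha - 1".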
If each line already holds exactly one item, sort <file | uniq -c
would be enough. – αғsнιη Aug 28 '17 at 14:31
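Since uniq -c counts duplicate lines rather than words, the comment's approach needs one word per line first. A common idiom for that is squeezing whitespace into newlines with tr (the -s behavior with a character class as shown relies on GNU tr; the sort -rn at the end just orders by count):

```shell
printf 'foo bar foo\n' |
  tr -s '[:space:]' '\n' |   # one word per line
  sort | uniq -c |           # count duplicates
  sort -rn                   # most frequent first
```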