
I have a shell script which uses find -print0 to save a list of files to be processed into a temporary file. As part of the logging I'd like to output the number of files found, so I need a way to get that count. If the -print0 option weren't being used (it's there for safety with arbitrary file names), I could simply use wc -l to get the count.
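For concreteness, a minimal sketch of the kind of script being described ($tmpfile and the find arguments are placeholders; the counting step is the piece this question asks about, filled in here with one of the approaches from the answers):

tmpfile=$(mktemp)
find . -type f -print0 > "$tmpfile"          # save the NUL-delimited list
count=$(tr -cd '\0' < "$tmpfile" | wc -c)    # count entries: one NUL per file
printf 'Found %s files\n' "$count"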

qqx

3 Answers


Some options:

tr -cd '\0' | wc -c               # count the NUL delimiters themselves

tr '\n\0' '\0\n' | wc -l          # generic approach for processing NUL-terminated
                                  # records with line-based utilities (ones that
                                  # support NUL characters in their lines, like GNU's)

grep -cz '^'                      # GNU grep

sed -nz '$='                      # recent GNU sed; no output for empty input

awk -v RS='\0' 'END{print NR}'    # not all awk implementations

Note that for an input that contains data after the last NUL character (or non-empty input with no NUL characters), the tr approaches will always count the number of NUL characters, but the awk/sed/grep approaches will count an extra record for those extra bytes.
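For example, this input ends with data after the last NUL, so the two families of approaches disagree (GNU grep assumed):

printf 'a\0b' | tr -cd '\0' | wc -c    # 1: only one NUL in the input
printf 'a\0b' | grep -cz '^'           # 2: the trailing "b" counts as an extra record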

  • I measured these on 5 GB of random data (head -c 5G /dev/urandom > f). Results: grep 1.7s (same for grep -Fcz '') • tr+wc-c 7.7s • tr+wc-l 7.4s • sed 34.7s • awk 1m11.7s – Socowi Apr 23 '20 at 13:24
  • @Socowi, YMMV with the implementation and locale. With GNU awk, you'll want to set the locale to C (or any that doesn't use multibyte characters), LC_ALL=C awk ... < f – Stéphane Chazelas Apr 23 '20 at 15:45
  • Thanks for the hint. I already used LC_ALL=C on sort, where it didn't speed things up, so I hadn't tried it here. Luckily I still have the file from before: LC_ALL=C awk ... takes 6.7s. – Socowi Apr 23 '20 at 15:50
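A sketch of the benchmark setup described in the comments above, for anyone who wants to reproduce it (GNU tools assumed; generating the 5 GB file takes a while):

head -c 5G /dev/urandom > f                          # 5 GB of random data
time grep -cz '^' < f                                # fastest in the measurements above
time LC_ALL=C awk -v RS='\0' 'END{print NR}' < f     # C locale avoids multibyte handling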

The best method I've been able to think of is using grep -zc '.*'. This works, but it feels wrong to use grep with a pattern which will match anything.
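For example, applied to the scenario in the question (the paths here are placeholders; -z requires GNU grep):

find . -type f -print0 | grep -zc '.*'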

qqx

With perl:

perl -0ne 'END {print $.}'    # -0 makes NUL the input record separator; $. is the record count

or:

perl -nle 'print scalar split "\0"'    # split in scalar context returns the number of fields

or:

perl -nle 'print scalar(() = unpack "(Z*)*", $_)'    # "() =" forces list context; a bare
                                                     # scalar unpack would return only the
                                                     # first string, not the count
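A quick sanity check of the first variant on three NUL-terminated records:

printf 'a\0b\0c\0' | perl -0ne 'END {print $.}'    # prints 3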
cuonglm