Suppose you have a binary that has to be run on maaaaaaaaany files (say the files are numbered from 1 to N). Each file has to be processed by a call to this binary (say, something like md5sum), and each run saves its result to a separate file. So, if we have 1000 files and only 4 CPUs, we don't want to do something like this (if we can avoid it, that is):
i=1; while [ $i -le 1000 ]; do md5sum a_file_$i > result_$i & i=$(( i + 1 )); done
Because (even though bash won't complain) we would end up launching 1000 processes at once, which would slow the machine to a crawl.
Is there a command I could use to run these jobs n processes at a time (i.e., it starts n processes, watches for one of them to finish, and then starts another, so the number of running processes stays at n)?
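Just to make the intent concrete, here is roughly the behavior I'm after, hand-rolled in bash (this sketch assumes bash 4.3+ for `wait -n`, and n=4 is only an example value); I'd much rather use an existing command than maintain something like this myself:

    n=4    # number of jobs to keep running at once (example value)
    i=1
    while [ $i -le 1000 ]; do
        md5sum a_file_$i > result_$i &
        # once n jobs are running, block until any one of them finishes
        while [ "$(jobs -rp | wc -l)" -ge "$n" ]; do
            wait -n
        done
        i=$(( i + 1 ))
    done
    wait    # let the last few jobs finish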