You don't need to do anything special: it's the kernel's job to decide which thread goes on which CPU, and it does a far better job than a human could.
However, there is no point in having 24 CPUs if you don't have at least 24 concurrent threads to run. Programs won't magically go faster just because more CPUs are available: only programs coded to run multiple parallel threads will benefit, and many programs won't benefit, not because they are badly written, but because what they do is inherently not parallelizable.
A program with N concurrent computation threads will benefit from up to N CPUs (though it might not go N times faster, because synchronization between the threads takes time). Running M different programs that interact little, if at all, similarly takes advantage of M CPUs (or more, if the programs are themselves multi-threaded).
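As a concrete sketch of the several-independent-programs case, a shell can launch jobs in the background so the kernel is free to place each one on its own CPU; `wait` then blocks until they all finish. The counting loops below are a stand-in workload, not anything specific from this answer:

```shell
#!/bin/sh
# Two independent jobs started in the background; with two or more CPUs
# the kernel can run them simultaneously. The counting loops are only a
# placeholder for real work.
sh -c 'i=0; while [ "$i" -lt 100000 ]; do i=$((i+1)); done; echo "job 1 done"' &
sh -c 'i=0; while [ "$i" -lt 100000 ]; do i=$((i+1)); done; echo "job 2 done"' &
wait            # block until every background job has exited
echo "all jobs finished"
```

Since the two jobs run concurrently, the order of the first two output lines is not deterministic; only "all jobs finished" is guaranteed to come last.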
There are a few cases where manual intervention is necessary to take advantage of parallelism. If you're starting multiple data-processing tasks, take care that they're spawned in parallel (with slightly more than one task per CPU) rather than one after the other. For example, when building software, pass the -j option to make. A few other examples and explanations:
If you're running a web server, all web servers designed for heavy loads are good at exploiting parallelism. Apache is used as a test case when evaluating the performance of optimizations in the Linux kernel. Beware however that parallelism in the CPU only helps if there is no other bottleneck, such as contention due to database access or input-output bandwidth.
To run multiple tasks in parallel, use xargs -P, GNU parallel, or just run them in the background. – Marco Sep 26 '13 at 21:44