How many jobs does make use by default when you don't pass it the -j flag?

2 Answers
It seems to me that it is definitely 1, if by default you mean without the -j
switch. Evidently (from the man page):
If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.
I've always specified a number and so have not noticed this. But with no switch, by empirical observation, it's one.
"Unlimited" would be a questionable value to use as a plain default, I think.

- You are right, unlimited was a brainfart as it's clearly not that. And yes, I meant without -j specified. – jsj Feb 26 '13 at 16:46
- Unlimited would totally make sense… if only the system were good enough to schedule tasks taking memory consumption into account (unfortunately, Unixes do not require processes to provide memory consumption estimates or other resource-consumption-related guarantees). – Stéphane Gimenez Feb 26 '13 at 16:52
- I seem to remember the "unlimited" is really "number of CPUs", but I might be totally off track. – vonbrand Feb 26 '13 at 16:59
- @StéphaneGimenez: I can't recall hearing of an OS that does what you are talking about, but I'd be glad to be enlightened. "Requiring processes to provide memory consumption estimates" sounds like a profoundly bad idea to me, as it is simply an impossible metric to produce in a very high percentage of cases. Half your processes are going to have to report BS or random numbers, making whatever bases its operation on that prone to some kind of grotesque dysfunction... or else I'm wrong, lol. – goldilocks Feb 26 '13 at 17:10
- None that I know of. But many processes could compute in a couple of cycles how much memory they need according to the size of their input(s). If they were able to tell the system (simplified example: they might malloc() everything they need and claim explicitly that they now drop their privilege to malloc()), the system could then use better scheduling. – Stéphane Gimenez Feb 26 '13 at 17:24
- @StéphaneGimenez Doing that via the OS would mean the system would just have to refuse to start processes based on their estimate (it certainly can't delay that arbitrarily; no userland system can work that way). So it would be a sort of predictive OOM killer; that might be desirable, but it won't lead to higher performance, it would be prone to less, since it would be permitting fewer processes, not more. It would also encourage userland abuse to manipulate the kernel's choices. However, a pure userspace batch manager (like make) which did this might have some merits. Some probably do. – goldilocks Feb 26 '13 at 17:45
- Note also that in general, the size of the input cannot be known or even crudely estimated before a process starts. – goldilocks Feb 26 '13 at 17:47
The default is 1.
Source:
Normally, make will execute only one recipe at a time, waiting for it to finish before executing the next.
from section 5.4, Parallel Execution, of the GNU make manual: https://www.gnu.org/software/make/manual/html_node/Parallel.html. You can also read it from the command line by typing info make parallel.
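As a side note on the comments above about the number of CPUs: a common convention (not part of the question itself) is to match the job count to the number of available processors. A rough sketch, assuming GNU coreutils' nproc and GNU make's -l/--load-average option:

    make -j"$(nproc)"                  # one job slot per available CPU
    make -j"$(nproc)" -l"$(nproc)"     # additionally avoid starting new jobs while the load average is at or above that value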

- It's not clear that this adds much to what the accepted answer says. – Scott - Слава Україні Jan 18 '20 at 06:40
- I wanted to provide the official source of the default job number, which I didn't find in the accepted answer. – Stefan Jan 18 '20 at 23:22
- When I want to run the optimal amount, I use my tool makeMax: https://gitlab.com/es20490446e/makeMax – Alberto Salvia Novella May 24 '21 at 06:39