I suspect the why has a lot to do with the vision/design that shaped Unix (and consequently Linux), and the advantages stemming from it.
No doubt there's a non-negligible performance benefit to not spinning up an extra process, but I think there's more to it: Early Unix had an "everything is a file" metaphor, which has a non-obvious but elegant advantage if you look at it from a system perspective, rather than a shell scripting perspective.
Say you have your `null` command-line program, and `/dev/null` the device node. From a shell-scripting perspective, the `foo | null` pipeline is actually genuinely useful and convenient, and `foo >/dev/null` takes a tiny bit longer to type and can seem weird.
But here are two exercises:

1. Let's implement the program `null` using existing Unix tools and `/dev/null` - easy: `cat >/dev/null`. Done.

2. Can you implement `/dev/null` in terms of `null`?
You're absolutely right that the C code to just discard input is trivial, so it might not yet be obvious why it's useful to have a virtual file available for the task.
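To make that concrete, here is roughly what such a stand-alone `null` program might look like in C (a hypothetical sketch of my own, not code from any historical Unix):

```c
/* Hypothetical sketch of a stand-alone "null" program:
 * read stdin until EOF and discard everything. */
#include <unistd.h>

int main(void)
{
    char buf[4096];

    /* Loop until read() reports EOF (0) or an error (-1). */
    while (read(STDIN_FILENO, buf, sizeof buf) > 0)
        ;
    return 0;
}
```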
Consider: almost every programming language already needs to work with files, file descriptors, and file paths, because they were part of Unix's "everything is a file" paradigm from the beginning.
If all you have are programs that write to stdout, well, the program doesn't care if you redirect them into a virtual file that swallows all writes, or a pipe into a program that swallows all writes.
Now if you have programs that take file paths for either reading or writing data (which most programs do) - and you want to add "blank input" or "discard this output" functionality to those programs - well, with `/dev/null` that comes for free.
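For illustration, consider a hypothetical little program that writes its report to whatever path the caller supplies (my own example, not from the original post). It needs no special "discard output" option, because passing `/dev/null` does the job through the exact same code path as a regular file:

```c
/* Hypothetical sketch: a program that writes to a caller-supplied
 * path. Passing /dev/null discards the output with no special code. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s OUTFILE\n", argv[0]);
        return 1;
    }

    FILE *out = fopen(argv[1], "w");
    if (out == NULL) {
        perror(argv[1]);
        return 1;
    }

    fprintf(out, "some expensively computed report\n");
    fclose(out);
    return 0;
}
```

Run it as `./report results.txt` to keep the output, or `./report /dev/null` to throw it away - the program itself never knows the difference.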
Notice the elegance of it: it reduces the code complexity of all involved programs - for each common-but-special use case that your system can provide as a "file" with an actual "filename", your code can avoid adding custom command-line options and custom code paths.
Good software engineering often depends on finding good or "natural" metaphors for abstracting some element of a problem in a way that becomes easier to think about but remains flexible, so that you can solve basically the same range of higher-level problems without having to spend the time and mental energy on reimplementing solutions to the same lower-level problems constantly.
"Everything is a file" seems to be one such metaphor for accessing resources: You call open
of a given path in a heirarchical namespace, getting a reference (file descriptor) to the object, and you can read
and write
, etc on the file descriptors. Your stdin/stdout/stderr are also file descriptors that just happened to be pre-opened for you. Your pipes are just files and file descriptors, and file redirection lets you glue all these pieces together.
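To illustrate that uniformity (a sketch of my own): the very same system calls work on `/dev/null` as on any regular file - reads report immediate EOF, and writes succeed while the data vanishes:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/null", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[16];
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read returned %zd\n", n);   /* prints 0: immediate EOF */

    n = write(fd, "discarded\n", 10);
    printf("write returned %zd\n", n);  /* prints 10: data silently dropped */

    close(fd);
    return 0;
}
```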
Unix succeeded as much as it did in part because of how well these abstractions worked together, and `/dev/null` is best understood as part of that whole.
P.S. It's worth looking at the Unix version of "everything is a file" and things like `/dev/null` as the first steps towards a more flexible and powerful generalization of the metaphor that has been implemented in many systems that followed.

For example, in Unix special file-like objects like `/dev/null` had to be implemented in the kernel itself, but it turns out that exposing functionality in file/folder form is useful enough that multiple systems since then have provided a way for ordinary programs to do exactly that.
One of the first was the Plan 9 operating system, made by some of the same people who made Unix. Later, GNU Hurd did something similar with its "translators". Meanwhile, Linux ended up getting FUSE (which has spread to the other mainstream systems by now as well).
`cat foo | bar` is much worse (at scale) than `bar <foo`. `cat` is a trivial program, but even a trivial program creates costs (some of them specific to FIFO semantics -- because programs can't `seek()` inside FIFOs, for example, a program that could be implemented efficiently with seeking can end up doing much more expensive operations when given a pipeline; with a character device like `/dev/null` it can fake those operations, or with a real file it can implement them, but a FIFO doesn't allow any kind of contextually-aware handling). – Charles Duffy Apr 16 '18 at 19:46
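(A small sketch of the `seek()` point above, my own illustration: `lseek()` fails with ESPIPE on a pipe, but succeeds on a character device like `/dev/null`:)

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }

    /* Seeking in a FIFO always fails with ESPIPE. */
    if (lseek(fds[0], 0, SEEK_SET) == -1)
        printf("pipe:      lseek failed: %s\n", strerror(errno));

    /* The character device can fake the operation and succeed. */
    int fd = open("/dev/null", O_RDONLY);
    if (lseek(fd, 0, SEEK_SET) == -1)
        printf("/dev/null: lseek failed: %s\n", strerror(errno));
    else
        printf("/dev/null: lseek succeeded\n");

    close(fd);
    return 0;
}
```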
`grep blablubb file.txt 2>/dev/null && dosomething` could not work with null being a program or a function. – rexkogitans Apr 16 '18 at 20:39
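(To illustrate why: `2>/dev/null` works because the shell can open the file and duplicate its descriptor onto fd 2 before running the command - roughly the following, sketched in C from my understanding; a pipe into a `null` program offers no such hook for an arbitrary file descriptor:)

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Roughly what the shell does for `2>/dev/null`: */
    int fd = open("/dev/null", O_WRONLY);
    dup2(fd, STDERR_FILENO);   /* fd 2 now discards everything */
    close(fd);

    execlp("grep", "grep", "blablubb", "file.txt", (char *)NULL);
    return 127;                /* reached only if exec fails */
}
```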
The `read` function for `/dev/null` consists of a "return 0" (meaning it doesn't do anything and, I suppose, results in an EOF), from https://github.com/torvalds/linux/blob/master/drivers/char/mem.c: `static ssize_t read_null(struct file *file, char __user *buf, size_t count, loff_t *ppos) { return 0; }` (Oh, I just see that @JdeBP made that point already. Anyway, here is the illustration :-). – Peter - Reinstate Monica Apr 17 '18 at 16:39
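(For completeness - based on my reading of the same mem.c, so verify against the current tree - the matching write handler is just as small: it claims to have consumed every byte without storing any of them:)

```c
static ssize_t write_null(struct file *file, const char __user *buf,
                          size_t count, loff_t *ppos)
{
    return count;
}
```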
… `/dev/null` (as it does not store any data, but simply disregards it). Also, the increased resource usage of one extra file descriptor is infinitesimally small compared to the resource usage of even a small program using it, so that can't be the problem either. So, please explain what you think is the problem with many programs accessing `/dev/null`? – Matija Nalis Apr 19 '18 at 09:35
The `null` constant interferes with the file namespace. Sometimes when a Windows user gives me e.g. a USB dongle for some data, I will add a file called `nul` on it as well - deleting that is not an easy task: https://stackoverflow.com/questions/17883481/delete-a-file-named-nul-on-windows – j_kubik Apr 22 '18 at 21:40