30

Many people use one-liners and scripts containing code along the lines of

cat "$MYFILE" | command1 | command2 > "$OUTPUT"

The first cat is often called a "useless use of cat" because technically it requires starting a new process (often /usr/bin/cat) that could be avoided if the command had instead been

< "$MYFILE" command1 | command2 > "$OUTPUT"

because then the shell only needs to start command1 and simply point its stdin to the given file.

Why doesn't the shell do this conversion automatically? I feel that the "useless use of cat" syntax is easier to read, and the shell should have enough information to get rid of the useless cat automatically. cat is defined in the POSIX standard, so the shell should be allowed to implement it internally instead of using a binary in the path. The shell could even implement only the exactly-one-argument version internally and fall back to the binary in the path otherwise.

  • 22
    Those commands are not actually equivalent, since in one case stdin is a file, and in the other it's a pipe, so it wouldn't be a strictly safe conversion. You could make a system that did it, though. – Michael Homer Apr 11 '19 at 07:32
  • 1
    See also: https://stackoverflow.com/a/16619430/334451 – Mikko Rantalainen Apr 11 '19 at 07:35
  • @MichaelHomer I understand the technical difference between file and a pipe but I fail to imagine a process that would be a target of a pipeline but not accept a file instead of a pipe. Can you provide an example where you think this difference would be important? – Mikko Rantalainen Apr 11 '19 at 07:39
  • 1
    @MikkoRantalainen Going in that direction would probably be ok, unless the command, for whatever reason, tests that its input comes from a pipe and behaves differently if it does not. – Kusalananda Apr 11 '19 at 07:40
  • 14
    That you can't imagine a use case doesn't mean that an application isn't allowed to rely on the specified behaviour uselessly. Getting an error from lseek is still defined behaviour and could cause a different outcome, the different blocking behaviour can be semantically meaningful, etc. It would be allowable to make the change if you knew what the other commands were and knew they didn't care, or if you just didn't care about compatibility at that level, but the benefit is pretty small. I do imagine the lack of benefit drives the situation more than the conformance cost. – Michael Homer Apr 11 '19 at 07:57
  • 3
    The shell absolutely is allowed to implement cat itself, though, or any other utility. It's also allowed to know how the other utilities that belong to the system work (e.g. it can know how the external grep implementation that came with the system behaves). This is completely viable to do, so it's entirely fair to wonder why they don't. – Michael Homer Apr 11 '19 at 08:04
  • 6
    @MichaelHomer e.g. it can know how the external grep implementation that came with the system behaves So the shell now has a dependency on the behavior of grep. And sed. And awk. And du. And how many hundreds if not thousands of other utilities? – Andrew Henle Apr 11 '19 at 11:01
  • 19
    It would be pretty uncool of my shell to edit my commands for me. – Azor Ahai -him- Apr 11 '19 at 16:35
  • 1
    Do you have an estimate for what the performance cost is? Is it enough to be worth worrying about? – Owen Apr 11 '19 at 16:40
  • 1
    Do you really want a shell that's smarter than you are? Because to re-write "useless use of cat" or similar expressions, it would have to understand what your intent is. – jamesqf Apr 11 '19 at 17:24
  • 1
    @AndrewHenle POSIX specifies system and environment-level conformance, so it is fully within scope that the shell with the system knows the tools with the system. Conforming applications are allowed to rely on the specified utility behaviour, and the shell too, and the system is allowed to contain consistent extensions. Whether introducing that level of interdependency in practice is a good idea or not, well... – Michael Homer Apr 11 '19 at 18:46
  • There are shells (e.g. ksh93) which do implement some external commands internally. I believe they check that the command found by searching $PATH is the system command (/bin/cat) before using the internal copycat command. – jrw32982 Apr 11 '19 at 19:44
  • 1
    If the shell starts silently changing your input to something else, you lose the predictability of your toolset and will no longer get reliable feedback to your input, because the changed input may give a different and unexpected output. – Mio Rin Apr 12 '19 at 08:45
  • 1
    I like the "useless use of cat" cat "$MYFILE" | command1 | command2 > "$OUTPUT" because it's defensive programming which makes explicit to even the most junior user exactly what's happening. – RonJohn Apr 12 '19 at 15:21
  • And I like it because it puts all commands except the initial "useless" cat in the same syntactic and semantic context by consistently using pipes. It's easier to read and think about, and makes it easier to add additional commands before the first "real" one. – ApproachingDarknessFish Apr 12 '19 at 20:46
  • 2
    @RonJohn, ...even at the cost of a huge performance penalty? sort "$MYFILE" can split into a bunch of threads each reading and sorting a different subset of a file into a different temporary file, and then merge them together at the end; cat "$MYFILE" | sort is forced to read front-to-back. And cat "$MYFILE" | tail on a 5GB file needs to read the whole 5GB front-to-back to get to the end, but GNU tail is smart enough to jump to the end and read from there in 8KB chunks if given a seekable file handle (as by tail "$MYFILE" or tail <"$MYFILE"). – Charles Duffy Apr 12 '19 at 22:18
  • 3
    @RonJohn, ...which is to say, I don't see what's "defensive" about passing the program you're running a FIFO rather than a direct handle on whatever you want it to read. I definitely can see an argument that <"$filename" sort is safer than sort "$filename", but using cat is precluding optimizations, giving your programs less information (can't look up the name of the input file when it's a completely separate program that has the real handle on it!), hiding information about failure cases, and otherwise making life worse for the program reading that content. – Charles Duffy Apr 12 '19 at 22:45
  • 3
    @ApproachingDarknessFish, if you want to make it easier to put other commands on front, put your redirections first. <file sort is just as valid as sort <file, and both let sort read direct from the real input file, and thus be able to seek/parallelize/etc. – Charles Duffy Apr 12 '19 at 22:46
  • @CharlesDuffy To interpret that what I wrote means and only ever means "I always cat files into programs" is... absurd in the extreme. sort "$MYFILE" | command1 | command2 is a heck of a lot more straightforward than <"$filename" sort. – RonJohn Apr 13 '19 at 00:31
  • 2
    @RonJohn, "more straightforward", maybe, but you need to worry about whether the filename starts with a dash and could thus be read as an argument list. You're the person arguing for defensive programming here -- <"$MYFILE" will never treat the contents of the MYFILE variable as anything but a filename. BTW, all-caps variable names are in the namespace POSIX specifies for names meaningful to the OS and standard utilities, whereas lowercase variable names are guaranteed to be safe for application use. – Charles Duffy Apr 13 '19 at 00:46
  • 1
    @RonJohn, ...see http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html, keeping in mind that environment and shell variables share a single namespace (setting a shell variable will overwrite any like-named environment variable). Quoting the tail end of the relevant paragraph: The name space of environment variable names containing lowercase letters is reserved for applications. Applications can define any environment variables with names from this name space without modifying the behavior of the standard utilities. – Charles Duffy Apr 13 '19 at 00:47
  • 1
    @RonJohn, ...moreover, you were making an argument that it's easier to train juniors when you have a consistent set of practices. If you need to know which tools do and don't handle from seekable handles, that's a bunch of extra knowledge someone needs to have when writing or reviewing code, vs passing seekable handles whenever feasible. – Charles Duffy Apr 13 '19 at 00:52
  • @CharlesDuffy "but using cat is precluding optimizations, giving your programs less information". cat "$MYFILE" | command1 | command2 > "$OUTPUT" is explicit about what's happening. Sure, it's less efficient, but efficiency isn't always the primary goal in programming. Lack of bugs and maintainability for years to come are quite often the higher priority. – RonJohn Apr 13 '19 at 00:52
  • 1
    @RonJohn, I agree that efficiency isn't the primary goal, but it's certainly a goal; some level of reasonable performance is essential for suitability-to-task. Look at the move away from init scripts at boot time towards consolidation in compiled-in functionality (a la systemd) -- a lot of why that's happening is because scripts are so frequently written using practices which are literally orders-of-magnitude slower than would be the case if some care were used. That's the case with tail -- piping from cat changes an O(1) algorithm into an O(n) one. – Charles Duffy Apr 13 '19 at 15:29
  • 1
    @RonJohn, ...and I don't see any basis for your claim that cat "$MYFILE" | command1 is less bug-prone than <"$MYFILE" command1. Please substantiate -- with your formulation, command1 can't tell if end-of-file was hit, or if there was an EIO or other read error; it loses its ability to do good error handling, and also loses the ability to look up the filename attached to the FD to include it in an error message. That's making your software less reliable and maintainable, not more. – Charles Duffy Apr 13 '19 at 15:36
  • 1
    I'd suggest a reword of the question to be "why isn't cat x | prog the same as < x prog" if this really is opinion based, though I'm not sold that it is – UKMonkey Apr 15 '19 at 09:26

11 Answers

53

"Useless use of cat" is more about how you write your code than about what actually runs when you execute the script. It's a sort of design anti-pattern, a way of going about something that could probably be done in a more efficient manner. It's a failure in understanding how to best combine the given tools to create a new tool. I'd argue that stringing several sed and/or awk commands together in a pipeline could sometimes also be said to be a symptom of this same anti-pattern.

Fixing instances of "useless use of cat" in a script is primarily a matter of fixing the source code of the script manually. A tool such as ShellCheck can help with this by pointing out the obvious cases:

$ cat script.sh
#!/bin/sh
cat file | cat
$ shellcheck script.sh

In script.sh line 2:
cat file | cat
    ^-- SC2002: Useless cat. Consider 'cmd < file | ..' or 'cmd file | ..' instead.

Getting the shell to do this automatically would be difficult due to the nature of shell scripts. The way a script executes depends on the environment inherited from its parent process, and on the specific implementation of the available external commands.

The shell does not necessarily know what cat is. It could potentially be any command from anywhere in your $PATH, or a function.

If it were a built-in command (which it may be in some shells), the shell would have the ability to reorganise the pipeline, as it would know the semantics of its built-in cat command. Before doing that, it would additionally have to make assumptions about the next command in the pipeline, after the original cat.

Note that reading from standard input behaves slightly differently when it's connected to a pipe than when it's connected to a file. A pipe is not seekable, so depending on what the next command in the pipeline does, it may or may not behave the same way if the pipeline is rearranged: it may detect whether its input is seekable and decide to do things differently based on that.
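The difference in what the downstream command sees on its stdin is easy to observe. A minimal sketch, assuming /dev/stdin is available (as on Linux and most BSDs); the probe function and temp-file names are illustrative only:

```shell
#!/bin/sh
# Probe whether this process's standard input is a pipe or a regular
# file, by stat-ing /dev/stdin (which refers to fd 0).
probe() {
    if [ -p /dev/stdin ]; then
        echo 'stdin is a pipe'
    else
        echo 'stdin is a regular file'
    fi
}

tmp=$(mktemp)
echo hello > "$tmp"

cat "$tmp" | probe    # the "useless cat" form: probe sees a pipe
probe < "$tmp"        # the redirection form: probe sees a regular file

rm -f "$tmp"
```

Any command that checks seekability (or calls isatty, select, etc.) on its stdin can therefore legitimately behave differently between the two forms.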

This question is similar (in a very general sense) to "Are there any compilers that attempt to fix syntax errors on their own?" (at the Software Engineering StackExchange site), although that question is obviously about syntax errors, not useless design patterns. The idea about automatically changing the code based on intent is largely the same though.

Kusalananda
  • 333,661
  • It's perfectly conformant for a shell to know what cat is, and the other commands in the pipeline (the as-if rule), and to behave accordingly; they just don't here because it's pointless and too hard. – Michael Homer Apr 11 '19 at 07:40
  • 4
    @MichaelHomer Yes. But it's also allowed to overload a standard command with a function of the same name. – Kusalananda Apr 11 '19 at 07:42
  • 2
    @PhilipCouling It’s absolutely conformant as long as it’s known that none of the pipeline commands care. The shell is specifically allowed to replace utilities with builtins or shell functions and those have no execution environment restrictions, so as long as the external result is indistinguishable it’s permitted. For your case, cat /dev/tty is the interesting one that would be different with <. – Michael Homer Apr 11 '19 at 09:28
  • 1
    @MichaelHomer so as long as the external result is indistinguishable it’s permitted That means the behavior of the entire set of utilities optimized in such a manner can never change. That has to be the ultimate dependency hell. – Andrew Henle Apr 11 '19 at 10:58
  • 3
    @MichaelHomer As the other comments said, of course it's perfectly conformant for the shell to know that given the OP's input it is impossible to tell what the cat command actually does without executing it. For all you (and the shell) know, the OP has a command cat in her path which is an interactive cat simulation, "myfile" is just the stored game state, and command1 and command2 are postprocessing some statistics about the current playing session... – alephzero Apr 11 '19 at 12:44
  • @alephzero Well, no, the shell is allowed to recognise utilities it finds (by checksum, say), as part of a conforming POSIX environment (ie the whole system), and then it’s entirely possible to determine what the overall pipeline does. In any case it’s even explicitly allowed to replace them with conforming builtins regardless of whether they are the same tool - upon finding an executable in the PATH, it is under no obligation to run it. You need to provide an absolute path for that. It is well into language lawyer territory at this point, though. – Michael Homer Apr 11 '19 at 18:32
  • @MichaelHomer can you clarify your /dev/tty example above? – jrw32982 Apr 11 '19 at 19:37
  • @jrw32982 cat /dev/tty | foo vs foo < /dev/tty affects isatty and e.g. interactive modes, and similarly for output. The now-deleted comment I was replying to was something about ls | cat or vice-versa, I think. – Michael Homer Apr 11 '19 at 20:13
  • @MichaelHomer thanks, I see what you're saying now. { [[ -t 0 ]] && echo tty; } </dev/tty vs. cat /dev/tty | { [[ -t 0 ]] || echo not tty; } (cat has to be terminated in one way or another in the second case). – jrw32982 Apr 12 '19 at 03:34
36

Because it's not useless.

In the case of cat file | cmd, the fd 0 (stdin) of cmd will be a pipe, and in the case of cmd <file it may be a regular file, device, etc.

A pipe has different semantics from a regular file, and its semantics are not a subset of those of a regular file:

  • a regular file cannot be select(2)ed or poll(2)ed on in a meaningful way; a select(2) on it will always return "ready". Advanced interfaces like epoll(2) on Linux will simply not work with regular files.

  • on Linux there are system calls (splice(2), vmsplice(2), tee(2)) which only work on pipes [1]

Since cat is used so much, it could be implemented as a shell built-in to avoid an extra process, but once you started down that path, the same thing could be done with most commands -- transforming the shell into a slower & clunkier perl or python. It's probably better to write another scripting language with an easy-to-use pipe-like syntax for continuations instead ;-)

[1] If you want a simple example not made up for the occasion, you can look at my "exec binary from stdin" git gist with some explanations in the comment here. Implementing cat inside it in order to make it work without UUoC would have made it 2 or 3 times bigger.

  • 2
    In fact, ksh93 does implement some external commands like cat internally. – jrw32982 Apr 11 '19 at 19:40
  • 4
    cat /dev/urandom | cpu_bound_program runs the read() system calls in a separate process. On Linux for example, the actual CPU work of generating more random numbers (when the pool is empty) is done in that system call, so using a separate process lets you take advantage of a separate CPU core to generate random data as input. e.g. in What's the fastest way to generate a 1 GB text file containing random digits? – Peter Cordes Apr 12 '19 at 01:00
  • 4
    More importantly for most cases, it means lseek won't work. cat foo.mp4 | mpv - will work, but you can't seek backward further than mpv's or mplayer's cache buffer. But with input redirected from a file, you can. cat | mpv - is one way to check if an MP4 has its moov atom at the start of the file, so it can be played without seeking to the end and back (i.e. if it's suitable for streaming). It's easy to imagine other cases where you want to test a program for non-seekable files by running it on /dev/stdin with cat vs. a redirect. – Peter Cordes Apr 12 '19 at 01:03
  • This is even more true when using xargs cat | somecmd. If file paths extend beyond the command buffer limit, xargs can run cat multiple times resulting in a continuous stream, while using xargs somecmd directly often fails because somecmd cannot be run in multiples to achieve a seamless result. – tasket Apr 13 '19 at 00:08
27

The two commands are not equivalent; consider error handling:

cat <file that doesn't exist> | less will produce an empty stream that will be passed to the piped program... as such you end up with a display showing nothing.

< <file that doesn't exist> less will fail to open the file, and then not run less at all.

Attempting to change the former to the latter could break any number of scripts that expect to run the program with a potentially blank input.
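A sketch of the difference (the file name is hypothetical; all that matters is that it doesn't exist):

```shell
#!/bin/sh
# With cat, the command on the right of the pipe always runs: the
# missing file only produces an error on cat's stderr and an empty
# stream on stdout, so wc happily reports 0 lines.
cat /nonexistent-file 2>/dev/null | wc -l

# With a redirection, the shell itself fails to open the file, and the
# command is never started at all; the failure shows up in the exit
# status instead of as an empty stream.
if ! wc -l 2>/dev/null < /nonexistent-file; then
    echo 'redirection failed; wc never ran'
fi
```

(Note the 2>/dev/null is placed before the input redirection in the second command: redirections are processed left to right, so this is what routes the shell's "No such file" message away.)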

UKMonkey
  • 386
  • 1
    I'll mark your response as accepted because I think this is the most important difference between both syntaxes. The variant with cat will always execute the second command in the pipeline whereas the variant with just input redirection will not execute the command at all if the input file is missing. – Mikko Rantalainen Apr 13 '19 at 08:43
  • However, note that <"missing-file" grep foo | echo 2 will not execute grep but will execute echo. – Mikko Rantalainen Apr 16 '19 at 16:49
17

Because detecting a useless cat is really, really hard.

I had a shell script where I wrote

cat | (somecommand <<!
...
/proc/self/fd/3
...
!) 0<&3

The shell script failed in production if the cat was removed, because it was invoked via su -c 'script.sh' someuser. The apparently superfluous cat caused the owner of standard input to change to the user the script was running as, so that reopening it via /proc worked.

jlliagre
  • 61,204
Joshua
  • 1,893
  • This case would be pretty easy because it clearly does not follow the simple model of cat followed by exactly one parameter, so the shell would use the real cat executable instead of the optimized shortcut. Good point on possibly different credentials or non-standard stdin for real processes, though. – Mikko Rantalainen Apr 13 '19 at 08:47
13

tl;dr: Shells don't do it automatically because the costs exceed the likely benefits.

Other answers have pointed out the technical difference between stdin being a pipe and it being a file. Keeping that in mind, the shell could do one of:

  1. Implement cat as a builtin, still preserving the file v. pipe distinction. This would save the cost of an exec and maybe, possibly, a fork.
  2. Perform a full analysis of the pipeline with knowledge of the various commands used to see if file/pipe matters, then act based on that.

Next you have to consider the costs and benefits of each approach. The benefits are simple enough:

  1. In either case, avoid an exec (of cat)
  2. In the second case, when redirect substitution is possible, avoidance of a fork.
  3. In cases where you have to use a pipe, it might be possible sometimes to avoid a fork/vfork, but often not. That's because the cat-equivalent needs to run at the same time as the rest of the pipeline.

So you save a little CPU time & memory, especially if you can avoid the fork. Of course, you only save this time & memory when the feature is actually used. And you're only really saving the fork/exec time; with larger files, the time is mostly the I/O time (i.e., cat reading a file from disk). So you have to ask: how often is cat used (uselessly) in shell scripts where the performance actually matters? Compare it to other common shell builtins like test — it's hard to imagine cat is used (uselessly) even a tenth as often as test is used in places that matter. That's a guess; I haven't measured, which is something you'd want to do before any attempt at implementation. (Or, similarly, before asking someone else to implement it in e.g. a feature request.)

Next you ask: what are the costs? The two costs that come to mind are (a) additional code in the shell, which increases its size (and thus possibly memory use), requires more maintenance work, is another spot for bugs, etc.; and (b) backwards-compatibility surprises: POSIX cat omits a lot of features of, e.g., GNU coreutils cat, so you'd have to be careful exactly what the cat builtin would implement.

  1. The additional builtin option probably isn't that bad — adding one more builtin where a bunch already exist. If you had profiling data showing it'd help, you could probably convince your favorite shell's authors to add it.

  2. As for analyzing the pipeline, I don't think shells do anything like this currently (a few recognize the end of a pipeline and can avoid a fork). Essentially you'd be adding a (primitive) optimizer to the shell; optimizers often turn out to be complicated code and the source of a lot of bugs. And those bugs can be surprising — slight changes in the shell script could wind up avoiding or triggering the bug.

Postscript: You can apply a similar analysis to your useless uses of cat. Benefits: easier to read (though if command1 will take a file as an argument, probably not). Costs: extra fork and exec (and if command1 can take a file as an argument, probably more confusing error messages). If your analysis tells you to uselessly use cat, then go ahead.

derobert
  • 109,670
11

The cat command can accept - as a marker for stdin. (POSIX, "If a file is '-', the cat utility shall read from the standard input at that point in the sequence.") This allows simple handling of a file or stdin where otherwise this would be disallowed.

Consider these two trivial alternatives, where the shell argument $1 is -:

cat "$1" | nl    # Works completely transparently
nl < "$1"        # Fails with 'bash: -: No such file or directory'
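If you do want to drop cat while keeping support for -, the script has to handle the marker itself. A rough sketch (the numbered helper is hypothetical, not the only way to do this):

```shell
#!/bin/sh
# Number lines from a file argument, treating "-" (or no argument) as
# standard input, without spawning cat.
numbered() {
    case $1 in
        -|'') nl ;;         # "-" or no argument: read standard input
        *)    nl < "$1" ;;  # otherwise redirect from the named file
    esac
}

printf 'a\nb\n' | numbered -   # works where plain  nl < -  would fail
```

This is the bookkeeping that cat "$1" | nl does for free, which is part of why that form survives in practice.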

Another time cat is useful is where it's intentionally used as a no-op simply to maintain shell syntax:

file="$1"
reader=cat
[[ $file =~ \.gz$ ]] && reader=zcat
[[ $file =~ \.bz2$ ]] && reader=bzcat
"$reader" "$file"

Finally, I believe the only time that UUOC can really be correctly called out is when cat is used with a filename that is known to be a regular file (i.e. not a device or named pipe), and that no flags are given to the command:

cat file.txt

In any other situation the properties of cat itself may be required.

Chris Davies
  • 116,213
6

The cat command can do things that the shell can't necessarily do (or at least, can't do easily). For example, suppose you want to print characters that might otherwise be invisible, such as tabs, carriage returns, or newlines. There *might* be a way to do so with only shell builtin commands, but I can't think of any off the top of my head. The GNU version of cat can do so with the -A argument or the -v -E -T arguments (I don't know about other versions of cat, though). You could also prefix each line with a line number using -n (again, I don't know whether non-GNU versions can do this).

Another advantage of cat is that it can easily read multiple files. To do so, one can simply type cat file1 file2 file3. To do the same with a shell, things would get tricky, although a carefully-crafted loop could most likely achieve the same result. That said, do you really want to take the time to write such a loop, when such a simple alternative exists? I don't!
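For the curious, a rough sketch of such a loop using only shell builtins; note that it is far slower than cat and, as written, adds a trailing newline to a final line that lacks one, which real cat would not do:

```shell
#!/bin/sh
# shcat: a slow builtins-only stand-in for "cat file1 file2 file3".
shcat() {
    for f in "$@"; do
        # read line by line; the || [ -n "$line" ] clause also emits a
        # final line that has no trailing newline (but adds one to it).
        while IFS= read -r line || [ -n "$line" ]; do
            printf '%s\n' "$line"
        done < "$f"
    done
}

printf 'one\n' > /tmp/f1.$$
printf 'two\n' > /tmp/f2.$$
shcat /tmp/f1.$$ /tmp/f2.$$    # behaves like: cat /tmp/f1.$$ /tmp/f2.$$
rm -f /tmp/f1.$$ /tmp/f2.$$
```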

Reading files with cat would probably use less CPU than the shell would, since cat is a pre-compiled program (the obvious exception is any shell that has a builtin cat). When reading a large group of files, this might become apparent, but I have never done so on my machines, so I can't be sure.

The cat command can also be useful for forcing a command to accept standard input in instances it might not. Consider the following:

echo 8 | sleep

The number "8" will not be accepted by the "sleep" command, since it was never really meant to accept standard input. Thus, sleep will disregard that input, complain about a lack of arguments, and exit. However, if one types:

echo 8 | sleep $(cat)

Many shells will expand this to sleep 8, and sleep will wait for 8 seconds before exiting. You can also do something similar with ssh:

command | ssh 1.2.3.4 'cat >> example-file'

This command will append whatever is output by "command" to example-file on the machine with the address 1.2.3.4.

And that's (probably) just scratching the surface. I'm sure I could find more examples of cat being useful if I wanted to, but this post is long enough as it is. So, I'll conclude by saying this: asking the shell to anticipate all of these scenarios (and several others) is not really feasible.

Jeff Schaller
  • 67,283
3

Remember that a user could have a cat in his $PATH which is not exactly the POSIX cat (but perhaps some variant which could log something somewhere). In that case, you don't want the shell to remove it.

The PATH could change dynamically, and then cat is not what you believe it is. It would be quite difficult to write a shell doing the optimization you dream of.

Also, in practice, cat is quite a quick program. There are few practical reasons (except aesthetics) to avoid it.

See also the excellent Parsing POSIX [s]hell talk by Yann Regis-Gianas at FOSDEM2018. It gives other good reasons to avoid attempting doing what you dream of in a shell.

If performance was really an issue for shells, someone would have proposed a shell which uses sophisticated whole program compiler optimization, static source code analysis, and just-in-time compilation techniques (all these three domains have decades of progress and scientific publications and dedicated conferences, e.g. under SIGPLAN). Sadly, even as an interesting research topic, that is not currently funded by research agencies or venture capitalists, and I am deducing that it is simply not worth the effort. In other words, there is probably no significant market for optimizing shells. If you have half a million euro to spend on such research, you'll easily find someone to do it, and I believe it would give worthwhile results.

On a practical side, rewriting a small (a hundred lines or so) shell script in a better scripting language (Python, AWK, Guile, ...) to improve its performance is commonly done. And it is not reasonable (for many software-engineering reasons) to write large shell scripts: when you are writing a shell script exceeding a hundred lines, you do need to consider rewriting it (even for readability and maintenance reasons) in some more suitable language, as the shell is a very poor programming language. However, there are many large generated shell scripts, and for good reasons (e.g. GNU autoconf generated configure scripts).

Regarding huge textual files, passing them to cat as a single argument is not good practice, and most sysadmins know that (when any shell script takes more than a minute to run, you begin considering optimizing it). For large, gigabyte-sized files, cat is never the right tool to process them.

  • 3
    "Quite few practical reasons to avoid it" -- anyone who's waited for cat some-huge-log | tail -n 5 to run (where tail -n 5 some-huge-log could jump straight to the end, whereas cat reads only front-to-back) would disagree. – Charles Duffy Apr 12 '19 at 22:22
  • Comment checks out ^ cat-ing a large text file in the tens-of-GB range (which was created for testing) takes kinda long time. Wouldn't recommend. – Sergiy Kolodyazhnyy Apr 13 '19 at 01:03
  • 1
    BTW, re: "no significant market for optimizing shells" -- ksh93 is an optimizing shell, and a quite good one. It was, for a while, successfully sold as a commercial product. (Sadly, being commercially licensed also made it sufficiently niche that poorly-written clones and other less-capable-but-free-of-cost successors took over the world outside of those sites willing to pay for a license, leading to the situation we have today). – Charles Duffy Apr 13 '19 at 18:36
  • (not using the specific techniques you note, but frankly, those techniques don't make sense given the process model; the techniques it does apply are, well, well applied and to good effect). – Charles Duffy Apr 13 '19 at 18:42
2

Adding to @Kusalananda's answer (and @alephzero's comment), cat could be anything:

alias cat='gcc -c'
cat "$MYFILE" | command1 | command2 > "$OUTPUT"

or

echo 'echo 1' > /usr/bin/cat
cat "$MYFILE" | command1 | command2 > "$OUTPUT"

There is no reason that cat (on its own) or /usr/bin/cat on the system is actually cat, the concatenation tool.

Rob
  • 121
  • 3
    Other than the behaviour of cat is defined by POSIX and so shouldn't be wildly different. – Chris Davies Apr 11 '19 at 14:13
  • 2
    @roaima: PATH=/home/Joshua/bin:$PATH cat ... Are you sure you know what cat does now? – Joshua Apr 11 '19 at 17:53
  • 1
    @Joshua it doesn't really matter. We both know cat can be overridden, but we also both know it shouldn't be wantonly replaced with something else. My comment points out that POSIX mandates a particular (subset of) behaviour that can reasonably be expected to exist. I have, at times, written a shell script that extends behaviour of a standard utility. In this case the shell script acted and behaved just like the tool it replaced, except that it had additional capabilities. – Chris Davies Apr 11 '19 at 21:12
  • @Joshua: On most platforms, shells know (or could know) which directories hold executables that implement POSIX commands. So you could just defer the substitution until after alias expansion and path resolution, and only do it for /bin/cat. (And you'd make it an option you could turn off.) Or you'd make cat a shell built-in (which maybe falls back to /bin/cat for multiple args?) so users could control whether or not they wanted the external version the normal way, with enable cat. Like for kill. (I was thinking that bash command cat would work, but that doesn't skip builtins) – Peter Cordes Apr 12 '19 at 01:13
  • If you provide an alias, the shell will know that cat in that environment no longer refers to the usual cat. Obviously, the optimization should be implemented after the aliases have been processed. I consider shell built-ins to represent commands in a virtual directory that is always prepended to your path. If you want to avoid the shell built-in version of any command (e.g. test) you have to use a variant with a path. – Mikko Rantalainen Apr 13 '19 at 08:52
1

Two "useless" uses for cat:

sort file.txt | cat header.txt - footer.txt | less

...here cat is used to mix file and piped input.

find . -name '*.info' -type f | sh -c 'xargs cat' | sort

...here xargs can accept a virtually infinite number of filenames and run cat as many times as needed while making it all behave like one stream. So this works for large file lists where direct use of xargs sort does not.

tasket
  • 225
  • Both of these use cases would be trivially avoided by making the shell built-in only step in if cat is called with exactly one argument. Especially in the case where sh is passed a string and xargs calls cat directly, there's no way the shell could use its built-in implementation. – Mikko Rantalainen Apr 13 '19 at 08:55
0

Aside from other things, a cat-check would add performance overhead and confusion as to which use of cat is actually useless, IMHO, because such checks can be inefficient and can create problems with legitimate cat usage.

When commands deal with the standard streams, they only have to care about reading/writing to the standard file descriptors. Commands can know whether stdin is seekable (lseek-able) or not, which indicates whether it is a pipe or a file.

If we add to the mix checking which process actually provides that stdin content, we will need to find the process on the other side of the pipe and apply the appropriate optimization. This can be done from the shell itself, as shown in the SuperUser post by Kyle Jones; in terms of shell, that's

(find /proc -type l | xargs ls -l | fgrep 'pipe:[20043922]') 2>/dev/null

as shown in the linked post. That is 3 more commands (so extra fork()s and exec()s) and a recursive traversal (so a whole lot of readdir() calls).

In terms of C and shell source code, the shell already knows the child process, so there's no need for recursion, but how do we know when to optimize and when cat is actually useless? There are in fact useful uses of cat, such as:

# adding header and footer to file
( cmd; cat file; cmd ) | cmd
# tr command does not accept files as arguments
cat log1 log2 log3 | tr '[:upper:]' '[:lower:]'

It would probably be wasteful and unnecessary overhead to add such optimization to the shell. As Kusalananda's answer already mentioned, UUOC is more about the user's own lack of understanding of how to best combine commands for best results.