
Pipes (|) and redirections (<, <<, >, >>) both use the standard streams (stdin, stdout, stderr), yet only a pipe seems to keep sudo privileges. Why?

Works:

sudo echo "hello" | tee /root/test

Doesn't work:

sudo echo "hello" > /root/test
Chris Davies
linuxer
  • sudo echo "hello" | tee /root/test doesn't work either, if sudo is needed to access the file. In each of the cases, consider which program is opening the file. – ilkkachu Aug 28 '23 at 19:29

2 Answers


Pipes (|) and redirections (<, <<, >, >>) both use the standard streams (stdin, stdout, stderr), yet only a pipe seems to keep sudo privileges. Why?

That's simply not true. You must be mixing things up:

sudo echo "hello" | tee /root/test

Here echo is run as root, but tee is run as your current user, which presumably is not root.

This would be different:

echo "hello" | sudo tee /root/test

Here, the tee program is executed as root, and hence is able to open /root/test for writing.
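You can see which process opens the file even without sudo. This is a minimal sketch (the path is hypothetical; it uses a nonexistent directory so that neither open can succeed, and it assumes GNU tee, which keeps copying stdin to stdout even when it cannot open its file argument):

```shell
#!/bin/sh
# A target that cannot be opened: its parent directory does not exist.
target="/tmp/no-such-dir-$$/file"

# With a redirection, the *shell* tries to open the target, fails,
# and never runs echo at all:
echo hello > "$target" 2>/dev/null || echo "shell could not open target; echo never ran"

# With a pipe, echo runs normally; it is *tee* that fails to open the
# file, while the data still flows through the pipe:
out=$(echo hello | tee "$target" 2>/dev/null)
echo "echo ran; tee still forwarded: $out"
```

The same split explains the sudo case: with `>`, the file is opened by your unprivileged shell; with `| sudo tee`, it is opened by a root-owned tee.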


Redirections (>, <, etc.) and pipelines (|, etc.) are set up by the parent process (usually your shell) before any of your commands is run.

Once the parent process decides to run something, e.g. sudo ls /root | grep test, it creates two processes, setting their standard I/O streams (STDIN, STDOUT, STDERR) appropriately.

For the process that will run sudo, its STDOUT is connected to the STDIN of the process that will run grep.

After this setup (using the parent process's UID:GID) the sudo and grep binaries are loaded into their processes and executed.
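On Linux you can observe this wiring from inside a pipeline: by the time the command starts, the shell has already pointed its STDOUT at a pipe object. A small sketch (this relies on /proc, so it is Linux-specific):

```shell
#!/bin/sh
# Run on the left side of a pipe, fd 1 is already a pipe when the
# command starts; readlink reports something like "pipe:[123456]".
readlink /proc/self/fd/1 | cat
```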

Programs can simply read STDIN, write STDOUT, and send errors to STDERR, and leave the "plumbing" to the parent process.

This is a major design feature of Unix/Linux. I've programmed systems with Job Control Languages that made me specify all those inter-program connections via temporary storage. Ugh.
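That every stage of a pipeline is started before any of them finishes can be seen directly. In this sketch, the right-hand side of the pipe announces itself while the left-hand side is still sleeping:

```shell
#!/bin/sh
# Both compound commands are forked up front. "right-started" appears
# immediately; "left-done" follows about a second later, relayed by cat.
{ sleep 1; echo left-done; } | { echo right-started; cat; }
```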

waltinator
  • Before? If I run e.g. sudo ls /root | grep test, how can the parent process know the output before sudo ls /root has run? – linuxer Aug 29 '23 at 16:30
  • @linuxer Expanded my answer with a slightly more detailed, hand-wavy explanation of How It Works. – waltinator Aug 29 '23 at 17:13
  • Very good explanation, but are you sure that bash initializes all redirections and pipelines before any binary is executed? AFAIK bash doesn't limit the length of commands, so pipelines can be hundreds of stages long. Then IMO it is more reasonable that bash first runs the ls command, fork()s a new process, copies the ls command's stdout to the new process's stdin, and exec()s grep "in" that new process. And yeah, whether it's the way you explained or my way, grep doesn't get sudo privileges... I was just stupid :D – linuxer Aug 29 '23 at 17:36
  • @linuxer, yes, it's the shell that processes redirections, and they're processed before the commands involved are executed. But the shell does fork a new child process first, so it can set up the redirections there, and then execute the command to run in that child process. Also, the shell doesn't copy or otherwise touch any data that passes between the two; it just sets up the connection. I can't tell what you're referring to with "the length of the commands" in the context of pipes. – ilkkachu Aug 29 '23 at 21:09
  • @linuxer Everything has limits, that's why we have xargs. Do: xargs --show-limits – waltinator Aug 31 '23 at 00:33
  • @ilkkachu: By "the length of the commands" I mean the fact that a command can be "ls | grep | cut | ...hundreds of commands here... | cat". So, if I make a very big command with hundreds of programs in sequence, bash needs to initialize hundreds of processes before executing those programs, and that sounds stupid. Why does bash initialize all the processes before executing? Are you sure that bash does it as you explained and not as I explained? – linuxer Sep 14 '23 at 03:30
  • @waltinator: What? And what does that command do? – linuxer Sep 14 '23 at 03:30
  • I've never, in my 55+ years of experience, seen an example of, or need for, your "...hundreds of commands here..." pipeline. Can you provide an example that's not glaringly bad code? – waltinator Sep 14 '23 at 05:37
  • Consider the simple case: cmd1 | cmd2. The parent shell sets up the connection between cmd1's STDOUT and cmd2's STDIN. Both processes must exist for this to happen. After this connection is made, the parent shell has no more to do with it. If cmd1 writes to STDOUT constantly, and cmd2 does not read from STDIN, eventually cmd1 will fill its buffers and will "block" (become temporarily unrunnable) until cmd2 reads its STDIN. In this fashion, data flows, in lockstep, from command to command all the way down the pipeline. Sorry this is long. – waltinator Sep 14 '23 at 05:52
  • To learn more about the bash shell, explore https://www.gnu.org/software/bash/ You can even download and read the source. – waltinator Sep 14 '23 at 05:59
  • @linuxer, the point is that the processes run in parallel, not sequentially. (Of course the data flow passes through each sequentially.) You'll have to decide for yourself what kind of pipelines make sense. With a hundred processes, the communication overhead might start to hurt, so maybe don't set up pipelines like that. Also, parsing the output of ls is a bit fishy, since it's not able to output all filenames unambiguously. – ilkkachu Sep 14 '23 at 07:01
  • @waltinator & message "I've never, in my 55+ years experience seen...": Agreed, but AFAIK it's possible, and in those situations it seems silly if bash spawns a hundred processes before executing any of the programs. That's why I asked: is it really the case that bash initializes every redirection/pipe before executing any binary? Do you see my question now? – linuxer Sep 17 '23 at 02:35
  • @waltinator & message "Consider the simple case: cmd1...": Why does cmd1 fill its buffer in that case? cmd2 is in the same RAM, I think. And what blocks cmd1 in that case? The kernel? "In lockstep"? – linuxer Sep 17 '23 at 02:42
  • @ilkkachu & message "the point is that the proce...": I think you misunderstood my question. I understand almost everything in waltinator's answer and am wondering about just one simple thing. It totally makes sense that bash needs to initialize redirections before executing the command, but when the user has piped multiple commands together, it sounds curious that bash does that for all of the piped commands before executing any of them. It sounds more reasonable if bash in that case first initializes a process for the first command, then executes the first command, then initializes a process for the second command, and so on. – linuxer Sep 17 '23 at 02:57
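The lockstep/blocking behaviour discussed in these comments is easy to demonstrate: yes would write "y" forever, but once head has read three lines and exits, the pipe closes and yes is stopped (via SIGPIPE) by the kernel, not by the shell:

```shell
#!/bin/sh
# yes writes endlessly; head reads only three lines and exits.
# The kernel's pipe buffering and SIGPIPE provide the flow control.
yes | head -n 3
```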