
Assume I want to execute the command foo, which expects a specific file as an argument. I want to execute that command for a temporary purpose only, so I'd like the mentioned file to be a temporary file, too.

In my specific use case I'd create the file temporarily, execute the command with the temporary file, and delete the file afterwards. This works fine.

Is there a way to pass that file via some kind of stream handler (to borrow a term from programming languages), so I don't have to create it as a real file just temporarily?

What I'm thinking of is something like foo "$( echo some content mimicking a file )", which I know won't work as intended, but it describes what I'd like to achieve.
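What the quoted command substitution can't do, process substitution can: in bash, ksh, and zsh, `<(command)` expands to a file-system path (typically `/dev/fd/NN`) that the called program can open and read like a file, without any real file being created. A minimal sketch, using `cat` as a stand-in for the hypothetical `foo`:

```shell
# <(...) is replaced by a path such as /dev/fd/63; the data flows
# through an anonymous pipe, never touching the disk.
cat <(echo "some content mimicking a file")
```

Any program that merely opens its file argument and reads it sequentially will work this way; programs that need to seek or reopen the file will not, since a pipe cannot be rewound.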


Edit: My actual use case

I have multiple ansible roles to provision my entire system from scratch. Sometimes I need to run one specific role for update purposes only. So currently I'm writing a temporary playbook, executing it and deleting it afterwards.

    cat > ~/path/to/provisioning/scripts/tmp.yml << EOF
- name: Executing the tasks \`tmp\`
  hosts: localhost
  become: yes
  roles:
    - apt
    - ${1}
EOF
    ansible-playbook ~/path/to/provisioning/scripts/tmp.yml
    rm ~/path/to/provisioning/scripts/tmp.yml
  • Thx for mentioning the closely related question. As far as I understand from its answers, comments and the linked Wiki article, I only have a temporary file and the writing process won't be killed, just blocked, after 64k. The latter means I have a backgrounded process constantly writing data to that pipe while the consuming process reads it, which removes the data from the top of the pipe (which is exactly what FIFO means), and the writing process continues writing. The temporary file created ... where does it remain? As far as I understand it should be of size 0 bytes while the data is stored in memory. – codekandis Dec 02 '19 at 21:52
  • There's no temporary file: process substitution (as in foo <(bar)) does not use temporary files, but pipes. Pipes are not and cannot be implemented with temporary files -- not even conceptually. –  Dec 03 '19 at 09:02
  • @mosvy Then please, how to understand this - https://unix.stackexchange.com/a/63933/250324 >[...] Although the pipe exists as a file node on disk, the data which passes through it does not; it all takes place in memory. [...] – codekandis Dec 03 '19 at 12:05
  • @mosvy I tested now. mkfifo named_pipe and I found prw-r--r-- ... named_pipe. So I end up with the same problem I intended to prevent with my question: creating a file / named pipe, processing data, deleting the file / named pipe. In fact it costs the same then. – codekandis Dec 03 '19 at 12:09
  • Please give an example of what you're trying to achieve: feel free to use any language you're comfortable with. Explain why simply echo content | foo or echo content | foo /dev/stdin or foo <(echo content1) <(echo content2) won't do. –  Dec 03 '19 at 17:12
  • That answer shows how to use a named pipe. While there are some systems where bash behind the scenes will use named pipes to implement process substitutions (FreeBSD, AIX), you don't have to care about it, and on most systems (Linux, OpenBSD, Solaris) it will use anonymous pipes (as created with the pipe(2) system call), exploiting the fact that they're still accessible in the file system via the /dev/fd mechanism: foo <(bar) will turn into foo /dev/fd/63 or similar, with /dev/fd/63 opening the same file the file descriptor 63 refers to in the foo process. –  Dec 03 '19 at 17:13
  • I added my actual use case. As it's clear it'd be the same script as with a named pipe, except the additional mkfifo one liner. – codekandis Dec 04 '19 at 08:09
  • You really should tag the roles in your playbooks. Then you can restrict a playbook run to a set of tags (e.g., https://stackoverflow.com/a/38384205/2072269) – muru Dec 04 '19 at 08:18
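The tagging approach from the last comment can be sketched as follows: keep one permanent playbook with every role tagged, then restrict a run to a single role with --tags. Role and tag names here are illustrative, not from the original playbooks:

```shell
# One permanent playbook; each role carries a tag of the same name.
cat > site.yml << 'EOF'
- name: Provision the whole system
  hosts: localhost
  become: yes
  roles:
    - { role: apt,   tags: ['apt'] }
    - { role: nginx, tags: ['nginx'] }
EOF

# Run only the tasks tagged "nginx"; everything else is skipped.
ansible-playbook site.yml --tags nginx
```

This removes the need for a throwaway playbook entirely: the file is written once and kept, and the role selection happens on the command line.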
