When a Linux command gets multiple files as input, is there any command or trick to split the command's stdout into sections, one for each file, so it becomes visually clear which lines belong to which file?
For example, when commands such as `cat`, `grep`, `sed`, `sort`, and `tail` get multiple files as input, they generally print their results continuously to stdout. But I want the result split into sections, one for each of the input files.
I do NOT want to split the result into separate files, or split it by the number of lines, as is asked in this question.
The closest thing I found is using `find -print -exec command`, as explained here, but it's still not quite what I wanted.
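That approach looks roughly like this: `-print` emits each matched path before `-exec` runs the command on it, which interleaves filenames with output. A sketch, using `grep` and a `file*` glob as stand-ins for the real command and filenames:

```shell
# For each matching file, print its path, then run the command on it.
# 'pattern' and the name glob are placeholders for the real ones.
find . -name 'file*' -print -exec grep 'pattern' {} \;
```

The downside is that the filename is just another line in the stream, with no visual separation from the command's output.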
Assuming my command takes `file1`, `file2`, and `file3` as input and prints the following:
line1 of file1
line2 of file1
line3 of file1
line1 of file2
line2 of file2
line3 of file2
line1 of file3
line2 of file3
line3 of file3
I want the output formatted something like the following, so that once I've seen what I was looking for in one file, I can jump to the next one:
>>> file1 <<<
line1 of file1
line2 of file1
line3 of file1
----------------
>>> file2 <<<
line1 of file2
line2 of file2
line3 of file2
----------------
>>> file3 <<<
line1 of file3
line2 of file3
line3 of file3
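For what it's worth, `awk` can produce exactly this layout in one pass: `FNR` (the per-file record number) resets to 1 at the start of each input file, so it marks file boundaries, while `NR` (the global record number) tells you whether you're past the first file and need a separator. A sketch, assuming the three files exist:

```shell
# At the first line of each file, print a separator (except before the
# first file) and a ">>> name <<<" header; the trailing 1 prints every line.
awk 'FNR==1 { if (NR!=1) print "----------------"
              print ">>> " FILENAME " <<<" } 1' file1 file2 file3
```

This works with any tool that can read its input from a pipe, e.g. `awk '...' file1 file2 file3 | grep pattern`, though filtering after the headers are added may leave headers for files with no matches.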
Thanks in advance.
for f in file1 file2 file3; do echo ">>> $f <<<"; your_command "$f"; done
– dirkt Apr 14 '20 at 04:11