The following Perl script opens each file specified on the command line, storing the filehandle for each in an array. It then repeatedly reads and prints up to 4 lines at a time from each file (checking for EOF before each read, and decrementing a counter $numopen when a file reaches EOF) until there are no files left with unread lines. It doesn't bother closing the filehandles because Perl automatically closes all open files on exit.
#!/usr/bin/perl
use strict;
use warnings;

my @filehandles = ();
my $files = 0;

# open each input file, saving its filehandle
foreach my $filename (@ARGV) {
    open($filehandles[$files++], "<", $filename) ||
        die "Couldn't open '$filename': $!";
}

# number of files that still have unread lines
my $numopen = $files;

# print up to 4 lines at a time from each file until all are exhausted
while ($numopen > 0) {
    for my $i (0 .. $files - 1) {
        next if eof($filehandles[$i]);
        for (1 .. 4) {
            last if eof($filehandles[$i]);
            print scalar readline($filehandles[$i]);
        }
        # this file is done once a batch ends at EOF
        $numopen-- if eof($filehandles[$i]);
    }
}
Save this script as, e.g., interleave4.pl, make it executable with chmod +x interleave4.pl, and run it as ./interleave4.pl File[1-7].
This script has been tested by creating 7 files with the following bash one-liner:
for i in {1..7}; do printf "File$i %s\n" {1..10} > "File$i"; done
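With seven unedited 10-line files like these, the interleaved output should begin like this (continuing through File7, then lines 5-8 of each file, and finally the last two lines of each):

File1 1
File1 2
File1 3
File1 4
File2 1
File2 2
File2 3
File2 4
...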
Some of the files were then edited so that they didn't all have the same number (10) of lines, to make sure the script copes gracefully with that situation (it does: it simply moves on to the next file without complaint). It also has no problem with input files whose line counts aren't evenly divisible by 4.
Note: this script could easily be modified so that the number of lines to print on each pass through the main loop isn't hard-coded to 4 but is taken as an option on the command line.
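For example, here is a minimal sketch of that modification, assuming a hypothetical -n/--lines option parsed with the core Getopt::Long module (the option name and its default of 4 are my own choices, not part of the original script):

#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

# hypothetical -n / --lines option; defaults to the original 4
my $batch = 4;
GetOptions('n|lines=i' => \$batch) or die "Usage: $0 [-n LINES] FILE...\n";

my @filehandles = ();
my $files = 0;

# open each input file, saving its filehandle
foreach my $filename (@ARGV) {
    open($filehandles[$files++], "<", $filename) ||
        die "Couldn't open '$filename': $!";
}

# number of files that still have unread lines
my $numopen = $files;

# print up to $batch lines at a time from each file
while ($numopen > 0) {
    for my $i (0 .. $files - 1) {
        next if eof($filehandles[$i]);
        for (1 .. $batch) {
            last if eof($filehandles[$i]);
            print scalar readline($filehandles[$i]);
        }
        # this file is done once a batch ends at EOF
        $numopen-- if eof($filehandles[$i]);
    }
}

Since GetOptions removes the parsed options from @ARGV, the remaining arguments are still the filenames, and the script could then be run as, e.g., ./interleave4.pl -n 3 File[1-7].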