The question is tagged `sed` and `grep`, so I assume there is interest in an answer that uses regular expressions. The question also indicates that the input data file is large, so I assume that performance is a consideration.

I also assume that, since the input file contains one filename per line, there are no pathological filenames containing newline characters.
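For concreteness, I will assume `input.txt` pairs each filename with a trailing number, one entry per line, along these (hypothetical) lines:

```
backup.tar 49
notes.txt 51
photo.jpg 150
video.mp4 200
```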
The other answers effectively spawn a `cp` process for every file, which incurs unnecessary process-creation overhead. Instead we can use the facilities of `xargs` to call `cp` with as many filenames as will fit on each command line:
```
sed -rn 's/ (5[1-9]|[6-9].|1..)$//p' input.txt | tr '\n' '\0' | xargs -0 cp -t /destdir
```
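Here GNU `cp`'s `-t /destdir` option names the destination directory up front, leaving `xargs` free to append as many source filenames as it likes at the end of the command line.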
The `sed` uses a regular expression to match the open numerical interval (50, 200), i.e. the whole numbers 51 through 199. Using regular expressions for numerical inequalities is not always the most elegant thing to do, but in this case the required expression is fairly straightforward.
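As a quick sanity check, running the `sed` stage alone over the hypothetical sample above prints only the filenames whose trailing number falls inside the interval:

```
$ printf '%s\n' 'backup.tar 49' 'notes.txt 51' 'photo.jpg 150' 'video.mp4 200' \
    | sed -rn 's/ (5[1-9]|[6-9].|1..)$//p'
notes.txt
photo.jpg
```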
We are assuming that the filenames contain no newlines, but they may contain other unhelpful characters, such as spaces. `xargs` will handle these correctly if given `\0`-delimited data, so we use `tr` to convert all newlines to null characters.
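A small illustration of why this matters, using `printf` as a stand-in for `cp` so the argument boundaries are visible (the filenames here are made up):

```
$ printf '%s\n' 'file one.txt' 'file two.txt' | tr '\n' '\0' | xargs -0 printf '<%s>\n'
<file one.txt>
<file two.txt>
```

Each filename reaches the command as a single argument, spaces and all.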
The above assumes the GNU versions of `sed` and `xargs`. If instead you have the BSD versions (e.g. on OS X), the command is slightly different:
```
sed -En 's/ (5[1-9]|[6-9].|1..)$//p' input.txt | tr '\n' '\0' | xargs -0 -J {} cp {} /destdir
```
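Two things change here: BSD `sed` uses `-E` rather than `-r` to enable extended regular expressions, and since BSD `cp` has no `-t` option, the BSD-specific `xargs -J {}` flag is used to splice the batched filenames into the middle of the command line, before `/destdir`.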
These commands will spawn exactly one copy each of `sed`, `tr` and `xargs`. There may be more than one spawn of `cp`, but each one will copy multiple files: `xargs` will fill up each `cp` command line to achieve efficient utilisation. This should provide a significant performance improvement over the other answers when the input data is large.