This can be done with find (and xargs), but it won’t win any beauty contests.

Write a script called check_files:

#!/bin/sh
find "$@" -size +800M -print
Then run

xargs -d '\n' < xyz/symlinks_paths.txt ./check_files

where

- You can move the < xyz/symlinks_paths.txt redirection to the end of the command line, as in xargs -d '\n' ./check_files < xyz/symlinks_paths.txt, or to the beginning, or anywhere else. Or you can replace it with -a xyz/symlinks_paths.txt. Any of these means that xargs will read from xyz/symlinks_paths.txt.
- You can replace ./check_files with an absolute pathname to your check_files script.
- -d '\n' means use newline as the delimiter when reading xyz/symlinks_paths.txt. You can probably leave this off if your filenames don’t contain whitespace (space(s) or tab(s)), quotes (remember that a single quote (') is the same character as an apostrophe) or backslashes, and you’re willing to wager a year’s salary that they never ever will. (There is a short demonstration of this right after this list.)
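
If you would rather see the delimiter issue than take it on faith, here is a small throwaway demonstration. The names demo, big file.dat and list.txt are invented for the example, and ls -ld stands in for the check_files script just to make the effect visible; note that -d and -a are GNU xargs options.

mkdir -p demo && cd demo
touch 'big file.dat'                          # a name containing a space
printf '%s\n' "$PWD/big file.dat" > list.txt  # one pathname per line

# Without -d '\n', xargs splits on any whitespace, so ls is handed the
# two non-existent names ".../big" and "file.dat" and complains:
xargs < list.txt ls -ld

# With -d '\n' (or, equivalently, with -a instead of the redirection),
# each line is passed through as one intact argument:
xargs -d '\n' < list.txt ls -ld
xargs -d '\n' -a list.txt ls -ld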
Putting it together: xargs reads each line of the file and passes the lines as arguments to the check_files script, which hands them to find as starting-point arguments.

Many people know that you can run find with multiple starting-point arguments; e.g.,

find dir1 dir2 dir3 search-expression

It’s not so well known that those arguments don’t have to be directories; they can be files; e.g.,

find file1 file2 file3 search-expression

(or a mixture of directories and files). find will simply apply the expression to each file named as a starting point. So this checks each file whose name is listed in xyz/symlinks_paths.txt to see whether it is larger than 800M, and prints the names of those that are.
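
If the file-as-starting-point behaviour sounds surprising, it is easy to check with a couple of scratch files. The names small.bin and large.bin are invented, and truncate (from GNU coreutils) is used only because it creates sparse files of the requested size instantly:

truncate -s 1M   small.bin
truncate -s 801M large.bin       # sparse, so it uses no real disk space
find small.bin large.bin -size +800M -print
# prints only: large.bin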
If the filenames might refer to symbolic links (as the xyz/symlinks_paths.txt name suggests) and you want to look at the pointed-to files (which you surely do), change find to find -L.
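
Spelled out, the symlink-following version of the script would look like this; the only wrinkle is that -L, being an option, has to come before the starting points supplied in "$@":

#!/bin/sh
# As above, but -L makes find test the size of the file each symlink
# points to, rather than the size of the link itself.
find -L "$@" -size +800M -print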
You don’t need to have a separate check_files script; you can do

xargs -d '\n' < xyz/symlinks_paths.txt sh -c 'find "$@" -size +800M -print' sh

Again, change find to find -L if desired.
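
The lone sh at the end is easy to misread as a typo. With sh -c, the first word after the command string becomes $0 inside the inline script, and only the words after that become the positional parameters that "$@" expands to. A tiny demonstration (the word demo and the two pathnames are made up):

sh -c 'echo "this is \$0: $0"; printf "argument: %s\n" "$@"' demo /tmp/a /tmp/b
# this is $0: demo
# argument: /tmp/a
# argument: /tmp/b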
Comments from the question’s thread:

- Does line3 by any chance come from having done an ls on a directory? Is the overall purpose to get files that are larger than 800M? If so, find dir -type f -size +800M does that (in dir). No need to parse ls or call grep or loop, or explicitly test variables or anything. – Kusalananda Feb 20 '18 at 06:29
- Why the exit 1? Do you only want to print the name of the first file in xyz/symlinks_paths.txt that's greater than 800MB? – cas Feb 20 '18 at 07:00
- You have an exit 1 inside the loop, i.e. you're telling the script to exit after printing the first match. If you don't want it to do that, then don't tell it to. BTW, you don't need an else condition. Another way of looking at that is "the default else condition is to do nothing". – cas Feb 20 '18 at 09:23
- I will remove the exit 1 part and I don't want to write the else part. Just if [ true ] then ... fi? Will if work without else? I think yes. I will experiment. Thanks. :) – Pompy Feb 20 '18 at 09:28
- find was my first thought too - but the files to examine are listed in a file called xyz/symlinks_paths.txt. Admittedly, if the only thing that file is used for is this while loop then the OP could just use find as you suggest. Also, I suspect that the list is a list of symlinks, not regular files, so use -type l rather than -type f. – cas Feb 20 '18 at 09:32
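
The loop being discussed in those comments is not shown here, so the following is only a guess at its corrected shape: a while-read loop over xyz/symlinks_paths.txt with no exit 1 (so it does not stop at the first match) and an if with no else branch, which is perfectly legal in sh. The find -L size test is borrowed from the answer above.

#!/bin/sh
# Hypothetical reconstruction of the loop from the comments.
while IFS= read -r path; do
    # find -L prints the path only if the pointed-to file is larger than 800M
    if [ -n "$(find -L "$path" -size +800M -print)" ]; then
        printf '%s\n' "$path"     # no else branch, and no exit 1
    fi
done < xyz/symlinks_paths.txt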