I noticed that the operator ">>" doesn't work well in my script and I don't know why. I have a script like this:
for file in $(ls folder)
do
    echo $file >> text.txt
done
In the folder I have 91 elements, but only the first 87 were appended to text.txt. I can't figure out what is wrong with this code; can anybody help me understand, please?
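For reference, this is a minimal way to recreate my setup (the file names and contents here are made up):
# create 91 dummy files like the ones in my folder
mkdir -p folder
for i in {1..91}; do echo "dummy" > "folder/file$i"; done

# after running the loop above, count the appended lines
wc -l text.txt   # I expect 91 lines here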
EDIT
The script I wrote above is very simplified, and I understand it doesn't give a clear picture of the situation, so here are more details:
In my folder I have 91 csv files, each containing two columns: a name and a value. For every file I need to check whether the value is greater than 2.500 or equal to 0.000. If either of these 2 conditions is true, I append the file name and its value to a txt file that collects my discarded files; otherwise I append the file name and its value to a csv file that collects my selected files. The code I use to check the value works well, but when I use >> to append the results to the txt or csv file, the last four results aren't appended, and I can't understand why.
for file in $(ls folder)
do
    value=$(cat path/$file | awk -F, '{print $2}')
    discard=$(awk -v num1="$value" 'BEGIN { if (num1 > 2.500) print 1; else if (num1 == 0.000) print 0; else print 2 }')
    if [[ $discard -eq 1 || $discard -eq 0 ]]
    then
        echo ""$file" has value="$value"" >> path/discard.txt
        rm path/"$file"
    else
        echo ""$file",$value" >> path/selected.csv
        rm path/"$file"
    fi
done
This is a more complete version of my script.
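For what it's worth, the two awk calls could be combined into a single test; this is just a sketch of the check in isolation, assuming value already holds one number as above and path/ is the same placeholder directory:
# exit status 0 means "discard": value > 2.500 or value == 0.000
if awk -v num1="$value" 'BEGIN { exit !(num1 > 2.500 || num1 == 0.000) }'
then
    echo "$file has value=$value" >> path/discard.txt
else
    echo "$file,$value" >> path/selected.csv
fi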
EDIT 2
I corrected my script to fix the issues you found in it. Still the same problem. To be clearer, the files in my folder are csv files automatically generated by a program, and each contains a single row with 2 columns: an ID and a float value. They are all very similar, so the problem isn't there; I can also see from the terminal that the script recognizes and processes them correctly. I still don't know why the append doesn't put the last 4 files into the txt file.
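For example, one of these files contains a single line like this (the ID and value here are made up):
sample_001,3.742
Here is the corrected script: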
for file in folder/*
do
    value=$(cat "$file" | awk -F, '{print $2}')
    discard=$(awk -v num1="$value" 'BEGIN { if (num1 > 2.500) print 1; else if (num1 == 0.000) print 0; else print 2 }')
    if [[ "$discard" -eq 1 || "$discard" -eq 0 ]]
    then
        echo ""$file" has value="$value"" >> path/discard.txt
        rm -- "$file"
    else
        echo ""$file",$value" >> path/selected.csv
        rm -- "$file"
    fi
done
By the way, if I reduce the number of files, it works perfectly.
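A simple count check I can run (a sketch; it must be done before and after the run, since the script removes the input files):
# before the run: count input files
find folder -type f | wc -l

# after the run: the two output files together should have the same total
cat path/discard.txt path/selected.csv | wc -l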
Not only is parsing ls problematic; many other things can influence what echo does. The command echo is unreliable and depends not only on what is in the variable file, but also on the flavor of Unix and the shell itself. – Vilinkameni Sep 22 '23 at 15:37

What does cat -A file or od -c file print for those files? – ilkkachu Sep 22 '23 at 19:27

Though with ls there, the issues from using it would show up as error messages when the script would try to open files with wrong names. Similarly for echo, it should be easy to notice if the output was corrupt (and the issue would really only come if the data contains backslashes, or if you had filenames like "-e foo", with the space.) – ilkkachu Sep 22 '23 at 19:31

Use for file in folder/* and double-quote your variables when you use them. – Chris Davies Sep 22 '23 at 20:11

So you're saying that if you create 91 files with for i in {1..91}; do echo "1,2" > folder/file$i.txt; done, then it doesn't work, but with 87 identical ones, it does? Or are you? (I just tried the exact script you posted with that exact command, it works ok and produces 91 lines of output.) I would still suggest looking very closely at the actual data. Or heck, just post the problematic set of files somewhere. – ilkkachu Sep 24 '23 at 17:43

Put set -x into your script and analyze the trace. This would also help you provide a simple reproducible example. – user1934428 Sep 26 '23 at 06:45
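A sketch of what the last two comments suggest, with script.sh and folder/file1.csv as placeholder names:
# trace every command the script executes and save it for analysis
bash -x ./script.sh 2> trace.log

# make invisible characters (CR, tabs, missing final newline) visible in one input file
cat -A folder/file1.csv
od -c folder/file1.csv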