I have multiple CSV files with many records. Each row should have 134 columns, but in practice the rows vary in width (from 15 to 200 columns). I need to sort the rows according to their number of columns.
I was able to count each row's columns using:
$ awk -F"," '{print NF}' 1.csv
... which gives something like:
134
134
134
5
25
133
...
Now, I would like to prepend each row's column count to that row, so that I can later sort the rows by it. How can I add this number at the beginning of each row, and then sort?
I'd also like to split the output: rows whose count is 134 should go into one file, and the remaining rows into separate files, one per column count.
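Something along these lines is what I'm after (an untested sketch using a hypothetical `sample.csv`; the real files are wider, and `cols_<N>.csv` is just an assumed naming scheme):

```shell
# Sample input standing in for the real CSVs: rows with 3, 2, and 3 fields
printf '%s\n' 'a,b,c' 'x,y' 'p,q,r' > sample.csv

# Prepend the column count (NF) to every row, then sort numerically by it
awk -F, '{print NF "," $0}' sample.csv | sort -t, -k1,1n > sorted.csv

# Split rows into per-count files: each row is appended to cols_<NF>.csv,
# so all 134-column rows would end up together in cols_134.csv
awk -F, '{print > ("cols_" NF ".csv")}' sample.csv
```

The leading count can later be stripped again with `cut -d, -f2-` if it is only needed for sorting.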
A small INPUT file example (3 rows):
2,"A.B.C.D",50,"SDf3oa701-ab73-a0pcs90","7012218969217-1413752517-32448","SDf3oa701-ab73-a0pcs90","SIP",,"<99999@sipgw5003.com>;tag=70122","<8888888@X.Y.Z.W>",17,0,"00:01:57.827 GMT Oct 20 2014","00:00:00.000 UTC Jan 01 1970","00:01:57.870 GMT Oct 20 2014",3,"sp3",1904,"sp3",1904,"realm_IN","realmTERM_OUT",,,,"::",0,"::",0,,"::",0,"::",0,0,0,0,0,0,0,0,0,0,0,,,,"::",0,"::",0,,"::",0,"::",0,0,0,0,0,0,0,0,0,0,0,,,,"::",0,"::",0,,"::",0,"::",0,0,0,0,0,0,0,0,0,0,0,,,,"::",0,"::",0,,"::",0,"::",0,0,0,0,0,0,0,0,0,0,0,,,"Sw-buildabcd","GMT-03:00",0,"8888888@X.Y.Z.W",,,,,,"X.Y.Z.W:50","A.S.D.F:50","A.S.D.F:50","A.S.D.F:50",,1,2,1,404,"8888888@A.S.D.F",,,4493101
2,"A.B.C.D",50,,,,4493105
2,"A.B.C.D",50,,"88888@B.D.S.E",,,4493106