31

I have a file with 200 lines.

I need to extract lines 10 through 100 and put them into a new file.

How do you do this in Unix/Linux?

What are the possible commands you could use?

sunil
  • 311

4 Answers

58

Use sed:

sed -n -e '10,100p' input.txt > output.txt

sed -n means don't print each line by default. -e means execute the next argument as a sed script. 10,100p is a sed script that means starting on line 10, until line 100 (inclusive), print (p) that line. Then the output is saved into output.txt.
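As a quick sanity check (assuming seq is available to generate numbered test data):

```shell
# Build a 200-line sample file: line N contains just the number N
seq 200 > input.txt

# Keep only lines 10 through 100
sed -n -e '10,100p' input.txt > output.txt

wc -l < output.txt    # 91 (lines 10..100 inclusive)
head -n 1 output.txt  # 10
tail -n 1 output.txt  # 100
```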

If your file is much longer than 200 lines, this version (suggested in the comments) will be faster:

sed -e '1,9d;100q' input.txt > output.txt

That means delete lines 1-9, quit after line 100, and print the rest. For 200 lines it's not going to matter, but for 200,000 lines the first version will still look at every line even when it's never going to print them. I prefer the first version in general for being explicit, but with a long file this will be much faster — you know your data best.
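The two variants select the same lines; the difference is only how much input sed reads before stopping. A sketch of an equivalence check, again using seq for sample data:

```shell
seq 200000 > big.txt

sed -n -e '10,100p' big.txt > a.txt   # scans all 200,000 lines
sed -e '1,9d;100q'  big.txt > b.txt   # stops reading after line 100

cmp a.txt b.txt && echo identical
```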

Alternatively, you can use head and tail in combination:

tail -n +10 input.txt | head -n 91 > output.txt

This time, tail -n +10 prints out the entire file starting from line 10, and head -n 91 prints the first 91 lines of that (up to and including line 100 of the original file). It's redirected to output.txt in the same way.
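The 91 comes from end - start + 1 = 100 - 10 + 1; an off-by-one here silently drops or adds a line. A quick check with seq-generated data:

```shell
seq 200 > input.txt

# 100 - 10 + 1 = 91 lines to keep
tail -n +10 input.txt | head -n 91 > output.txt

head -n 1 output.txt  # 10
tail -n 1 output.txt  # 100
```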

Michael Homer
  • 76,565
    That's probably better like sed '1,9d;100q'. Faster anyway. – mikeserv Jun 21 '14 at 06:22
  • Fair comment. For 200 lines it doesn't matter and I prefer the more explicit version, but if it's much longer quitting (or head/tail) is definitely better. I've edited in the option anyway. – Michael Homer Jun 21 '14 at 06:30
    head | tail will still win every time - especially because they each only do half the job but do it concurrently. – mikeserv Jun 21 '14 at 06:37
  • @mikeserv but that's using 2 commands, sed alone should be faster – Creek Jun 21 '14 at 06:41
  • Not for 91 lines they won't. I actually tested this - spawning two processes was about 8% slower than a single sed. – Michael Homer Jun 21 '14 at 06:43
  • @MichaelHomer I clocked real 0m0.006s vs. real 0m0.015s – Creek Jun 21 '14 at 06:49
    @Creek and Homer - in a situation where it matters - as in the file is large enough to take longer to read than spawning the processes to read it, head | tail will win every time. But not for 91 lines they won't, as you say, Homer. http://unix.stackexchange.com/q/47407/52934 – mikeserv Jun 21 '14 at 10:43
    How can I specify the end of the file, such as from line 10 to the end? – Reihan_amn Feb 27 '19 at 23:26
9

This should do it:

tail -n +10 file.txt | head -n 91 > newfile.txt

Creek
  • 5,062
7

If you were to do this in vim, it'd be pretty simple. Assume your file is named src and the file you wish to move the lines to is dest. If dest doesn't already exist, you would create it:

touch dest

Then, open both src and dest in vim (the -p flag opens the arguments in tabs):

vim -p src dest

Jump to the tenth line; select everything from the 10th line to the 100th line; yank; switch to the tab containing dest; paste.

10ggV100ggygtp

Note: V starts linewise Visual mode, so the yank grabs whole lines 10 through 100, including the newline at the end of line 100.
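As an aside (not part of the original answer), the same copy can be done from within src without Visual mode at all, using an ex range command:

```vim
:10,100w dest
```

`:w` with a line range writes just those lines to the named file; add `!` (`:10,100w! dest`) if dest already exists and should be overwritten.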


That is obviously a more involved process than using a command-line tool, but it does have the advantage of giving you a visual selection (so you can be sure you get everything you want). However, this also seems like a fine use case for awk:

awk 'NR==10, NR==100' src > dest

The NR variable holds the current line (record) number, which allows you to pattern-match against it. Thus, the above command extracts lines 10 through 100 from src, and your shell redirects the output to dest.
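In awk, a pattern with no action defaults to printing the matched line, which is why the command needs no { } block at all. A quick check, assuming seq for sample data:

```shell
seq 200 > src

# The range pattern matches from the line where NR==10
# through the line where NR==100; the default action is print
awk 'NR==10, NR==100' src > dest

wc -l < dest  # 91
```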

HalosGhost
  • 4,790
3

You can do it many ways.

An awk solution:

$ awk 'NR<10{next}; NR>100{exit}; 1' file > new_file

A perl solution:

$ perl -nle '
    print if $. > 9 and $. < 101;
    exit if $. > 100;
' file > new_file
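Both one-liners select the same range, so they can be cross-checked against each other. (In the awk program here the exit test runs before the print rule, so line 101 is never emitted; seq supplies the sample data.)

```shell
seq 200 > file

awk 'NR<10{next}; NR>100{exit}; 1' file > new_file.awk
perl -nle 'print if $. > 9 and $. < 101; exit if $. > 100;' file > new_file.perl

cmp new_file.awk new_file.perl && echo same
```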
cuonglm
  • 153,898