
I have a huge file and want to see its beginning and end. However, when I use head and tail, it apparently tries to read the whole file. Since the file is larger than a terabyte, this takes a very long time. Why would the tools do the simple task in such an inefficient way? Is there a reasonable workaround?

The command I'm running is simply tail filename.

Edit: Since the question was marked as a duplicate, I cannot post a proper answer.

The real reason why head and tail were reading the whole file was that the file contained no newline characters: tail normally locates the last few lines by scanning for newlines, so with none present it has to read the entire file. There is a nice workaround by PM 2Ring: use the -c option to request a byte count instead of a line count, and pipe the output to hexdump.
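The workaround can be sketched like this (using a small zero-filled file as a stand-in for the terabyte one; the file name and the 64-byte count are arbitrary):

```shell
# Create a newline-free test file full of NUL bytes (stand-in for the huge file).
printf '%102400s' '' | tr ' ' '\0' > bigfile

# With -c, head and tail read a fixed number of bytes instead of scanning
# for newlines, so tail can simply seek near the end of the file.
# Since the content may be binary, pipe it through hexdump for display.
head -c 64 bigfile | hexdump -C
tail -c 64 bigfile | hexdump -C
```

tail -c is efficient on regular files because the size is known and the tool can seek directly to the requested offset, regardless of how many newlines the file contains.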

JohnEye
  • What is the exact command that you're running? – DisplayName Nov 19 '14 at 11:40
  • It doesn't, http://unix.stackexchange.com/a/102907/79979 – DisplayName Nov 19 '14 at 11:44
  • I would be inclined to believe that if the command wasn't running for two hours already. – JohnEye Nov 19 '14 at 11:53
  • Is it possible that that file contains very few newline characters (very few lines)? Is it a regular file? What file system is it on? – Stéphane Chazelas Nov 19 '14 at 12:11
  • Oh, you are absolutely right! It's just a test file full of zeroes. Post your comment as an answer and I'll accept it :-) – JohnEye Nov 19 '14 at 12:17
  • So you can still look at your binary file efficiently (or newline-free text file) using head or tail by supplying a byte count with the -c option. Of course, you may want to pipe the output through hexdump or similar. :) – PM 2Ring Nov 19 '14 at 13:22

0 Answers