184

Is there a way to limit the number of files listed by an ls command?

I've seen:

ls | head -4

but to get head or tail to print anything I need to wait for ls to finish execution, and with directories containing an enormous number of files that can take considerable time.

I would like to run an ls command that limits the output itself, without using that head command.

AndreDurao

8 Answers

221

Have you tried

ls -U | head -4

This should skip the sorting, which is probably why ls is taking so long.

https://stackoverflow.com/questions/40193/quick-ls-command
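A quick way to check how much of the delay is due to sorting (a rough check, assuming GNU ls; timings will obviously vary) is to time the sorted and unsorted runs with the output discarded:

time ls > /dev/null      # reads all entries, sorts them, then prints
time ls -U > /dev/null   # reads entries and prints them in directory order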

niko
  • "ls -U" still reads the entire directory before printing... I think that I'll write a small script to limit the files, but that question link is good reading material. Thanks niko. – AndreDurao Sep 27 '11 at 16:43
  • @AndreDurao, GNU ls -U does not necessarily read the entire directory before printing. Try strace -e getdents,write ls -U > /dev/null in a large directory for instance. – Stéphane Chazelas May 14 '15 at 15:58
  • Just a note: if you want to execute strace on OSX, look for dtrace; the strace command is a Linux utility. – AndreDurao Apr 28 '17 at 14:28
21

If your version of ls has a way not to sort files, such as -U for GNU ls, use it. With no option, ls will first read all the files, then sort the names, then start printing.

Another possibility is to run find, which prints names as it finds them.

find . -name . -o -prune | head

(note that since head is working on lines, that assumes file names don't contain newline characters).
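If file names may contain newlines, a NUL-delimited variant of the same idea is possible, assuming GNU find and a GNU head recent enough to support the -z option (the explicit parentheses are needed because an explicit -print0 disables find's implicit -print):

# print names NUL-terminated so head counts whole names rather than lines,
# then turn the NULs back into newlines just for display
find . \( -name . -o -prune \) -print0 | head -z -n 4 | tr '\0' '\n'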

  • Note that in the case of GNU find (as opposed to busybox or heirloom find, or GNU ls -U), it seems it reads the whole directory before starting to print. – Stéphane Chazelas May 14 '15 at 16:11
5

Perhaps you are in need of a tool other than ls?

For example, Randal Schwartz has a blog entry about using perl on large directories that may contain some hints on building something that meets your needs.

In the blog posting Randal explains that both ls and find attempt to read in all directory entries before printing any, while the perl solution he proposes does not.
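As a rough illustration of that approach (a minimal sketch, not Randal's actual code, with a limit of 4 hard-coded): open the directory with opendir, print entries as readdir returns them, and stop early instead of collecting and sorting everything first.

perl -e '
    opendir(my $dh, ".") or die "cannot open .: $!";
    my $count = 0;
    while (defined(my $entry = readdir $dh)) {
        next if $entry eq "." or $entry eq "..";  # skip the dot entries
        print "$entry\n";
        last if ++$count == 4;                    # stop after the first 4 names
    }
    closedir $dh;
'

As Stéphane Chazelas notes in a comment below, perl's readdir still fetches entries from the kernel in large batches via getdents(), so the gain here comes from stopping early rather than from reading one entry at a time.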

AFresh1
  • I also thought that could be the best option, because both ls and find read the entire directory before printing. I was planning to write a ruby script to do that instead of perl, thanks for that AFresh1! – AndreDurao Sep 27 '11 at 16:37
  • @AndreDurao, perl probably uses readdir(3) like ls or find. readdir(3) on current versions of GNU/Linux at least does call the getdents() system call with a large count (that actually does generally optimise performance by reducing the number of system calls being made). But in your case, if you want fewer files, it looks like you'd have to bypass readdir and use the BSDs' getdirentries(3) or the getdents(2) system call instead. – Stéphane Chazelas May 14 '15 at 16:35
4

If performance is not the concern (as in the question that was closed as a duplicate of this one), and you want to list the first n files (as opposed to the first n lines of the output of ls) in the list of non-hidden files sorted by filename, with zsh, you can use:

ls -ld -- *([1,4])

to list the first 4 files. zsh will still read the full content of the directory though, even if you tell it not to sort with *(oN[1,4]) (note that ls also sorts that list).
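For completeness, the unsorted variant mentioned above, spelled out (still zsh-specific; as noted, zsh reads the whole directory anyway, and ls sorts the four names it is handed):

ls -ld -- *(oN[1,4])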

3

On an unrelated note (because this question comes up when you google "linux tree limit number of files as output"), you can use tree --filelimit=10 (descend only into directories containing 10 or fewer entries). It nicely gives you the output shown below.

tree --filelimit=10

It lists directory structure and contents. Very helpful when exploring large datasets (when you are first interested in knowing the structure/hierarchy).

.
├── gps
│   ├── gps.csv
│   └── ins.csv
├── mono_rear [2217 entries exceeds filelimit, not opening dir]
├── mono_rear.timestamps
├── stereo
│   └── centre [3164 entries exceeds filelimit, not opening dir]
└── stereo.timestamps
1

Maybe less is better suited for your needs?

 ls /usr/bin | less

For me, it works instantaneously on a 5-year-old laptop with a classic HDD, but head is equally fast.

You can terminate less prematurely with q.

I guess your assumption about the source of the 1s delay is wrong, but it may depend on your Unix flavour, your shell, or your less or head command.

On Linux, with GNU-ls,

 ls -R /usr | less 

starts outputting immediately for me, while the whole output keeps running and running - so it is definitely not finished before less starts. You might check whether you have a constant delay of 1s or maybe more, and whether or not it depends on the amount of output.

I guess your 1s delay has a different reason, maybe the HDD is going to sleep and needs a wakeup?

Do you have such a delay for very few files too?

user unknown
  • Thanks, but I wasn't looking for something like that. Just like head, less is executed on the result of the entire ls run. I was looking for a way for ls itself to limit the quantity of results. – AndreDurao Jan 24 '12 at 11:40
  • @AndreDurao: Moved my comments into the answer. – user unknown Jan 25 '12 at 16:05
-1
ls -lrth | tail

ls -lrth | tail -n 10

ls -lrth | grep '\.gz$' | tail
Anthon
  • I'm sorry Abhishek, but the point of this question is to have the ls command itself do the limiting in bash. Piped commands like grep, head, tail or others are executed after ls. – AndreDurao Apr 04 '16 at 11:51
-1
$ cut -f 1,n filename

will do the task of fetching the first n files. n is the number of files you want to extract, so the complete code will be:

$ ls|cut -f 1,n file
  • The ls | cut -f 1,n file command you suggested would output the first and nth field on each line of text in file, and would completely ignore the output of ls. This does not do what the original poster needs. – telcoM May 29 '18 at 07:45
  • Yes, you are right. Just a correction: make it ls | head -n ... This will surely do the task. – Ramandeep Singh May 30 '18 at 08:48