A large directory can be problematic for a number of reasons beyond simply listing its files. For one thing, the time it takes to open a file in that directory increases, because the directory has to be read until the file's entry is found. On many filesystems, including the ext* family, directory entries are neither organized nor optimized for retrieval efficiency.
To answer your specific question: I think you'll find that ls takes a while because of the sorting involved. One solution is to run ls unsorted (GNU ls supports this with -U, or -f which also implies -a). Specifically, I might run an unsorted ls into a file and then sort that file. I then have the file to refer to without having to run another ls for a while.
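A minimal sketch of that workflow, using a hypothetical scratch directory (GNU ls's -U disables sorting; adjust the paths to your own):

```shell
# Hypothetical demo directory; substitute your large directory.
dir=/tmp/bigdir-demo
rm -rf "$dir"; mkdir -p "$dir"
touch "$dir/b" "$dir/a" "$dir/c"

# -U (GNU ls) lists entries in directory order, skipping the sort.
ls -U "$dir" > /tmp/unsorted.txt

# Sort once, then reuse the sorted file instead of re-running ls.
sort /tmp/unsorted.txt > /tmp/sorted.txt
cat /tmp/sorted.txt
```

The point is that the expensive step (reading and sorting a huge directory) happens once; subsequent lookups grep or page through the saved file.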
A similar method is to use the find command to list the directory contents into a file (the output will be unsorted) and then work from there.
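For example, a sketch with a hypothetical directory, restricting find to regular files directly inside it:

```shell
# Hypothetical demo directory; substitute your own path.
dir=/tmp/bigdir-demo2
rm -rf "$dir"; mkdir -p "$dir"
touch "$dir/one" "$dir/two"

# -maxdepth 1 keeps find from descending into sub-directories;
# -type f limits the listing to regular files. Output is unsorted.
find "$dir" -maxdepth 1 -type f > /tmp/findlist.txt
wc -l < /tmp/findlist.txt
```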
Thus my suggestion, for accessibility, efficiency, and ease of finding files, would be to use multiple sub-directories.
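One common way to do that (a sketch with hypothetical paths, not a prescribed layout) is to fan files out into buckets keyed on the first character of each filename:

```shell
# Hypothetical source and destination; substitute your own paths.
src=/tmp/flatdir
dst=/tmp/bucketed
rm -rf "$src" "$dst"; mkdir -p "$src" "$dst"
touch "$src/apple" "$src/avocado" "$src/banana"

for f in "$src"/*; do
  name=$(basename "$f")
  # First character of the name as the bucket (POSIX printf trick).
  bucket=$dst/$(printf '%.1s' "$name")
  mkdir -p "$bucket"
  mv "$f" "$bucket/"
done
ls "$dst"
```

Each sub-directory now stays small, so opening any one file only requires scanning a short directory.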
…with `ls`, or displaying the contents in a file manager (which one?), or something else? On my system listing 7,000 files with `ls` on an `ext4` filesystem is pretty much instantaneous, as is displaying the directory in Thunar. – Stephen Kitt Mar 07 '15 at 20:56

…`ls` to something that makes it `lstat` every file. `--color` and `-F` do this. If you don't need `ls` to decorate the output, remove these aliases. Listing a 7000 file directory shouldn't be slow. – Mark Plotnick Mar 07 '15 at 23:48
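To act on that comment, you can check what `ls` resolves to in your shell and bypass any alias for a single invocation. A sketch with a hypothetical scratch directory:

```shell
# Show what "ls" resolves to in the current shell (alias, function, or binary).
type ls

# Bypass any alias for one invocation with "command" (or a backslash: \ls).
dir=/tmp/ls-alias-demo   # hypothetical scratch directory
rm -rf "$dir"; mkdir -p "$dir"; touch "$dir/hello"
command ls "$dir"
```

If `type ls` shows an alias adding `--color` or `-F`, removing it (or using `command ls`) avoids the per-file `lstat` calls the comment describes.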