6
  1. In Windows there is a limit on the number of characters in a path, which restricts how deep a directory tree can be created. I was wondering what the situation is in Linux?
  2. Do you have any suggestions on how to organize directories so as to get the same (or close enough) benefits of deep directory structures, such as good organization, while running into less of the potential trouble?
manatwork
  • 31,277
Tim
  • 101,790

3 Answers

21

The actual limits can depend both on the filesystem you're using, and the kernel.

To find out the limits for a particular mount point, you can use getconf (ex. for / on my machine):

$ getconf  PATH_MAX /
4096
$ getconf  NAME_MAX /
255

PATH_MAX is the maximum total length of a path; NAME_MAX is for a single file name. The kernel limits are in include/linux/limits.h in the kernel source:

#define NAME_MAX         255    /* # chars in a file name */
#define PATH_MAX        4096    /* # chars in a path name including nul */
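
If you'd rather query these limits from inside a program than via the getconf utility, pathconf(3) reports the same per-path values (it's what getconf consults for path-dependent variables like these). A minimal sketch in Perl; the command-line argument is whatever path you want to check, defaulting to /:

#! /usr/bin/perl
# Minimal sketch: query PATH_MAX / NAME_MAX for a given path via POSIX pathconf().
use strict;
use warnings;
use POSIX ();

my $path = shift // "/";     # any path on the filesystem you care about
printf "PATH_MAX for %s: %d\n", $path, POSIX::pathconf($path, &POSIX::_PC_PATH_MAX);
printf "NAME_MAX for %s: %d\n", $path, POSIX::pathconf($path, &POSIX::_PC_NAME_MAX);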

For a list of filesystem limits, see Comparison of file systems.

The filesystem limits dictate the maximum nesting level (if any) for directories and the length of file and directory names on that filesystem. The kernel limits dictate how long the strings that refer to paths can be.
You can actually have a nesting structure that exceeds the PATH_MAX limit, but you won't be able to refer to it with a fully-qualified path from the root. You should also expect strange software bugs if you use such deep structures, since a lot of code expects paths to fit within PATH_MAX buffers, and checking for ENAMETOOLONG errors (and correctly recovering from them) is probably not one of the best-tested code paths out there.
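
The over-long case is easy to provoke: the kernel rejects any path string longer than PATH_MAX before it even consults the filesystem, so nothing along the path has to exist. A rough sketch of what that looks like from Perl (the path below is fabricated purely to be too long):

#! /usr/bin/perl
# Sketch: a path string longer than PATH_MAX is rejected with ENAMETOOLONG
# whether or not anything along it exists.
use strict;
use warnings;
use Errno qw(ENAMETOOLONG);

my $too_long = join("/", ("a" x 255) x 20);   # 20 components of 255 bytes, well past 4096
stat($too_long);
print "stat: $!\n" if $! == ENAMETOOLONG;     # prints "stat: File name too long"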

As for organization, just use whatever feels more natural. Keep hierarchies reasonable, avoid strange characters (and whitespace) if you want to be script-safe/friendly. Those limits are quite generous. If you ever get near PATH_MAX, it's probably time to reorganize things.


If you do want to test out how things behave in very lengthy paths, here's a fast way to generate huge paths:

#! /usr/bin/perl

my $kd = "a" x 255;        # one component at the NAME_MAX limit
for my $i (1..64) {        # 64 levels of 256 bytes each, far past PATH_MAX
  mkdir($kd); chdir($kd);  # descend as we go, so each call only sees a short relative path
}

If you want a deep hierarchy, try with:

#! /usr/bin/perl

my $kd = "a";              # shortest possible component name
for my $i (1..8192) {      # 8192 levels of 2 bytes each: 16384 bytes of path
  mkdir($kd); chdir($kd);
}

And you can get:

$ pwd | wc -c
16394

But ksh gets a bit confused:

$ cd ..
ksh: cd: ..: [File name too long]

bash does do the cd .., but the prompt is messed up and the .. is not resolved away in the logical path - so pwd | wc -c is actually 16397 after that (three bytes longer, for the trailing /..).
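
If you build one of these test trees and want to get rid of it afterwards, the same relative-path trick works in reverse. A rough sketch for the single-letter tree from the second script, run from the directory that contains the top-level a (it assumes nothing else was created inside the tree):

#! /usr/bin/perl
# Sketch: walk down the a/a/a/... test tree with relative chdirs,
# then unwind it with rmdir from the bottom up.
use strict;
use warnings;

my $depth = 0;
$depth++ while chdir("a");           # descend as far as the tree goes
while ($depth-- > 0) {
  chdir("..") or die "chdir ..: $!";
  rmdir("a")  or die "rmdir a: $!";
}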

Mat
  • 52,586
  • Really nice answer! I usually stop nesting my directories around 1024. Like to keep it clean and simple, you know :-) –  Jan 13 '12 at 12:56
  • 3
    Note that while there is a path length limit, there is no limit to the depth of a directory or to the absolute path to that depth. You can create and access arbitrarily deep directories by cding into them and using relative paths. – Gilles 'SO- stop being evil' Jan 15 '12 at 03:02
  • @Gilles: Thanks! By "the absolute path to that depth", did you mean the relative path with respect to that directory? – Tim Jan 16 '12 at 01:34
  • 2
    @Tim You can have a directory whose absolute path is /000000001/00000000002/…/000004090 and create a subdirectory 000004100 there. The absolute path to the 4100 directory will be 4100 bytes long, so you won't be able to use it. But you'll be able to descend into the 4100 directory with successive cd commands and create more files and directories there, using only paths less than 4096 bytes long, by taking advantage of relative paths. – Gilles 'SO- stop being evil' Jan 16 '12 at 18:34
2

AFAIK the limit on the length of a path is defined in include/linux/limits.h and is 4096 characters (including the final null byte). The length of a filename is limited to 255 characters. However, the actual filesystem might enforce further restrictions.

antje-m
  • 1,583
2

I just tried a simple script to test this (on my ext4 drive):

dir=test
while mkdir $dir; do dir=$dir/test; done

The error it eventually gave was File name too long. I used find test | wc -l to get the depth; it said 819, and I'm 2 deep already, for a total of 821. That lines up with the 4096-byte path limit, since each test/ component is 5 bytes and 819 × 5 = 4095. Then I tried with a one-letter name:

dir=a
while mkdir $dir; do dir=$dir/a; done

It died with the same File name too long message at 2048 deep (2050 total); 2048 a/ components at 2 bytes each is exactly 4096 bytes, so it's the path length, not the depth, that hits the limit.

Then I had the idea to cd into each directory I make so the command line doesn't grow:

while mkdir a; do cd a; done

That's still running, and seems to be going quite a bit slower than the other two; it's currently at 2214 levels.

I looked into it, and all I found was a statement (note 14 at the very bottom) that Linux limits the pathname to 4096 bytes. That is quite a lot, and I don't think you need to worry too much about it. As for good organization, organize things in the way that makes the most sense to you.

Kevin
  • 40,767