92

Can anyone explain why Linux is designed as a single directory tree?

Whereas in Windows we can have multiple drives like C:\, and D:\, there is a single root in Unix. Any specific reason there?

user2720323
  • 3,589
  • 9
    Umm, what? How is Linux a single directory? – terdon Oct 07 '13 at 16:58
  • 14
    @terdon - I think he is asking about having a single root directory (/) vs. DOS-style (C:\ D:\). – jordanm Oct 07 '13 at 16:59
  • In so many articles I read that Linux is a single directory structure. But in Windows we can have multiple drives. My question is: is there any security reason behind designing Linux as a single directory tree? – user2720323 Oct 07 '13 at 17:01
  • @terdon : Exactly – user2720323 Oct 07 '13 at 17:01
  • 28
    You can (and usually do) have multiple drives in Linux as well. In fact, the basic principle is the same, C: and D: are mount points in Windows as well. The Windows equivalent of / is My Computer, everything is mounted under that. – terdon Oct 07 '13 at 17:02
  • 1
    @terdon : OK, '/' is equivalent to 'My Computer' in Windows. Then why do we 'mount' external drives (CD-ROMs)? Can we not access that structure directly? – user2720323 Oct 07 '13 at 17:06
  • 62
    I think a more relevant question would be "why would an operating system NOT have a single root"? (The answer for DOS/Windows would be design error / failure to plan for the future / unnecessary assumption) – JoelFan Oct 07 '13 at 18:27
  • 9
    I've always wondered why Windows chose that weird naming scheme. It seems quite logical to have all in a single file-system tree, with other drives mounted as subdirectories, instead of having a forest. "Simple is better than complex" and a forest is more complex than a single tree. – Bakuriu Oct 07 '13 at 20:11
  • 21
    Windows chose that bizarre system because of MS-DOS before it, and MS-DOS followed the early precedent set by CP/M. MS-DOS was a floppy drive based system (A: and B: initially, sometimes for instance, on a single drive system, A: and B: were the same drive, but two different logical disks for purposes of swap/copy operation). Like most people damaged by MS-DOS PCs, the OP thinks that / on Linux is the same as C: in MSDOS/Windows, when it is not really the same thing. – Warren P Oct 07 '13 at 20:27
  • 8
    What is strange is that DOS stole so much from Unix and they still got that wrong – JoelFan Oct 07 '13 at 21:49
  • 25
    Actually, C:, D: and stuff is just compatibility with DOS and Win32; Windows NT internally has a somewhat UNIX-like object hierarchy, where the drive letters (and in general Win32 stuff) are just symbolic links to the "real" objects (c:\file.txt is actually \??\c:\file.txt, with \??\c: being a symlink to e.g. \device\harddisk0\partition1). See e.g. here – Matteo Italia Oct 08 '13 at 02:03
  • 6
    CP/M and DOS < 2.0 didn't have directories, so it's not like they had multiple trees at the beginning. – ninjalj Oct 08 '13 at 08:32
  • 11
    The reason Windows preserved drive letters at all is because of backwards compatibility, they haven't been necessary for a very long time. So spare me the tedious anti-MS sniping on that one. – Alan B Oct 08 '13 at 09:22
  • 5
    BTW, you could actually mount your storage devices under the single filesystem tree as /D:, /E:, /F:... That's perfectly valid in Linux. But the fact is that you'd never want to, given this huge flexibility. Untying the FS dependency on hardware provides great advantages and convenience. – ulidtko Oct 08 '13 at 10:16
  • 4
    @ulidtko: Fun fact, you actually could mount drives under C:\mnt\ on windows, the functionality is there, and indeed necessary for >24 disparate drives/partitions! Why the functionality has almost no friendly GUI support, I don't know. – Phoshi Oct 08 '13 at 17:35
  • 3
    @ninjalj CP/M 2.2 and later did have directories, kind of. You had 16 directories (or user areas) named 0-15, and the USER command changed between them. Usually 0 was common to all USER areas for searching programs. But you could only use one at a time. Ah, those were the times. :-) – Anders Oct 08 '13 at 19:17
  • @JoelFan when you say "DOS stole so much from Unix" I hope you meant "stole" in a nice way. – Adrian Ratnapala Oct 09 '13 at 10:51
  • 2
    @Phoshi: actually, there's some GUI for that - just go in disk management, right click on a volume, there's a voice like "change drive letter and mount point" and you can mount drives on any NTFS empty folder. – Matteo Italia Oct 10 '13 at 00:38
  • Windows has a single root, too. You just don't have access to it because the child names of the root are the device letters. C:\FOO\BAR vs /c/foo/bar ... Unix gives you the option to use longer names at the top level, where Windows limits you to one letter and requires the : to be used. – Skaperen Oct 11 '13 at 03:08

12 Answers

194

Since the Unix file system predates Windows by many years, one may re-phrase the question to "why does Windows use a separate designator for each device?".

A hierarchical filesystem has the advantage that any file or directory can be found as a child of the root directory. If you need to move data to a new device or a network device, the location in the file system can stay the same and the application will not see the difference.

Suppose you have a system where the OS is static and there is an application that has high I/O requirements. You can mount /usr read-only and put /opt (if the app lives there) onto SSD drives. The filesystem hierarchy doesn't change. Under Windows this is much more difficult, particularly with applications that insist on living under C:\Program Files\.
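As an illustrative sketch, such a split could be expressed in /etc/fstab; the device names below are assumptions, not a prescription:

```
# /etc/fstab sketch: static OS partition read-only, I/O-heavy app on an SSD
/dev/sda2       /usr  ext4  ro,defaults       0  2
/dev/nvme0n1p1  /opt  ext4  defaults,noatime  0  2
```

Applications keep addressing /usr and /opt by the same paths; only these entries change when the storage layout does.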

Autodidact
  • 103
  • 3
doneal24
  • 5,059
  • 29
    And that (rhetorical) question does have an answer: tradition. Just a different tradition than Unix came from. Windows gets this from DOS, which gets it from CP/M-80, which followed a common pattern of many minicomputer and mainframe operating systems. The drive names just got shortened from DISK0: or SY: to A:. – RBerteig Oct 07 '13 at 19:00
  • 6
    @RBerteig - maybe tradition, especially in the Windows case, but Rob Pike presents a fairly convincing argument for Unix-style naming schemes in The Hideous Name, http://pdos.csail.mit.edu/~rsc/pike85hideous.pdf –  Oct 07 '13 at 19:15
  • 13
    Since Windows NT, I believe, it has been possible to mount a device at a given virtual path in Windows to accomplish the exact same thing as Unix, although it's uncommon on home PCs (somewhat commoner on servers and business deployments). You may choose to view this as a vindication of the Unix Way(tm), if you'd like. – JSBձոգչ Oct 07 '13 at 20:15
  • 10
    @BruceEdiger I'm not going to try to argue that DOS was right. Just pointing out that there is context for why Windows is the way it is, and that it wasn't just something that MS pulled out of a hat. – RBerteig Oct 07 '13 at 21:33
  • 1
    @BruceEdiger: Wow. Nice paper. It is also one of the few times I've seen Pike be unquestionably wrong about something. (Namely that the ARPANET name serving system cannot scale. These days we call it DNS, and it has scaled quite well. The core concepts of an absolute hierarchical space with authority and delegation remain completely unchanged). Admittedly this is because non-IP networks that are relevant to mail have died out. – Kevin Cathcart Oct 09 '13 at 18:25
  • Mounting devices as a child in the filesystem has been available in Windows for many years. It is perfectly possible to mount a device under C:\Program Files\ and this will be transparent to any program. – Martin Argerami Dec 17 '13 at 05:25
  • @MartinArgerami You can mount devices into the file system without a problem but you still have to address network file systems differently. You cannot mount an SMB share under C:\Program Files. – doneal24 Dec 18 '13 at 15:58
88

This is partly for historical reasons, and partly because it makes more sense this way.

Multics

Multics was the first operating system to introduce the hierarchical file system as we know it today, with directories that can contain directories. Citing “A General-Purpose File System For Secondary Storage” by R.C. Daley and P.G. Neumann:

Section 2 of the paper presents the hierarchical structure of files, which permits flexible use of the system. This structure contains sufficient capabilities to assure versatility. (…)

For ease of understanding, the file structure may be thought of as a tree of files, some of which are directories. That is, with one exception, each file (e.g., each directory) finds itself directly pointed to by exactly one branch in exactly one directory. The exception is the root directory, or root, at the root of the tree. Although it is not explicitly pointed to from any directory, the root is implicitly pointed to by a fictitious branch which is known to the file system. (…)

At any one time, a user is considered to be operating in some one directory, called his working directory. He may access a file effectively pointed to by an entry in his working directory simply by specifying the entry name. More than one user may have the same working directory at one time.

As in many other aspects, Multics sought flexibility. Users can work in a subtree of the filesystem and ignore the rest, and still benefit from directories to organize their files. Directories were also used for access control — the READ attribute allowed users to list the files in a directory, and the EXECUTE attribute allowed users to access files in that directory (this, like many other features, lived on in unix).

Multics also followed the principle of having a single storage pool. The paper does not dwell on this aspect. A single storage pool was a good match with the hardware of the time: there were no removable storage devices, at least none that users would care about. Multics did have a separate backup storage pool, but this was transparent to users.

Unix

Unix took a lot of inspiration from Multics, but aimed at simplicity whereas Multics aimed at flexibility.

A single hierarchical filesystem suited Unix well. Like with Multics, storage pools were usually not relevant to users. However, there were removable devices, and Unix did expose them to users, via the mount and umount commands (reserved to the “super-user”, i.e. the administrator). In “The UNIX Time-Sharing System”, Dennis Ritchie and Ken Thompson explain:

Although the root of the file system is always stored on the same device, it is not necessary that the entire file system hierarchy reside on this device. There is a mount system request with two arguments: the name of an existing ordinary file, and the name of a special file whose associated storage volume (e.g., a disk pack) should have the structure of an independent file system containing its own directory hierarchy. The effect of mount is to cause references to the heretofore ordinary file to refer instead to the root directory of the file system on the removable volume. In effect, mount replaces a leaf of the hierarchy tree (the ordinary file) by a whole new subtree (the hierarchy stored on the removable volume). After the mount, there is virtually no distinction between files on the removable volume and those in the permanent file system. In our installation, for example, the root directory resides on a small partition of one of our disk drives, while the other drive, which contains the user's files, is mounted by the system initialization sequence. A mountable file system is generated by writing on its corresponding special file. A utility program is available to create an empty file system, or one may simply copy an existing file system.

The hierarchical filesystem also has the advantage of concentrating the complexity of managing multiple storage devices into the kernel. This meant that the kernel was more complex, but all applications were simpler as a result. Since the kernel has to care about hardware devices but most applications don't, this is a more natural design.
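A minimal command-line sketch of the mount/umount mechanism described above; it requires root, and the device and directory names are illustrative assumptions:

```
# Graft the filesystem on a removable volume onto an ordinary directory
mkdir -p /mnt/data
mount /dev/sdb1 /mnt/data   # /mnt/data now refers to the volume's own tree
ls /mnt/data                # accessed with plain paths, no drive letter
umount /mnt/data            # detach; /mnt/data is an empty directory again
```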

Windows

Windows traces its ancestry back to two lineages: VMS, an operating system originally designed for the VAX minicomputer, and CP/M, an operating system designed for early Intel microcomputers.

VMS had a distributed hierarchical filesystem, Files-11. In Files-11, the full path to a file contains a node name, an account designation on that node, a device name, a directory tree path, a file name, a file type and a version number. VMS had a powerful logical name feature allowing shortcuts to be defined to specific directories, so users would rarely have to care about a directory's “real” location.

CP/M was designed for computers with 64kB of RAM and a floppy drive, so it went for simplicity. There were no directories, but a file reference could include a drive indication (A: or B:).

When MS-DOS 2.0 introduced directories, it did so with a syntax that was compatible with MS-DOS 1 which itself followed CP/M. So paths were rooted at a drive with a single-letter name. (Also, the slash character / was used in VMS and CP/M to start command line options, so a different character had to be used as a directory separator. This is why DOS and later Windows use backslash, though some internal components also support slash).

Windows retained compatibility with DOS and the VMS approach, so it kept the notion of drive letters even when they became less relevant. Today, under the hood, Windows uses UNC paths (originally developed by Microsoft and IBM for OS/2, of related ancestry). Although this is reserved for power users (probably due to the weight of history), Windows does allow mounting through reparse points.

  • 3
    Although it's not the default behaviour, with NTFS file systems Windows can also mount all your storage under a single root: http://technet.microsoft.com/en-us/library/cc753321.aspx http://www.howtogeek.com/98195/how-to-mount-a-hard-drive-as-a-folder-on-your-windows-pc/ http://serverfault.com/questions/24400/in-windows-how-to-mount-folder-as-a-drive – gerlos Oct 08 '13 at 16:54
  • 3
    It seems that the relevant part is that MS-DOS 1.0 was floppy-based. On such a system, (a) it was important to know which physical disk your files were on, and (b) A: and B: was a decent convention for distinguishing between your floppy drives if you had two of them. When hard drive support was added in MS-DOS 2.0, the drive C: designation allowed backwards-compatibility by treating the HD as one BIG floppy. – user1024 Oct 09 '13 at 05:03
  • 5
    Actually, initially CP/M was designed to run in 16, not 64, KB of RAM. The 64 KB figure is probably to allow applications some breathing room; while the command processor (CCP) was overwritten and reloaded if necessary, BIOS and BDOS were memory resident at all times. Yep, that's where BIOS comes from - IBM did not come up with the term! See Wikipedia CP/M: Hardware model and Components of the operating system. Keep in mind that 16 KB is only about three densely written pages (70 lines × 80 characters/line × 3 pages = 16800 bytes). – user Oct 09 '13 at 11:31
35

There are no security concerns behind having a single directory tree.

The guys who designed Unix had a bunch of experience with operating systems that required users to know what physical device contained a given resource. Since part of the purpose of an operating system is to create an abstract machine on top of real hardware, they thought it much simpler to dispense with addressing resources by their physical location and decided to put everything into a single tree of names.

This is only one part of the genius behind the design of Unix.

msw
  • 10,593
27

Note that the drive letter names from MS-DOS which persist into modern Windows are a red herring here. Drive letter names are not the best representation of a file system structure which has multiple roots. They are a strawman implementation of such a system.

A properly implemented filesystem that supports multiple roots would allow arbitrary naming for the volumes, like dvdrom:/path/to/file.avi. Such a system would get rid of the laughable user interface issues that plague Windows. For instance, if you plug in a device such as a camera, the Windows Explorer UI makes you believe that there is a device called Camera (or whatever), and that you have a path like Computer\Camera\DCIM\.... However, if you cut and paste the textual version of this path out of Explorer, it doesn't actually work, because some of the pathname components are a user interface fiction, not known to the underlying OS. In a properly implemented system with multiple roots, it would be fine: there would be a camera:\DCIM\... path that is recognized uniformly at every level in the system. Moreover, if you ported over an old hard drive from an old PC, you would not be stuck with some drive letter name like F:; rather, you would be able to name it whatever you want, like old-disk:.

So, if Unix did have multiple roots in the filesystem structure, it would be done sanely like this, and not like in MS-DOS and Windows with one-letter drive names. In other words, let us only compare the Unix scheme to a good multi-root design.

So, why doesn't Unix have a sane multiple-roots implementation, in favor of a sane one-root implementation? It's probably just for simplicity. Mount points provide all the functionality of being able to access volumes via names. There is no need to extend the namespace with an additional prefix syntax.

Mathematically speaking, any disjoint tree graph ("forest") can be joined by adding a root node and making the disjoint pieces its children.

Moreover, it is more flexible because the volumes do not have to be at the root level. Since there is no special syntax denoting a volume (it is just a path component), mount points can be anywhere. If you bring three old disks to your machine, you can have them as /old-disk/one, /old-disk/two, etc. You can organize disks however you want, the same way you organize files and directories.
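The forest-joining idea can be sketched without any special privileges; here symlinks stand in for real mount points (which would need root and mount(8)), and all the names are made up for illustration:

```shell
# Simulate joining two disjoint trees (a "forest") under one single root.
# Symlinks play the role of mount points in this unprivileged sketch.
mkdir -p /tmp/forest/old-disk-one/docs /tmp/forest/old-disk-two/music
mkdir -p /tmp/forest/root/old-disk
ln -sfn /tmp/forest/old-disk-one /tmp/forest/root/old-disk/one
ln -sfn /tmp/forest/old-disk-two /tmp/forest/root/old-disk/two
ls /tmp/forest/root/old-disk/one    # lists: docs
```

Every former tree is now reachable from the single root, and nothing in the path syntax reveals that the subtrees were once disjoint.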

Applications can be written which depend on paths, and the validity of paths can be maintained when the storage devices are reconfigured. For instance, applications can use well-known paths like /var/log and /var/lib. It's up to you whether /var/log and /var/lib are on the same disk volume or on separate ones. You can migrate a system to a new storage topology, while preserving the paths.

Mount points are a good idea, which is why Windows has had them since around Windows 2000.

Volume mount points are robust against system changes that occur when devices are added or removed from a computer. Microsoft Technet

Kaz
  • 8,273
  • 6
    Perhaps coincidentally, your "good multi-root design" sounds a lot like the old AmigaDOS system, which allowed arbitrary volume names, including "assigned" volumes that referred to a specific directory inside another volume. You could even (with appropriate software) have "virtual" volumes like, say, an FTP: volume that allowed you to access files on any FTP server with a path like FTP:hostname/path/to/file. – Ilmari Karonen Oct 08 '13 at 01:28
  • 3
    This really isn't a good answer as it seems to be extremely subjective. It's pretty overtly Windows bashing. – Rig Oct 08 '13 at 17:37
  • 3
    @Rig Though that may be true, Windows deserves a thorough bashing for still having these drive letter names, dating back to MS-DOS. This is the multi-root filesystem that most users are familiar with, yet we cannot really use it for the purposes of comparison with single-root, because it is a strawman example of such a system. – Kaz Oct 09 '13 at 01:56
  • In other words, I'd rather be fair to Windows, than unfair to multi-root filing schemes. :) – Kaz Oct 09 '13 at 02:18
  • 3
    @Kaz I still find this answer to be more of a rant. Windows does the file system differently, but that doesn't make it wrong, awful, or a crime against humanity. You don't like it, as you are entitled to. Microsoft didn't even come up with this scheme; they borrowed it from a popular system of the day, but they have to maintain it for reasonable compatibility with legacy code. – Rig Oct 09 '13 at 03:22
  • 1
    @Rig Sure; it's not any more awful than, say, obtaining your next dinner by means of a flint arrowhead. Flint arrowheads were actually state of the art in their heyday. Ah, but oops, we cannot actually say that about DOS and drive letters, can we ... so much for analogies. – Kaz Oct 09 '13 at 03:36
  • 1
    @Kaz even though a special root-name syntax is never strictly necessary, they can be very useful in practice. I think URIs are the best example. – Adrian Ratnapala Oct 09 '13 at 11:04
  • @AdrianRatnapala The protocol identifier in a URI isn't simply a name. We cannot mount one URI into the space of another to create http://foo.bar.x/path/to/http://other/z. With disk volumes, it is reasonable to want that: to want foo-volume:/a/b/c to appear somewhere else in the tree. When that happens, it has to lose the prefix, or have it converted to a path component. – Kaz Oct 13 '16 at 14:34
  • I think we might be agreeing with each other: as I see it URI's are a good example of where the special root name (scheme) is helpful, for exactly the reason you mention. – Adrian Ratnapala Oct 13 '16 at 14:40
  • @AdrianRatnapala The PROTO://DOMAIN/PATH scheme reveals the implementation. In a filesystem, we hide the implementation; it is not revealed in the naming scheme. /alpha/beta/gamma could be such that alpha is found on an Ext4 FS on an SSD drive, beta is an NFS mountpoint, and gamma is on a CD-ROM on a remote machine. All these filesystems and protocols are abstracted away from us. When programs want to access URIs, they have to use clumsy third-party libraries like libcurl, instead of just doing standard file I/O. – Kaz Oct 13 '16 at 14:47
12

Both *nix and Windows mount their drives. In Windows these are automatically mounted in mount points that, by default, are in ascending alphabetical order. These defaults are:

  • A: and B: => floppies
  • C: => first partition of first hard drive
  • D: => next partition or next hard drive or CD/DVD drive if no other partitions are present.

Each of these mount points is a directory.

In *nix, the mount points are decided by the user. For example, I have one partition mounted as / and another as /home. So /home is on a separate drive; it would be the equivalent of, say, E: on Windows.

In both cases, Windows and *nix, mount points are separate directories. The only difference is that in *nix, these separate directories are sub-directories of / (the equivalent of C:), while in Windows, every mount point is mounted directly under the root, under My Computer let's say.

From the user's perspective, the main advantage is that the mounts are completely transparent. I don't need to know that the directory /home is actually on a separate partition; I can just use it as a normal directory. Instead, in DOS, I would have to refer to it explicitly by the mount point's name, say E:\home.

External drives are mounted in pretty much the same way in both systems, say D: for Windows and /mnt/cdrom for Linux. Each of these is a directory; I don't really see the difference. When you put a CD-ROM into your drive under Windows, the disk is mounted at D:, just like in Linux.

terdon
  • 242,166
  • 3
    Out of curiosity, do you know what would happen if someone wanted to create 27 drives on Windows? What would Windows call the 27th drive? :D – Joseph R. Oct 07 '13 at 18:12
  • @JosephR. hah, good question, I have absolutely no idea. It would probably turn the machine into alphabet soup. – terdon Oct 07 '13 at 18:15
  • 2
    Hahaha. Seems Windows is too dull to do even that. – Joseph R. Oct 07 '13 at 18:20
  • 3
    Minor nitpick: the drive letters in Windows default to ascending alphabetical order, but they can be and often are renamed. – RBerteig Oct 07 '13 at 19:02
  • @RBerteig fair enough, answer edited. – terdon Oct 08 '13 at 01:49
  • 3
    @terdon: he would just mount the drive inside a directory - exactly like you do in POSIX OSes. – Matteo Italia Oct 08 '13 at 02:08
  • 2
    @JosephR. I think the last comment was directed at you. – terdon Oct 08 '13 at 02:12
  • 3
    @JosephR: At some point -I'm not sure when, but probably NT- Windows gained the ability to mount drives in directories, much like Unix does. By default, nobody actually does this (which surprises me: with the frequency of nuke-and-pave reinstalls, I'd think something analogous to putting /home on a separate volume would have become popular by now). However, if you run out of drive letters/numbers/symbols, this is what you have to do if you want to add more drives to the system. – The Spooniest Oct 08 '13 at 16:11
  • @terdon: sorry, wrong "@"; thank you for forwarding. :) – Matteo Italia Oct 10 '13 at 00:39
10

I agree with the answers above, especially Doug O'Neal's, but I think that they all miss a little something about explicit device mount points like MS-DOS's C: or A:.

Rob Pike wrote The Hideous Name about syntax of names, but Russ Cox boiled it down:

Name spaces... are most powerful when new semantics can be added without adding new syntax.

A single name space where devices can be arbitrarily mounted allows for really flexible operations. I regularly use /mnt/sdb1 and /mnt/cdrom to temporarily place a currently-unused disk or CD into the overall file system. It's not at all uncommon to have home directories on a NFS server, so that no matter what machine you log in to, you get the same $HOME everywhere.
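For example, the NFS home-directory setup mentioned above boils down to a single fstab line; the server and export names here are made up:

```
# Every workstation mounts the same home directories from a central server
fileserver:/export/home  /home  nfs  defaults,_netdev  0  0
```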

It comes down to this: having special syntax for special things puts definite limits on what you can do. If you can only mount an unused disk or CD or DVD, or a network filesystem/"share" on "E:" or "W:" or whatever, you've got a lot less flexibility.

  • 1
    You don’t really need symlinks for that. You can directly mount a partition (or network storage or whatever) on any path, such as /usr/local – although it always baffles me when I find a network where /usr/local points to a network mount. Yes, they do exist and there is some reason to do so: /usr/local is one of the standard places for your admin to place stuff not from the OS distributor. – Christopher Creutzig Oct 07 '13 at 20:24
  • @Christopher Creutzig - agreed, symbolic link not necessary for my example, I just wanted to throw in another example of how a flexible naming schema can work for you. It's perhaps not the very best example. –  Oct 07 '13 at 20:34
  • Look at the stupid web, by the way. proto://specifichost.domain.tld/topleveldir/middle/specificdoc.html. – Kaz Oct 08 '13 at 02:28
  • 2
    @Christopher Creutzig - I have set up /usr/local as a network drive at a couple of places. "local" then means local to the site, not to the machine. – doneal24 Oct 08 '13 at 17:19
  • 3
    @Kaz, I think the proto:// business is a pragmatic necessity. Every piece of software cannot be expected to know about all the URI schemes out there. Thus it can be helpful to know where the scheme ID ends, and the rest of the URI starts. – Adrian Ratnapala Oct 09 '13 at 11:09
  • @Kaz Cool URIs don't change. A file's location on a file system might. – user Oct 10 '13 at 11:39
6

This is silly. Windows also has a single root to its hierarchy, but it is hidden and non-standard, as are most things in Windows.

In this case, it is the "My Computer" concept, which is the equivalent of root (/) in Unix. Remember that the root is a concept that exists in the kernel, whether you like it or not, just as Windows treats "My Computer". Of course, you can mount a partition on the root in Unix, and that is what most people do. And lots of things will look for a specific path (e.g. /etc/), but you are not limited by that. By all means, mount your drives at /C:/; you are not forbidden to do that in Unix.

C:\ is not a root in Windows; it is the mount point of one partition, which must be at the top level under "My Computer", while in Unix you can mount a partition anywhere in the tree. So in Linux you can have the equivalent of C: mounted at / while D: is mounted at /mnt/d/..., or even also at /, though that is tricky and depends on how the two file systems behave when one is mounted on top of an already mounted path.

So you can get exactly what you have with Windows by "forcing" yourself to follow the same limitations that Windows arbitrarily imposes on you:

/ (treat this as "My Computer")
/c/ (mount your first data partition here)
/d/ (mount your second data partition here)

Then you would have to pass the mount options among the boot options, since you would not have an /etc/... yet, but that too simulates the limitations Windows imposes, because that is what it does.

gcb
  • 398
  • 4
    Windows does have a single hierarchy under the hood, and it even has mount points, but it doesn't use them by default. – Gilles 'SO- stop being evil' Oct 08 '13 at 01:01
  • 1
    @Gilles not sure I understand. How is attaching every drive to the "My Computer" root node not using it by default? – gcb Oct 08 '13 at 06:20
  • 5
    That's the GUI presentation only. File paths don't use My Computer. – Gilles 'SO- stop being evil' Oct 08 '13 at 08:46
  • 3
    My Computer is the root node of the Shell hierarchy. it contains drives, if they have a drive letter, but also the control panel and any connected Windows Phone. The Shell hierarchy uses PIDL's instead of paths. – MSalters Oct 08 '13 at 10:56
  • @MSalters nails it. Also, / can have files, directories, mount points, symlinks, device nodes... – gcb Oct 08 '13 at 21:32
  • 4
    @gcb: the point is that the shell hierarchy is not directly usable in "normal" applications. You cannot call CreateFile passing "My Computer" or other shell folders; it's an abstraction understood only by shell-related code, all the kernel calls (and thus 90% of applications, since file management in most languages is implemented in terms of kernel file APIs) know nothing about this stuff. The shell folders are usable only when the programs use the "standard dialogs" (which do understand the shell namespace) and only when selected files directly map to a "real" (=kernel-understood) path. – Matteo Italia Oct 10 '13 at 00:49
4

The reason for Windows having drive letters probably goes back further than Microsoft and DOS. Assigning letters to removable drives was common on IBM systems, so Microsoft may simply have been following IBM's conventions when it copied CP/M. And initially, DOS didn't have directories anyway.

When MS-DOS ran on computers with one or two removable disks and no fixed media, you didn't really need a file system with directories. With one, or maybe two, 180 kilobyte disks, you never had enough files to have much trouble organizing them.

https://en.wikipedia.org/wiki/Drive_letter_assignment

4

Actually, Linux is based on Unix (or is a Unix, see the discussion) and Unix comes from the mainframe environment, where using multiple devices was quite normal. Mounting devices within a single directory tree gives you maximal flexibility, and doesn't limit the number of devices the operating system can access.

On the other hand, DOS drive letters are a good design for a PC with one or two floppy drives and a single hard disk. The big 5.25″ floppy is always A:, the little 3.5″ one is always B:, and the hard disk is always C:. You always know whether you are copying a file to a floppy or somewhere on the hard disk. You don't need any flexibility if you can't physically connect more than 2 floppy drives and 2 (or 4) hard disks.

The DOS design was more end-user-friendly, while the Unix design is administrator-friendly. Nowadays drive letters are a burden for Windows: users rely more on the explorer window that opens automatically with a removable drive's contents than on knowing its letter... Ubuntu does the same, actually.

1

That's not true, actually: Windows uses another path scheme under the hood (well, not quite the same one).

"Unit Letters" are only something to remember easily paths, disks and partitions.

ARC paths define the path of a file in Windows (but they are visible to the user only at boot):

http://support.microsoft.com/kb/102873

https://serverfault.com/questions/5910/how-do-i-determine-the-arc-path-for-a-particular-drive-letter-in-windows

In Windows NT there's no fixed relation between disks, partitions and drive letters: you can "put" an entire volume in a folder (e.g., c:\myseconddisk could be an entire physical disk!).

AndreaCi
  • 111
  • 1
    ARC paths are only for booting, for compatibility with some ROM when NT was ported to Alpha and MIPS. When the system is running it uses UNC paths. – ninjalj Oct 08 '13 at 09:07
0

Two things I would like to point out -

  1. Hard drives in Linux are, in a way, actually assigned letters/names, like /dev/sdb1, but they can be mounted anywhere and reached from the single / root structure.
  2. The most common reason that people (including myself, in the past) had separate drives in Windows was to have somewhere to keep documents, music, programs, etc., so that when Windows inevitably needed to be reinstalled or replaced, be it for an upgrade, a virus or a file system failure, there was still access to those files. I don't have this problem in Linux: the file system is much more reliable, the OS doesn't break except by some direct action or mistake on my part (ooh! a bleeding-edge repo, let's try that!), and upgrades are FAR simpler. And in the rare case I've had to reinstall, since all the software was available through repos or PPAs that I added (and I could easily copy my home dir with a live disk), starting from scratch and getting back to where I was takes only a couple of hours, versus days of searching for new installers and/or old CD keys when restoring my programs in Windows.
Drake Clarris
  • 886
  • 4
  • 7
  • 2
    You're combining hard drives and file systems in your first point. If you mount /dev/sdb1 at some point in the filesystem you can access the files on the drive. If you open /dev/sdb1 directly you get to see the raw disk blocks. Generally not very useful, especially if you use encrypted file systems. – doneal24 Oct 08 '13 at 18:44
  • I was trying to relate it in a way that Windows users might understand. C: isn't a hard drive in Windows either, but everybody refers to it as one. – Drake Clarris Oct 08 '13 at 18:57
  • 1. You can still keep your files without letters. 2. In POSIX systems a hard drive can be used as a whole, without a partition table, and a thumb drive can have a partition table. Hell, we even have a FS in a file, instead of the briefcase that God only knows who came up with the name for. – Behrooz Oct 10 '13 at 07:36
0

If you look back in history, you can also see that Unix started at the time of 8-track audio tape systems and 9-track IBM data tape systems (8 tracks/8 bits for data, one for parity). Technically they were very much the same.

At that time, the information about the location of files was stored as part of the data on the tape itself, and moving forward and backward was determined when you read data from the tape (like a file, with a start position and an end signature). This also explains why you did not have just one FAT at the start of the drive: you had multiple copies to speed up the lookup. And if you had multiple drives, they were linked inside /dev, and via the address of a file you moved between the devices.

I believe you could take the view that it simply started earlier, and that the decision behind the MS-DOS era (CP/M) and later Windows NT to use mainframe-style drive letters instead of a single point of entry was made because at the time it looked more modern. The data volumes of today didn't exist, and nobody thought that you would eventually run out of drive letters or that the scheme would become cluttered.

9-Track-Drive and Drive Letter assignment

Anthon
  • 79,293
Peter
  • 101