10

In Windows, the system drive C: has a directory Program Files, under which each program has its own subdirectory.

In Linux, under /usr/ and /usr/local/, there are subdirectories such as bin, etc, share, and src.

So in Windows, all the files of each program are grouped in the same directory, while in Linux, files of the same type from all programs are grouped together.

I feel the way Windows organizes installed programs is more logical than the way of Linux, and that installed programs are thus easier to manage manually.

What is the benefit of the way that Linux organizes the files of installed programs? Thanks.

This question came up while I was working on the problem How to organize installed programs in $HOME for shell to search for them when running them?, where I try to organize my programs in $HOME in the Windows way, but have some problems specifying the search paths for the programs.

Tim
  • 101,790
  • Great question. The unpredictable and scattered nature of the components of Linux programs drives me batty. I'm looking forward to reading the answers. – Syntax Junkie Mar 17 '18 at 17:47
  • 10
    @RandallStewart your "unpredictable and scattered" is our rational placement and non-duplication of DLLs. – RonJohn Mar 17 '18 at 18:20
  • 2
This is historically based - mostly in order to be able to have several disk partitions mounted while being able to bootstrap with only the essentials. A similar situation arose when PC BIOSes could only see the first part of very large hard disks, so a separate partition containing everything needed for booting was placed in that first part. Typically called "/boot". Today the need for sandboxes and untrusted apps has resulted in this being rethought - see e.g. the Ubuntu snaps. – Thorbjørn Ravn Andersen Mar 17 '18 at 18:40
  • 11
Let me be the first to point out that Windows currently doesn't put all the files of each program into the same directory. In addition to "Program Files" and "Program Files (x86)" there are "Users\<username>\AppData" directories of "Local", "LocalLow" and "Roaming" among others. Nor has Windows always used a "Program Files" folder for programs or used this consistently through its history. *nix is much more consistent in its handling of files over time. – YLearn Mar 17 '18 at 20:00
  • @RonJohn "non-duplication of DLLs" Windows has system directories for DLLs, placing them in the same folder as programs is just an alternative method (usually done when there are multiple versions of DLLs with the same name). Linux can't really escape this problem either, though it does do better about versioned naming. – JAB Mar 17 '18 at 20:05
  • @YLearn To clarify for others, the Program Data/AppData folders exist because, in modern versions of Windows, programs should not be altering the contents of Program Files normally. AppData subfolders are, roughly, the equivalent of userland dotfolders on *nix systems. – JAB Mar 17 '18 at 20:09
  • 5
    @JAB, agreed, and understand this. I was pointing out that the statement by the OP, namely "So in windows, all the files of each program are grouped in the same directory" is simply not true. I also didn't bring into the discussion the Windows registry, which can also be seen as storing configuration and other data about Windows programs and is not part of a "Program Files" directory. – YLearn Mar 17 '18 at 20:17
  • 1
Not just "non-duplication" -- the larger advantages revolve around the potential for automated enforcement of security policy. (Separating binaries, libraries and data means you can mount the data with the noexec flag; keep the binaries and libraries on filesystems with block-level cryptographic signatures; etc). – Charles Duffy Mar 17 '18 at 20:51
  • This isn't entirely true. It's generally programs that belong to the system that go in /usr. Other programs can go anywhere you care to put them. For instance, I have a number of larger programs, like Adobe Acrobat, that are installed in /opt and have a link in /usr/bin. Others like the CUDA compilers live there, and have their directories inserted in path when I want to use them. Still others live under my home directory. You may think the Windows organization is more logical (but what happens when you have 2600+ programs?), others think the *nix method is. – jamesqf Mar 18 '18 at 01:01
  • 2
    Linux organizes the files of installed programs? Since when? – hobbs Mar 18 '18 at 05:35
  • Note that there are some Linux distros that don't follow FHS like Gobo Linux with /Programs/ which may be more similar to Windows – phuclv Mar 18 '18 at 06:45
Linux (the kernel) does not insist on this. But for Debian, it is done because there is a lot of sharing, e.g. libraries in /lib used by many programs in /bin. Also because of PATH (finding programs). However, sometimes the other way is better. Then we use /opt with one sub-directory per package. Or by using stow to manage /usr/local/… for you. – ctrl-alt-delor Mar 18 '18 at 17:47

4 Answers

18

In Linux the different locations, when well maintained, usually mirror some logic. E.g.:

  • /bin contains the most basic tools (programs)
  • /sbin contains the most basic admin programs

Both of them contain the elementary commands used during boot and fundamental troubleshooting. And here you see the first difference: some programs are not meant to be used by regular users.

Then take a look in /usr/bin. Here you should find a bigger choice of commands (programs), usually more than 1000 of them. They are standard tools, but not as essential as those in /bin and /sbin.
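You can check that count on your own machine; a small sketch (the exact number varies by distribution and installed packages):

```shell
# Count the commands installed in /usr/bin; the exact number
# depends on the distribution and which packages are installed.
ls /usr/bin | wc -l
```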

/usr/bin contains the commands, while the configuration files reside elsewhere. This separates the functional entities (programs) from their config and other files, and in terms of user functionality it comes in handy: having the commands not intermixed with anything else allows for the simple use of the PATH variable pointing to the executables. It also introduces clarity: whatever is there should be executable.

Take a look at my PATH,

$ echo "$PATH" | perl -F: -anlE'$,="\n"; say @F'
/home/tomas/bin
/usr/local/bin
/usr/bin
/bin
/usr/local/games
/usr/games

There are exactly six locations containing the commands I can call directly (i.e. not by their paths, but just by their executables' names).

  • /home/tomas/bin is my private directory in my home folder for my private executables.
  • /usr/local/bin I'll explain separately below.
  • /usr/bin is described above.
  • /bin is also described above.
  • /usr/local/games is a combination of /usr/local (to be explained below) and games
  • /usr/games are games. Not to be mixed with utility executables, they have their separate locations.
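How the shell uses those PATH entries can be sketched in a throwaway directory (the directory names and the `hello` command are made up for the example). The shell searches the entries left to right and runs the first match:

```shell
# Two hypothetical directories both provide a "hello" command.
demo=$(mktemp -d)
mkdir -p "$demo/first/bin" "$demo/second/bin"
printf '#!/bin/sh\necho from-first\n'  > "$demo/first/bin/hello"
printf '#!/bin/sh\necho from-second\n' > "$demo/second/bin/hello"
chmod +x "$demo/first/bin/hello" "$demo/second/bin/hello"

# Prepend both; the leftmost entry wins the lookup.
PATH="$demo/first/bin:$demo/second/bin:$PATH"
command -v hello   # shows the full path under $demo/first/bin
hello              # prints "from-first"
```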

Now to /usr/local/bin. This one is somewhat slippery, and was already explained here: What is /usr/local/bin?. To understand it, you need to know that the folder /usr might be shared by many machines and mounted from a net location. The commands there are not needed at bootup, as noted before, unlike those in /bin, so the location can be mounted in later stages of the bootup process. It can also be mounted in a read-only fashion. /usr/local/bin, on the other hand, is for the locally installed programs, and needs to be writable. So while many network machines might share the general /usr directory, each one of them will have their own /usr/local mounted inside the common /usr.

Finally, take a look at the PATH of my root user:

# echo "$PATH" | perl -F: -anlE'$,="\n"; say @F'
/usr/local/sbin
/usr/local/bin
/usr/sbin
/usr/bin
/sbin
/bin

It contains these:

  • /usr/local/sbin, which contains the admin commands of the type /usr/local
  • /usr/local/bin, which are the same ones the regular user can use. Again, their type can be described as /usr/local.
  • /usr/sbin are the non-essential administration utilities.
  • /usr/bin are the non-essential administration and regular user utilities.
  • /sbin are the essential admin tools.
  • /bin are the admin and regular user essential tools.
  • Thanks. I am very interested in how you organize the programs to be installed in /home/tomasz/bin/, see https://unix.stackexchange.com/questions/431793/how-to-organize-installed-programs-in-home-for-shell-to-search-for-them-when-ru?noredirect=1&lq=1 – Tim Mar 17 '18 at 17:07
  • My understanding was that while the separation of /bin and /usr/bin was indeed to avoid issues with network-mounted /usr directories, the separation of /usr and /usr/local is for differentiating between vendor-supplied files (/usr) and files manually installed on the machine itself (/usr/local) and had nothing to do with the network-mounting issues. Is this accurate? – Keiji Mar 17 '18 at 20:50
  • 4
    @Keiji, vendor-vs-local-management is the most common (and conventional) use for that distinction today, but if you look at traditional UNIX installations, /usr-off-a-network (vs /usr/local being, well, local) is not unheard of. – Charles Duffy Mar 17 '18 at 20:52
  • this is all such a bad idea in 2018 – amara Mar 17 '18 at 23:55
7

Nowadays, I think, this is a historical inheritance from classic UNIX.

In the first UNIX versions, programs were not as large as they are today. A program often consisted of a single executable file that used the system libraries. So nobody thought about programs that would consist of several libraries of their own. The main library was the C library, and every program knew its location.

Also, the UNIX environment was considered a finished product (for preparing documentation). Therefore the paths to all tools were fixed.

Some benefit of fixed paths from the HDD (Hard Disk Drive) days is still present today. If the FHS (Filesystem Hierarchy Standard) tree is split across separate disk partitions, and the partitions with binaries and libraries are placed near the primary sectors of the HDD, then program start-up will be a little faster.

  • 6
    There are compelling advantages that remain useful today. Keeping binaries in /usr/bin, static data files in /usr/share and files which can be modified in /var or /usr/var means that security measures can be used to enforce that nothing in share or var is executable (by using noexec flags for their mounts); that nothing outside var can be modified (on a system using dm_verity or similar measures to generate signed, read-only media); etc. – Charles Duffy Mar 17 '18 at 20:48
3

What you see as a modern Unix-like system is not really traditional.

Normally, there would be fairly minimal / and /usr hierarchies with just system utilities, and programs are then installed separately into a subdirectory of /usr/local, and then made available by creating symbolic links.

A very typical setup for GNU software was to compile and install with

./configure
make
make install prefix=/usr/local/DIR/program-1
cd /usr/local/DIR
stow program-1

The GNU stow utility creates symbolic links to make the software available on the standard path, without having to add any directories to the PATH variable (as Windows does, and cruft tends to accumulate there).
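The effect of that last `stow` step can be sketched with plain `ln -s` in a throwaway directory (the package name `program-1` and all paths here are made up for the example; real stow links every file in the package tree this way):

```shell
# Sketch of the symlink farm stow builds: the package lives in its own
# directory, and a relative symlink makes it visible on the shared bin path.
root=$(mktemp -d)
mkdir -p "$root/DIR/program-1/bin" "$root/bin"
printf '#!/bin/sh\necho program-1\n' > "$root/DIR/program-1/bin/prog"
chmod +x "$root/DIR/program-1/bin/prog"

# This is the kind of link `stow program-1` would create automatically:
ln -s ../DIR/program-1/bin/prog "$root/bin/prog"

"$root/bin/prog"   # prints "program-1"
```

Removing the package is then just deleting its directory and the dangling links, which is what `stow -D` automates.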

Modern Linux distributions however ship everything as ready-made packages, so programs have become part of the "system". Because the package manager takes care of installation, there is no need for symbolic links, and separating programs serves no useful purpose (but would slow down program startup as many small directories would have to be scanned).

If you want to install software into your home directory, I suggest you use GNU stow as well for that — this will allow you to keep your programs separate, which is sensible if you are not using a package manager.

My traditional setup for that is one directory ~/software/DIR that I install programs into, then using stow inside DIR to create ~/software/bin, ~/software/share etc. This means I only have to add ~/software/bin to the PATH variable to get all my installed software.

Use:

./configure --prefix=~/software
make
make install prefix=~/software/DIR/program-1
cd ~/software/DIR
stow program-1

to install if the program follows GNU conventions.
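With that layout, making everything visible to the shell takes a single line (a sketch assuming a Bourne-style shell; it would normally go into ~/.profile or similar):

```shell
# Everything stow has linked into ~/software/bin becomes callable by name:
export PATH="$HOME/software/bin:$PATH"
```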

3

You appear to be talking about the style of dividing up individual files by purpose (/usr/bin for executables, /usr/lib for libraries) rather than by application package (C++ compiler in one directory, image editing programs in another). While in Unix systems much of the reason for this is historical, there are also current-day forces that tend to make Unix-like systems lean towards it: package managers that manage most of the programs on a system.

On Windows, historically and still fairly much today, applications have been responsible for providing their own installer and, especially, uninstaller, and even now frequently don't register themselves with any central application list. In a situation like this it's generally better for an application to have its "own" directory for as many of its files as possible. This helps avoid conflicts with other applications, though this doesn't always work out (particularly in the case of DLLs).

Unix systems, on the other hand, have since the 90s generally each had a single accepted package manager and a group providing a large amount of commonly used software through this package manager. (Official package managers for various Unices include yum and apt for Linux systems, pkgsrc for NetBSD, and ports for FreeBSD. Often commercial Unix systems also end up with an unofficial but widely accepted package manager as well, such as brew for MacOS.)

These package managers have the advantage that they can and do track every file on the system in the various subdirectories that they "own." Because a single group assigns the name and location of every file here, they can all use a small set of shared directories. This offers various advantages, especially in sharing files between applications and in keeping low the number of paths you need to search for libraries and executables.

That said, there's a long tradition of "separate directory per application" installation in Unix as well, usually under the /opt directory.

cjs
  • 670