This is more idle curiosity than anything else. A friend of mine asked me 'which port range is it that only root can use under Linux?' I told him 0-1023 was restricted. Then he asked me why it was so and... I was at a loss. No idea whatsoever.

Is there a reason why these ports are restricted and 1024-65535 are not?

Most major network services (HTTP, FTP, SSH, Telnet, HTTPS, POP, SMTP, etc.) have their default ports in this range, so the possible explanations I could think of were:

  • An untrusted user could run a program that listened on these ports for logon details.
  • An untrusted user could run an unauthorized server application.

Can anyone shed light here?

1 Answer

Suppose you're exchanging data with a computer on a port <1024, and you know that computer is running some variant of unix. Then you know that the service running on that port is approved by the system administrator: it's running as root, or at least had to be started as root.

Out in the wide, wild world of the Internet, this doesn't matter: most servers are administered by the same people as the services running on them, so you have no reason to trust a remote machine's root any more than its other users.

With multiuser machines, especially on a local network, this can matter. For example, in the days before civilian cryptography, a popular method of running shell commands on another machine was rsh (remote shell); you could use password authentication, or you could authenticate just by proving you were user X on machine A (with machine B knowing that X@A could log in as X@B with no password). How to prove that? The rsh client is setuid root, and uses a port number <1024, so the server knows that the client it's talking to is trustworthy and won't lie as to which user on A is invoking it.

Similarly, NFS was designed to be transparent with respect to users and permissions, so a common configuration was that on a local network every machine used the same user database, and user N at A mounting filesystems from server B would get the permissions of user N at B. Again, the fact that the NFS client is coming from a port number <1024 proves that root at A has vetted the NFS client, which is supposed to make sure that if it transmits a request purporting to be from user N then that request really is from user N.

Unauthorized users not being able to run servers on low ports is another benefit, but not the main one. Back in the day, spoofing was quite a novelty, and users running spoof servers would be quickly quashed by vigilant administrators anyway.
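
To make the restriction concrete, here is a minimal sketch (Python, purely illustrative; it assumes a Linux machine where the process has neither root nor CAP_NET_BIND_SERVICE). It shows why a service on a low port must have been started by root: an ordinary user's attempt to bind below 1024 is refused with EACCES, while a high port works.

    import errno
    import socket

    def try_bind(port):
        """Attempt to bind a TCP socket to the given port on all interfaces."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("0.0.0.0", port))
            print("port %d: bind succeeded" % port)
        except OSError as e:
            # Without root (or CAP_NET_BIND_SERVICE), low ports give EACCES.
            if e.errno == errno.EACCES:
                print("port %d: permission denied (privileged port)" % port)
            else:
                raise
        finally:
            s.close()

    try_bind(80)    # < 1024: fails unless run as root or with the capability
    try_bind(8080)  # >= 1024: any user may bind (if the port is free)

(On modern Linux the boundary is actually configurable via the net.ipv4.ip_unprivileged_port_start sysctl, and the CAP_NET_BIND_SERVICE capability lets a specific unprivileged process bind low ports.)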

  • So, sort of a poor-man's authentication, then? Does this convention have any real benefit in modern *nix-like operating systems? – Andrew Lambert Jul 13 '11 at 01:10
  • @Amazed: The unix world is conservative, so the question to ask is "Does it cause any real trouble?" (and it should be answered in the full knowledge that every server worth running has a command line argument to change the port). – dmckee --- ex-moderator kitten Jul 13 '11 at 02:47
  • @dmckee it could also be argued that such a design leads to more servers running as root, even if they have the option of running on alternate ports. – Andrew Lambert Jul 13 '11 at 04:10
  • @Amazed It can still occasionally be useful today, on local networks. I don't think it leads to more servers running as root: services can bind the port then drop privileges (see the sketch after these comments), or use capabilities if available, or the admin can redirect a port in the firewall configuration. I don't think it would be put in if unix was designed today, but it doesn't hurt. – Gilles 'SO- stop being evil' Jul 13 '11 at 07:11
  • Certainly in enterprise scenarios it gives an excellent solution, and as @Gilles said, services bind the port then drop privs. – Rory Alsop Jul 13 '11 at 10:38
  • This nonsense should long be gone from the kernel. No port number should have any special meaning. The "reasoning" behind that design is long outdated (I'd think it was controversial even at design time). But what's worse than the idea of any special number ranges that are "trustworthy" are the implications. Webservers need to be executed as root just to serve webpages. A single exploit and the whole server is gone. And what for? For legacy design that never even slightly worked. – omni Sep 15 '18 at 00:37
  • @omni most people just run nginx on port 80 though, which either runs workers as www-data or forwards to another (non-root) application, so such exploits often have to be in nginx's root module itself, which are not exactly common. On the other hand, most modern deployments just use iptables or tools using iptables like container networks. (Most modern port 80 deployments are just simple webpage servers that would be using cloud hosting anyway) (I just realized this answer was 12 years ago) – SOFe Aug 29 '23 at 06:42
  • That said, this argument only makes sense for application-level protocols like HTTP and HTTPS, but privileged ports are still reasonable for applications that are actually root only, though probably better implemented as user-owned ports (something like chown root /sys/ports/22?) than a magic number 1024. – SOFe Aug 29 '23 at 06:49
  • @SOFe sure, you run modern applications differently. People started building best practices around those stupid legacy decisions. That doesn't make those decisions less stupid though. I don't get your argument. Someone shits on the street and instead of pointing him to a toilet you claim it's not an issue as long as everyone walks around it. – omni Dec 24 '23 at 10:50
  • @SOFe And for the second part: what do you mean by "reasonable for applications that are actually root only"? What security benefit exactly do you gain by telling the kernel "only a root app can listen on this port?" – omni Dec 24 '23 at 10:53
  • it doesn't make sense that only root can listen on a port, but it makes sense to ensure that a user can exclusively own a port. – SOFe Dec 25 '23 at 11:21
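
As a follow-up to the bind-then-drop-privileges point in the comment above, here is a minimal sketch of that pattern (Python, started as root; the unprivileged account name "nobody" and port 80 are illustrative assumptions, not taken from the thread):

    import os
    import pwd
    import socket

    UNPRIVILEGED_USER = "nobody"  # assumed account to drop to; adjust as needed

    # Step 1: bind and listen on the privileged port while still running as root.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 80))
    listener.listen(16)

    # Step 2: drop privileges; the already-bound socket remains usable.
    pw = pwd.getpwnam(UNPRIVILEGED_USER)
    os.setgroups([])      # clear supplementary groups while still root
    os.setgid(pw.pw_gid)  # then drop the group ID
    os.setuid(pw.pw_uid)  # and finally the user ID

    # The process can no longer regain root, but it can still accept()
    # connections on port 80 through the socket bound in step 1.
    conn, addr = listener.accept()
    print("connection from", addr)
    conn.close()
    listener.close()

The capabilities alternative mentioned in the same comment avoids starting as root at all: granting the binary cap_net_bind_service (for example, setcap cap_net_bind_service=+ep /path/to/server) lets it bind low ports as an ordinary user. The firewall-redirect approach likewise keeps the server unprivileged, e.g. an iptables REDIRECT rule sending port 80 traffic to a high port the service actually listens on.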