23

I was thinking back to my introduction to programming recently and remembered writing a C++ program that deliberately read and wrote to memory addresses at random. I did this to see what would happen.

To my surprise, on my Windows 98 PC, my program would create some really weird side effects. Occasionally it would toggle OS settings, or create graphical glitches. More often than not it would do nothing or just crash the entire system.

I later learned this was because Windows 98 didn't restrict what a user process had access to. I could read and write to RAM used by other processes and even the OS.

It is my understanding that this changed with Windows NT (though I think it took a while to get right). Now Windows prevents you from poking around in RAM that doesn't belong to your process.

I vaguely remember running my program on a Linux system later on and not getting nearly as many entertaining results. If I understand correctly, this is, at least in part, due to the separation of User and Kernel space.

So, my question is:

Was there a time when Linux did not separate User and Kernel space? In other words, was there a time when my rogue program could have caused similar havoc to a Linux system?

MetaFight
  • 341
  • 8
    hm, 32 bit mode under Windows 9x most definitely had memory protection (you could flip a register that other OSes would protect like their mother's porcelain collection, though, and switch to kernel mode). – Marcus Müller Mar 29 '22 at 19:30
  • 12
    Just a note that your timeline is a bit mixed around Windows. For a long time there were two breads of Windows: those derived from DOS (including Windows 95, 98, ME) and Windows NT. NT (released 1993) stood for "new technology", which referred to the new feature of CPUs to enable process isolation. Windows 2000 was the first attempt to "merge" the two by dropping the DOS base, but Microsoft chickened out at the last minute and released ME. It was not until XP that DOS-based Windows was dropped. – Philip Couling Mar 29 '22 at 19:45
  • @MarcusMüller there is a good chance I was building 16-bit binaries at the time. I was using a copy of TurboC++ that ran in DOS. I'm not even sure where I got a hold of it. – MetaFight Mar 30 '22 at 00:59
  • @PhilipCouling Sorry, I didn't mean to imply that NT came after Windows 98. I'm aware of the parallel lives of the two OSes. I didn't mention that explicitly because I felt my question was already a bit too rambly :/ It is relevant context, though. Thanks for providing it. – MetaFight Mar 30 '22 at 01:00
  • FWIW the formal name of this feature is Virtual Memory (though there is another type of virtual memory that does not offer memory protection, which was used on Mac OS Classic before OS X, but that is not what the term virtual memory usually means). – slebetman Mar 30 '22 at 03:51
  • @StephenKitt What I was saying is that the formal name for this kind of memory protection IS "Virtual Memory" (though there is another type of virtual memory that does not offer protection). The usual kind of virtual memory and separation of address spaces are inseparable. If the name of "memory protection" is "virtual memory", of course virtual memory offers protection. – slebetman Mar 30 '22 at 06:20
  • 2
    @slebetman sorry, I mistyped — I meant to write “you can have protection without virtual memory, see the 80286”. Virtual memory (as in, separate virtual and physical address spaces) is commonly used to provide protection, but it’s not required, and on its own it’s insufficient to protect the kernel from userspace (you also need some notion of privilege, e.g. x86 rings). – Stephen Kitt Mar 30 '22 at 06:30
  • 4
    IIRC Windows 98 did protect programs from each other, but for MS-DOS compatibility, all of them had access to the MS-DOS memory area, and for speed, all of them had access to the kernel memory area. So it wasn't really a security feature. – user253751 Mar 30 '22 at 10:23
  • 4
    @PhilipCouling Your "two breads of windows" got me wondering about metaphors for giving rise to other OSes or something. Took me a moment to realize yeast isn't involved and it should've been breeds. Gave me a good laugh and made me feel a bit silly – Indigenuity Mar 30 '22 at 14:47
  • 2
    @Indigenuity mobile autocorrect is good for a laugh – Philip Couling Mar 30 '22 at 15:19
  • 3
    @MarcusMüller: It didn't have very good memory protection; I also remember C programs crashing the whole machine fairly easily if you got anything wrong with pointers. (Cygwin or MinGW on Win98 at a summer job, writing a DLL in C to use from Excel; I got annoyed with Windows and installed a dual-boot Linux setup so I could use more familiar tools and not have to deal with crashes while I got the sine-wave fitting part right, and only polish off the API interface for Excel under Windows. And BTW, the Windows crashes were when running a stand-alone C program, not just calling into it from Excel.) – Peter Cordes Mar 31 '22 at 09:26
  • @PeterCordes uff! glad you escaped that hell, a bit :) – Marcus Müller Mar 31 '22 at 09:39
  • 1
    @MarcusMüller: That was in 1999, and IIRC it was my choice to offer to try using C to try to get a speedup at all, not something I was forced into; I was hired primarily as a physics student. (Turned out about a 60x speedup over the original VB macros, maybe more.) But yeah, unfortunately their IT people were pretty draconian about not wanting people to install unauthorized stuff (but this was 1999 so AFAIK none of them ever noticed; intrusive monitoring not being a thing). But my manager got on my case a bit about it. Not exactly hell, but yeah, Win9x is a joke / toy. – Peter Cordes Mar 31 '22 at 09:51
  • @Indigenuity Uh, I interpreted "two breads" to be "two threads", to keep it in the computing realm of metaphors. – Glen Yates Mar 31 '22 at 22:31

3 Answers

31

Linux has always protected the kernel by preventing user space from directly accessing the memory it uses; it has also always protected processes from directly accessing each other’s memory. Programs can only access memory through a virtual address space which gives access to memory mapped for them by the kernel; access outside allocated memory results in a segmentation fault. (Programs can access the kernel through system calls and drivers, including the infamous /dev/mem and /dev/kmem; they can also share memory with each other.)

“Is the MMU inside of Unix/Linux kernel? or just in a hardware device with its own memory?” explains how the kernel/user separation is taken care of in Linux nowadays (early releases of Linux handled this differently; see Linux Memory Management Overview and 80386 Memory Management for details).

Some Linux-related projects remove this separation; for example the Embeddable Linux Kernel Subset is a subset of Linux compatible with the 8086 CPU, and as a result it doesn’t provide hardware-enforced protection. µClinux provides support for embedded systems with no memory management unit, and its core “ingredients” are now part of the mainline kernel, but such configurations aren’t possible on “PC” architectures.

Stephen Kitt
  • 434,908
  • Couldn't one say that Linus wrote a modern *ix-like operating system for a modern desktop CPU? I think he was specifically motivated by the 80386 and the memory management facilities that it provided. – Peter - Reinstate Monica Apr 01 '22 at 21:06
  • @Peter one could, and that’s what Jörg’s answer says. That could also have been true with any number of “modern” CPUs even in the early 80s ;-). – Stephen Kitt Apr 02 '22 at 08:52
17

Was there a time when Linux did not separate User and Kernel space?

That depends on how you define the terms "Linux", "user space", and "kernel space".

Remember how Linus Torvalds originally created Linux. Linus saved some money to buy himself an (at the time state-of-the-art) PC with an Intel 80386 CPU. He wanted to understand how the 80386 works, and he thought the best way to do that would be to write some low-level hardware code in 80386 assembly. At the same time, he was also dissatisfied with the performance of the terminal emulator running under Minix he was using to log into the university.

So, he decided to write a terminal emulator in 80386 assembly which you could boot directly. For this, he needed to write a bootloader, a keyboard driver, a (character) display driver, a serial driver, and a driver for whatever protocol he used to connect to the university.

Soon, he found that he also wanted to download files from the university, so he had to implement some file transfer protocol (probably ZMODEM, but maybe he chose XMODEM for simplicity) and also a hard disk driver, partition table parser, and a filesystem driver for the Minix filesystem. Because he also wanted to continue working while the terminal emulator was performing some long-running operation (such as a file download), he implemented multithreading.

This is the point where he realized that he had already implemented some significant portions of an Operating System, and so he thought it would be another fun project to turn the terminal emulator into one.

At some point after that, he accidentally mistyped a command and overwrote his Minix partition with a backup. Now he had a choice to make: reinstall Minix or finish his Operating System and use that instead.

When he came to a point where his Operating System was capable of running simple programs, he decided to upload it to the university's FTP server, and named it Freax (he thought that naming something after himself was pretentious and arrogant). The sysadmin of the FTP server didn't like the name, though, and decided to rename the file to Linux, which he thought sounded better.

Another while later, Linus made the very first public mention of Linux in his famous message where he stated that Linux was so tied to the 80386 that it would never be portable to anything else and where he predicted that Linux would never be big and professional.

Now, the question is: at which point in this journey did "Linux" become "Linux" and at which point in this journey did Linux become a "kernel" so that talking about separation of user and kernel space even makes sense?

As I said at the beginning: it really depends on how you define those terms.

In other words, was there a time when my rogue program could have caused similar havoc to a Linux system?

There was certainly a time in its evolution when the piece of software which later became an OS, and which was later called "Linux", had no protection, had services complex enough that you might get away with calling it a "kernel", and had subsystems independent enough that you might get away with calling those subsystems "programs". For example, in a "real" Unix system, the terminal emulator and the file transfer would typically be two separate programs, and the thing that coordinates the two, and which houses the code for accessing the serial port, the hard disk, the screen, the keyboard, and the filesystem, would be the kernel.

But was this piece of software "Linux"? I will leave that to you to decide.

  • 4
    That’s an interesting take! I would add that the first release of Linux (0.01) separated kernel space from user space, albeit using 386 LDTs and TSSs rather than paging. – Stephen Kitt Mar 31 '22 at 04:02
  • 8
    This answer is mostly repeating the Linux story, ignoring the fact that the very first code already used the 386's 32-bit protected mode while Windows 98 still worked on top of 16-bit real mode MS-DOS. That early Linux had no protection is just nonsense. Linux 0.99.X definitely had the basic UNIX protection. – U. Windl Mar 31 '22 at 13:31
  • @U.Windl Windows 98 ran in 386 protected mode; it had a preemptively multitasking kernel with per-process virtual address spaces, and the filesystem and device drivers were 32-bit VxDs. It booted on top of DOS for compatibility with old drivers that didn't have VxD equivalents, but they were uncommon at that point I think. It wasn't a great OS, but it was much more of one than you give it credit for. – benrg Oct 07 '22 at 17:25
  • @benrg The truth is that Windows 98 was mostly a 16-bit operating system being executed in some type of 32-bit kernel, started from a 16-bit operating system (MS-DOS). And you could use the old MS-DOS drivers. Linux never was a 16-bit operating system once the kernel was loaded (the BIOS could not run 32-bit programs at that time). – U. Windl Oct 18 '22 at 09:55
5

Yes, Linux has always needed an MMU for memory protection. Many people have ported it to small embedded systems without an MMU, but those ports lack complete memory protection, so a process can read and write pretty much everything.

phuclv
  • 2,086