Gustavo Duarte

Brain Food for Hackers

Performance Is a Science

For 2,000 years scholars held that heavier objects fall faster than lighter ones, partly because Aristotle couldn’t be bothered to take 2 minutes to experiment. Hell, he even wrote that men have more teeth than women. Isn’t that crazy? And yet, people often rely on this kind of fact-free reasoning to arrive at conclusions about computer performance, among other things. Worse, they spend their IT budgets or sacrifice code clarity based on these flawed ideas. In reality computers are far too complex for anyone to handle performance problems by “reasoning” alone.

Galileo On Pisa (This story is not true by the way)

Think about a routine in a modern jitted language. Right off the bat you face hidden magic like type coercion, boxing, and unboxing. Even if you know the language intimately, unknowns are introduced as your code is optimized first by the compiler, then again by the JIT compiler. It is then fed to the CPU, where optimizations such as branch prediction, memory prefetching and caching have drastic performance implications. What’s worse, much of the above can and does change between different versions of compilers, runtimes, and processors. Your ability to predict what is going to happen is limited indeed.

To take another example, consider a user thinking of RAID-0 to boost performance. Whether there are any gains depends on a host of variables. What are the patterns of the I/O workload? Is it dominated by seeks and random operations, or is there a lot of streaming going on? Reads or writes? How does the kernel I/O scheduler play into it? How smart are the RAID controller and drivers? How will a journaling file system impact performance given the need for write barriers? What stripe sizes and file system block sizes will be used? There are way too many interdependent factors and interactions for speculative analysis. Even kernel developers are stumped by surprising and counterintuitive performance results.

Measurement is the only way to go. Without it, you’re in the speculation realm of performance tuning, the kingdom of fools and the deluded. But even measurement has its problems. Maybe you’re investigating a given algorithm by running it thousands of times in a row and timing the results. Is that really a valid test? By doing so you are measuring a special case where the caches are always hot. Do the conclusions hold in practice? Most importantly, do you know what percentage of time is spent in that algorithm in the normal use of the application? Is it even worth optimizing?
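
To see the pitfall concretely, here is a minimal C sketch of the naive approach (the names and numbers are made up for illustration): it times thousands of back-to-back runs, so every run after the first finds the data already sitting in the caches.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define RUNS        10000
    #define WORKING_SET (256 * 1024)   /* small enough to stay in L2 */

    /* Stand-in for the algorithm under test: walks a buffer. */
    static long checksum(const unsigned char *buf, size_t len) {
        long sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }

    int main(void) {
        unsigned char *buf = malloc(WORKING_SET);
        memset(buf, 0xAB, WORKING_SET);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        long sink = 0;
        for (int i = 0; i < RUNS; i++)
            sink += checksum(buf, WORKING_SET);  /* caches are hot after run 1 */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("avg %.0f ns/run (sink %ld)\n", ns / RUNS, sink);
        free(buf);
        return 0;
    }

The average this prints reflects the hot-cache special case; whether that matches your application’s real access patterns is exactly the question the loop cannot answer.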

LHC - CMS Detector

Or say you’ve got a fancy new RAID-0 setup. You run some benchmark that writes large globs of data to the disk and see that your sustained write throughput is twice that of a single disk. Sounds great – too bad it has no bearing on most real-world workloads. The problem with the naive timing test and the benchmark is that they are synthetic measurements. They are scarcely better than speculation.

To tackle performance you must make accurate measurements of real-world workloads and obtain quantitative data. Thus we as developers must be proficient using performance measurement tools. For code this usually means profiling so you know exactly where time is being spent as your app runs. When dealing with complex applications, you may need to build instrumentation to collect enough data. Tools like Cachegrind can help paint a fuller picture of reality.
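
When off-the-shelf profilers fall short, even crude instrumentation goes a long way. Here’s a minimal sketch in C – the macro names are my own, not from any library – that accumulates the wall-clock time a region consumes over a whole run:

    #include <stdio.h>
    #include <time.h>

    static double region_total_ns;  /* accumulated across the run */

    #define REGION_BEGIN() \
        struct timespec _t0; clock_gettime(CLOCK_MONOTONIC, &_t0)
    #define REGION_END() do { \
        struct timespec _t1; clock_gettime(CLOCK_MONOTONIC, &_t1); \
        region_total_ns += (_t1.tv_sec - _t0.tv_sec) * 1e9 \
                         + (_t1.tv_nsec - _t0.tv_nsec); \
    } while (0)

    static void suspect_routine(void) { /* code being measured */ }

    int main(void) {
        for (int i = 0; i < 1000; i++) {
            REGION_BEGIN();
            suspect_routine();
            REGION_END();
        }
        printf("suspect_routine: %.2f ms total\n", region_total_ns / 1e6);
        return 0;
    }

Run under a realistic workload, totals like this answer the question synthetic loops cannot: what fraction of overall runtime the routine actually accounts for.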

For website load times and networks you might use tools like Wireshark and Fiddler, as Google did for Gmail. In databases, use SQL profiling to figure out how much CPU, reading, and writing each query is consuming; these are more telling than the time a query takes to run, since the query might be blocked or starved for resources, in which case elapsed time doesn’t mean much. Locks and who is blocking whom are also crucial in a database. When looking at a whole system, use your OS tools to record things such as CPU usage, disk queue length, I/Os per second, I/O completion times, swapping activity, and memory usage.

In sum, do what it takes to obtain good data and rely on it. I’m big on empiricism overall, but in performance it is everything. Don’t trust hearsay, don’t assume that what held in version 1 is still true for version 2, question common wisdom and blog posts like this one. We all make comical mistakes, even Aristotle did. Naturally, it takes theory and analysis to decide what to measure, how to interpret it, and how to make progress. You need real-world measurement plus reasoning. Like science.

Home Row Computing

Update August, 2013: Stick Shift is a product that implements this on Windows.

This post teaches you how to set up your computer so that you can generate any keystroke or key combination without taking your hands off home row (the ‘asdf’ row of keys). You can then use the arrow keys, page up/down, shortcuts, and more while in regular typing position. It works across applications, allowing you to type and move about much faster. I call it ‘home row computing’ and it’s something I started doing after I learned touch typing.

I’ll explain the idea and give step-by-step instructions for Windows systems. The first step is to do what power users have done all along: reclaim the Caps Lock key. It is a fine piece of keyboard real estate gone completely to waste, like a landfill in the middle of Manhattan. We must also do some key remapping:

Keyboard Remapping

The idea is to change Caps Lock so that it can be combined with other accessible keys to produce all of the faraway keys shown above, plus frequent key combinations and anything else you might want. When I say “map to home row” I mean mapping to keys that can be reached while your hands remain on home row. Nearby letter keys are fine too. For example, Caps Lock+j could become the Up arrow and Caps Lock+e could become Alt+F4.

It is common to turn Caps Lock into Control, but we don’t actually want that because nearly all Control+X combinations are already taken by various programs, so we wouldn’t be able to do much remapping. Some Control+X combinations are very high yield (e.g., the copy/paste/cut combos) so we’ll remap those onto home row. I think the best substitute for Caps Lock is the “Apps Key”. Since no software uses it as a shortcut ingredient, we can hijack it without fear. Also, its normal role is to open a right-click context menu, so that’s a useful thing to have on home row as it helps you stay clear of the mouse.

So download SharpKeys, run it, and turn Caps Lock into the Application Key as shown below:

SharpKeys screenshot, remap Application Key onto Caps Lock

OK, then click “Write to registry”, log off and back on, and you’re in business. The second step is to download the outstanding AutoHotKey to run our remapping script. AutoHotKey is a must-have for Windows power users, featured repeatedly on Lifehacker. Since it is a full-on automation and scripting engine there are many ways to do things. However, its native key-remapping syntax (OldKey::RemappedKey) doesn’t work for us here, since we want to map key combinations into single keys. But you don’t have to worry about that: just download my ready-made script for either the qwerty (normal) keyboard layout or the Dvorak (freak) layout. Put it anywhere in your file system, then create a shortcut to the script in the Programs > Startup folder so it runs when you log on to Windows. By inspecting the script you’ll be able to tweak it to your taste. Here’s what it does out of the box:

Home Row Computing Cheatsheet

Thanks to AutoHotKey’s eliteness the layout works well in practice. The timing is not quirky at all and there are zero misfires. If you hold down Caps Lock, you can press the other keys repeatedly and they’ll be remapped. Let go of Caps Lock and they’re back to normal. The script also handles modifiers such as Alt and Shift being pressed along with the key combinations. It’s pretty transparent. If you want to actually toggle Caps Lock, then Windows Key + Caps Lock will do the trick.

It is a joy to have navigation keys on the home row. It makes browsing, programming and writing much smoother. You can fly through dialogs. Having Esc nearby is not bad either. All in all, I can’t imagine going back to a regular keyboard. Given AutoHotKey’s power you can write scripts to handle key combinations, so there are many possibilities. I adapted to this thing pretty fast; the symmetries between page up/up arrow, home/left and so on helped a bit. Again, it’s trivial to pick your own bindings: take a look at other ideas for cursor movement and cook up your own scheme if you wish.

Some quick thoughts regarding text editors. When I first did this I was an Emacs user, so having a convenient Control key was crucial, but I still think it’s better to turn Caps Lock into the Apps Key instead. Then you can pick and choose your macro bindings between Control and Caps Lock, and given AutoHotKey’s power you have a lot of leeway. I have since switched to Vim (apostasy!!) and never looked back. Vim users will recognize the beauty of having arrow keys on the home row, but will probably barf at the exact key choices I used since hjkl is burned into their brains. Just edit the script. You’ll like navigating without switching modes, and Esc on the home row is great for when you do switch. In Visual Studio this works seamlessly without interfering with any of the native shortcuts.

I’ve tried this out on Windows XP, Server 2003, Vista, and Server 2008, both 32- and 64-bit flavors. No problems that I know of, but use it at your own risk. The script works over PuTTY, so the keys are available on the command line and in Vim for all of the Unix boxes I use. If you know of similar approaches in other OSes, I’d love to hear about them in the comments. Hope this is useful!

Update: Amjith pointed me to XKeymacs, a tool that implements the same idea but for emacs keybindings (you could do a similar thing via AutoHotKey, but this is convenient for sure). Also, you guys in the comments are all in favor of Vim-style HJKL bindings, so if anyone makes a script to do that, I’d be happy to host it.

Update 2: Simon Scarfe has posted Vim-like bindings for qwerty here. See the comments for details. Thanks!

Update 3: Paul has posted a Linux implementation of this in a comment below. Sweet.

What Your Computer Does While You Wait

This post takes a look at the speed – latency and throughput – of various subsystems in a modern commodity PC, an Intel Core 2 Duo at 3.0GHz. I hope to give a feel for the relative speed of each component and a cheatsheet for back-of-the-envelope performance calculations. I’ve tried to show real-world throughputs (the sources are posted as a comment) rather than theoretical maximums. Time units are nanoseconds (ns, 10⁻⁹ seconds), milliseconds (ms, 10⁻³ seconds), and seconds (s). Throughput units are in megabytes and gigabytes per second. Let’s start with CPU and memory, the north of the northbridge:

Latency and throughput in an Intel Core 2 Duo computer, North Side

The first thing that jumps out is how absurdly fast our processors are. Most simple instructions on the Core 2 take one clock cycle to execute, hence a third of a nanosecond at 3.0GHz. For reference, light only travels ~4 inches (10 cm) in the time taken by a clock cycle. It’s worth keeping this in mind when you’re thinking of optimization – instructions are comically cheap to execute nowadays.

As the CPU works away, it must read from and write to system memory, which it accesses via the L1 and L2 caches. The caches use static RAM, a much faster (and more expensive) type of memory than the DRAM used as main system memory. The caches are part of the processor itself, and the pricier memory buys us very low latency. One way in which instruction-level optimization is still very relevant is code size. Due to caching, there can be massive performance differences between code that fits wholly into the L1/L2 caches and code that needs to be marshalled into and out of the caches as it executes.
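
Here’s a small C sketch of the same effect on the data side – identical work done in two traversal orders. Row order walks consecutive addresses and uses each cache line fully; column order strides 16 KB between accesses and misses the cache almost every time:

    #include <stdio.h>
    #include <time.h>

    #define N 4096
    static int m[N][N];   /* 64 MB: far larger than the L2 cache */

    static double now_ms(void) {
        struct timespec t;
        clock_gettime(CLOCK_MONOTONIC, &t);
        return t.tv_sec * 1e3 + t.tv_nsec / 1e6;
    }

    int main(void) {
        long sum = 0;
        double t = now_ms();
        for (int i = 0; i < N; i++)       /* row order: consecutive */
            for (int j = 0; j < N; j++)   /* addresses              */
                sum += m[i][j];
        printf("rows:    %.0f ms\n", now_ms() - t);

        t = now_ms();
        for (int j = 0; j < N; j++)       /* column order: each access */
            for (int i = 0; i < N; i++)   /* lands 16 KB further along */
                sum += m[i][j];
        printf("columns: %.0f ms (sum %ld)\n", now_ms() - t, sum);
        return 0;
    }

On typical hardware of this era the column walk runs several times slower, purely from cache misses – the code is otherwise identical.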

Normally when the CPU needs to touch the contents of a memory region they must either be in the L1/L2 caches already or be brought in from the main system memory. Here we see our first major hit, a massive ~250 cycles of latency that often leads to a stall, when the CPU has no work to do while it waits. To put this into perspective, reading from L1 cache is like grabbing a piece of paper from your desk (3 seconds), L2 cache is picking up a book from a nearby shelf (14 seconds), and main system memory is taking a 4-minute walk down the hall to buy a Twix bar.

The exact latency of main memory is variable and depends on the application and many other factors. For example, it depends on the CAS latency and specifications of the actual RAM stick that is in the computer. It also depends on how successful the processor is at prefetching – guessing which parts of memory will be needed based on the code that is executing and having them brought into the caches ahead of time.

Looking at L1/L2 cache performance versus main memory performance, it is clear how much there is to gain from larger L2 caches and from applications designed to use them well. For a discussion of all things memory, see Ulrich Drepper’s What Every Programmer Should Know About Memory (pdf), a fine paper on the subject.

People refer to the bottleneck between CPU and memory as the von Neumann bottleneck. Now, the front side bus bandwidth, ~10GB/s, actually looks decent. At that rate, you could read all of 8GB of system memory in less than one second or read 100 bytes in 10ns. Sadly this throughput is a theoretical maximum (unlike most others in the diagram) and cannot be achieved due to delays in the main RAM circuitry. Many discrete wait periods are required when accessing memory. The electrical protocol for access calls for delays after a memory row is selected, after a column is selected, before data can be read reliably, and so on. The use of capacitors calls for periodic refreshes of the data stored in memory lest some bits get corrupted, which adds further overhead. Certain consecutive memory accesses may happen more quickly but there are still delays, and more so for random access. Latency is always present.

Down in the southbridge we have a number of other buses (e.g., PCIe, USB) and peripherals connected:

Latency and throughput in an Intel Core 2 Duo computer, South Side

Sadly the southbridge hosts some truly sluggish performers, for even main memory is blazing fast compared to hard drives. Keeping with the office analogy, waiting for a hard drive seek is like leaving the building to roam the earth for one year and three months. This is why so many workloads are dominated by disk I/O and why database performance can drive off a cliff once the in-memory buffers are exhausted. It is also why plentiful RAM (for buffering) and fast hard drives are so important for overall system performance.

While the “sustained” disk throughput is real in the sense that it is actually achieved by the disk in real-world situations, it does not tell the whole story. The bane of disk performance is seeks, which involve moving the read/write heads across the platter to the right track and then waiting for the platter to spin around to the right position so that the desired sector can be read. Disk RPMs refer to the speed of rotation of the platters: the faster the RPMs, the less time you wait on average for the rotation to bring up the desired sector, hence higher RPMs mean faster disks. A cool place to read about the impact of seeks is the paper where a couple of Stanford grad students describe the Anatomy of a Large-Scale Hypertextual Web Search Engine (pdf).
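
The rotational arithmetic is simple: at 7,200 RPM the platter completes a revolution every 60/7,200 s ≈ 8.3 ms, so on average you wait half a revolution, ~4.2 ms, for the sector to come around – and that’s on top of moving the heads. A 15,000 RPM drive cuts the average rotational wait to ~2 ms.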

When the disk is reading one large continuous file it achieves greater sustained read speeds due to the lack of seeks. Filesystem defragmentation aims to keep files in continuous chunks on the disk to minimize seeks and boost throughput. When it comes to how fast a computer feels, sustained throughput is less important than seek times and the number of random I/O operations (reads/writes) that a disk can do per time unit. Solid state disks can make for a great option here.

Hard drive caches also help performance. Their tiny size – a 16MB cache in a 750GB drive covers only 0.002% of the disk – suggests they’re useless, but in reality their contribution is to let the disk queue up writes and then perform them in one bunch, allowing the disk to plan the order of the writes in a way that – surprise – minimizes seeks. Reads can also be grouped in this way for performance, and both the OS and the drive firmware engage in these optimizations.

Finally, the diagram has various real-world throughputs for networking and other buses. Firewire is shown for reference but is not available natively in the Intel X48 chipset. It’s fun to think of the Internet as a computer bus. The latency to a fast website (say, google.com) is about 45ms, comparable to hard drive seek latency. In fact, while hard drives are 5 orders of magnitude removed from main memory, they’re in the same league as the Internet. Residential bandwidth still lags behind that of sustained hard drive reads, but the ‘network is the computer’ in a pretty literal sense now. What happens when the Internet is faster than a hard drive?

I hope this diagram is useful. It’s fascinating for me to look at all these numbers together and see how far we’ve come. Sources are posted as a comment. I posted a full diagram showing both north and south bridges here if you’re interested.

First Recorded Usage of "Hacker"

Here’s the first known recorded usage of the word “hacker” in the tech sense, published in 1963 in MIT’s The Tech newspaper:

First recorded usage of hacker

It was tracked down by Fred Shapiro, editor of The Yale Dictionary Of Quotations and author of a paper with a most hilarious and offensive name. The MIT article dispels the common notion that “hacker” was a purely white-hat term later corrupted by the media. The black-hat connotation was there early on; Richard Stallman was 10 years old when this was printed.

Most of what the article describes is now known as phone phreaking, though some sounds like war dialing. It is fascinating that this was taking place in ’63, over four decades ago! The international phreaking scene was still strong through the late 90s, abusing home country direct lines with software like BlueBEEP (“when freedom is outlawed, only outlaws will be free”), plus cell phone cloning and whatnot. That’s nearly 40 years of phreaking, though phone companies have since managed to stop widespread fraud.

A hacker Hacking

This of course doesn’t “prove” that the black-hat meaning is the “true” meaning of hacker, just as “one who is employed doing mathematical calculations” is not the “true” meaning of computer. Language is fluid. The New Hacker’s Dictionary has this to say in the word’s definition:

  1. A person who enjoys exploring the details of programmable systems and how to stretch their capabilities, as opposed to most users, who prefer to learn only the minimum necessary.
  2. One who programs enthusiastically (even obsessively) or who enjoys programming rather than just theorizing about programming. (…)
  3. [deprecated] A malicious meddler who tries to discover sensitive information by poking around. Hence “password hacker”, “network hacker”. The correct term for this sense is cracker.

The white-hat definitions are popular among geeks, but I’m not so sure about the deprecation. For one thing, cracker has a precise and also popular meaning: one who removes copyright protection from software. Meanwhile, blackhats aren’t exactly rolling over to surrender their language either. From the latest Phrack issue:

So no, I wasn’t that kid that used to hang out at Radio Shack pulling apart electronic equipment and reassembling it to “see how it works.” (…) that doesn’t make you a “hacker” – it makes you a wannabe EE undergrad. (…) Hacking boxes makes you a “hacker” ! That’s right! Write your local representatives at Wikipedia / urbandictionary / OED and let them know that hackers are people that gain unauthorized access/privileges to computerized systems!

The whole thing would offend most people, so no link. It’s readily googlable, but be careful about browser exploits – you never know. As a neutral party, Wikipedia has sensible guidelines when it comes to controversial names:

A city, country, people or person, by contrast, is a self-identifying entity: it has a preferred name for itself. The city formerly called Danzig now calls itself Gdansk; the man formerly known as Cassius Clay now calls himself Muhammad Ali. These names are not simply arbitrary terms but are key statements of an entity’s own identity. This should always be borne in mind when dealing with controversies involving self-identifying names. (…)

A number of objective criteria can be used to determine common or self-identifying usage: Is the name in common usage in English? (…) Is it the name used by the subject to describe itself or themselves?

These criteria apply squarely to both types of “hackers.” Common usage? Check. Used to describe themselves? Definitely. The word is now hopelessly ambiguous, as it seems to have been from the start, puzzling outsiders. But it’s always clear to hackers.

Richard Feynman’s Modest Science

When I was 18 and newly arrived in the US, I used to wander around enjoying new features like the rule of law and great libraries everywhere. Once while bumming around in North Denver I went into the Regis University library determined to read about physics. I had tried that once before, back in my high school, with poor results. As a teenager I had been obsessed with “understanding” physics and chemistry, especially atomic and quantum theory. I didn’t know enough math to study the subjects deeply, but I wanted a conceptual grasp, however incomplete, that was at least half-way consistent and clear.

My high school books and classes left me with the strong feeling that I simply did not get physics. Try as I might, I could not accept the bizarre results of quantum mechanics, wave-particle duality, or how Heisenberg’s uncertainty principle could even be science. I was baffled. When I brought it up with teachers, they had ready-made analogies to “teach” what happened in this sub-atomic world. “Think of the solar system,” “think of springs connected to each other,” “well, it’s like this, suppose you have…” These analogies didn’t help at all. “You think too concretely, that’s why you can’t visualize it,” a teacher told me.

So I’d sit there and try things, think nonverbally, think in wild shapes, somehow think differently to see if I could imagine a sub-atomic particle and “get it.” No go. I wondered whether programming had perhaps damaged my mind by making it inflexible. I went to the school library and found a more advanced physics book, a bit tattered but no matter. I quit reading when I realized the book still assumed the existence of the ether. “Screw this,” I thought. So I flipped off the science bit, kept to my computers, and carried on.

Feynman Lectures on Physics

But here I was in the USA, land of opportunity and well-stocked libraries. Looking in the physics section I saw “The Feynman Lectures on Physics” sitting there, three volumes. I had a vague idea of who Feynman was, so I picked up the books and went straight to Volume 3, Chapter 1, Quantum Behavior. In the very first page he comes right out and says:

Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen. (…)

Because atomic behavior is so unlike ordinary experience, it is very difficult to get used to, and it appears peculiar and mysterious to everyone—both to the novice and the experienced physicist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not, because all of direct human experience and human intuition applies to large objects. We know how large objects will act, but things on a small scale just do not act that way.

I felt a rush of enthusiasm reading this. It was so humble and visceral and honest. This was science in a way I had never seen before, simultaneously more rigorous and human. That first page alone took a sledgehammer to my worldview and started rebuilding it. Perhaps childishly, I thought of the Hacker’s Manifesto: “we’ve been spoon-fed baby food at school when we hungered for steak.” I had just found one hell of a juicy steak. At one point Feynman asks students to imagine the various electromagnetic fields and waves in the classroom: coming from the earth’s interior, carrying radio and TV signals, traveling from warm foreheads to the blackboard, and so on. Then he says:

I have asked you to imagine these electric and magnetic fields. What do you do? Do you know how? How do I imagine the electric and magnetic field? What do I actually see? What are the demands of the scientific imagination? Is it any different from trying to imagine that the room is full of invisible angels? No, it is not like imagining invisible angels. It requires a much higher degree of imagination (…). Why? Because invisible angels are understandable. (…) So you say, “Professor, please give me an approximate description of the electromagnetic waves, even though it may be slightly inaccurate, so that I too can see them as well as I can see almost-invisible angels. Then I will modify the picture to the necessary abstraction.”

I’m sorry I can’t do that for you. I don’t know how. I have no picture of this electromagnetic field that is in any sense accurate. (…) So if you have some difficulty in making such a picture, you should not be worried that your difficulty is unusual.

Volume 2, pages 20-9 and 20-10

Surely you’re joking – you don’t know?? I could hardly believe what I was reading. I had been hoping for a better explanation – a masterful analogy of weights on springs that would allow me to really understand physics. Instead, here was a Nobel laureate telling me that he didn’t really understand it either – not in the definite, make-believe fashion of high school science. Feynman lifted the veil for me – all my sanitized textbooks and uninspired teachers presented science with finality and devoid of context, as if the gods had handed down a few scientific models to us. Analogies that were meant to “help understand” reality had in fact supplanted it; it was not simplification, but a gross distortion of what science really is. This fake teaching would never say that atomic behavior is “peculiar and mysterious” because “human intuition applies to large objects.” No, its entire aim was to pretend that science is not mysterious.

Feynman embraces the whole of science: its beauty, its methods, the history and relationships of its ideas, how our minds react to it, and above all how it stands before the ultimate judge, nature. He’s at once fiercely empirical yet mindful of the crucial human context surrounding scientific ideas. The lectures are not only great technical writing but also a deep look into how we think about the world, into reason and the nature of knowledge. Of course, much of the work is to be done with paper, pencil, and math. Back then I didn’t even know calculus, so I couldn’t really follow all the equations. But the books still gave me what I was looking for, and then some.

Now I have an undergrad in math, which puts me roughly in the 18th century, but better equipped to learn on my own. Some day I hope to take time off and hit physics again. If you want to read more of his stuff, Feynman wrote an insightful essay on engineering and there’s the classic Cargo Cult Science, both online. Amazon has the lectures along with other books, and so might your local library. :)

CPU Rings, Privilege, and Protection

You probably know intuitively that applications have limited powers in Intel x86 computers and that only operating system code can perform certain tasks, but do you know how this really works? This post takes a look at x86 privilege levels, the mechanism whereby the OS and CPU conspire to restrict what user-mode programs can do. There are four privilege levels, numbered 0 (most privileged) to 3 (least privileged), and three main resources being protected: memory, I/O ports, and the ability to execute certain machine instructions. At any given time, an x86 CPU is running in a specific privilege level, which determines what code can and cannot do. These privilege levels are often described as protection rings, with the innermost ring corresponding to highest privilege. Most modern x86 kernels use only two privilege levels, 0 and 3:

x86 Protection Rings

About 15 machine instructions, out of dozens, are restricted by the CPU to ring zero. Many others have limitations on their operands. These instructions can subvert the protection mechanism or otherwise foment chaos if allowed in user mode, so they are reserved to the kernel. An attempt to run them outside of ring zero causes a general-protection exception, like when a program uses invalid memory addresses. Likewise, access to memory and I/O ports is restricted based on privilege level. But before we look at protection mechanisms, let’s see exactly how the CPU keeps track of the current privilege level, which involves the segment selectors from the previous post. Here they are:

x86 Segment Selectors
Segment Selectors – Data and Code

The full contents of data segment selectors are loaded directly by code into various segment registers such as ss (stack segment register) and ds (data segment register). This includes the contents of the Requested Privilege Level (RPL) field, whose meaning we tackle in a bit. The code segment register (cs) is, however, magical. First, its contents cannot be set directly by load instructions such as mov, but rather only by instructions that alter the flow of program execution, like call. Second, and importantly for us, instead of an RPL field that can be set by code, cs has a Current Privilege Level (CPL) field maintained by the CPU itself. This 2-bit CPL field in the code segment register is always equal to the CPU’s current privilege level. The Intel docs wobble a little on this fact, and sometimes online documents confuse the issue, but that’s the hard and fast rule. At any time, no matter what’s going on in the CPU, a look at the CPL in cs will tell you the privilege level code is running with.
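
You can check this from user mode. A sketch in C with GCC inline assembly for x86: it copies cs into a variable and masks off the low two bits, the CPL:

    #include <stdio.h>

    int main(void) {
        unsigned short cs;
        __asm__("mov %%cs, %0" : "=r"(cs));  /* read the code segment register */
        printf("cs = 0x%04x, CPL = %d\n", cs, cs & 3);
        return 0;
    }

Run it as root or as a regular user – it makes no difference; the CPL printed is 3, because all user code executes in the outermost ring.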

Keep in mind that the CPU privilege level has nothing to do with operating system users. Whether you’re root, Administrator, guest, or a regular user, it does not matter. All user code runs in ring 3 and all kernel code runs in ring 0, regardless of the OS user on whose behalf the code operates. Sometimes certain kernel tasks can be pushed to user mode, for example user-mode device drivers in Windows Vista, but these are just special processes doing a job for the kernel and can usually be killed without major consequences.

Due to restricted access to memory and I/O ports, user mode can do almost nothing to the outside world without calling on the kernel. It can’t open files, send network packets, print to the screen, or allocate memory. User processes run in a severely limited sandbox set up by the gods of ring zero. That’s why it’s impossible, by design, for a process to leak memory beyond its existence or leave open files after it exits. All of the data structures that control such things – memory, open files, etc. – cannot be touched directly by user code; once a process finishes, the sandbox is torn down by the kernel. That’s why our servers can have 600 days of uptime – as long as the hardware and the kernel don’t crap out, stuff can run forever. This is also why Windows 95 / 98 crashed so much: it’s not because “M$ sucks” but because important data structures were left accessible to user mode for compatibility reasons. It was probably a good trade-off at the time, albeit at high cost.

The CPU protects memory at two crucial points: when a segment selector is loaded and when a page of memory is accessed with a linear address. Protection thus mirrors memory address translation where both segmentation and paging are involved. When a data segment selector is being loaded, the check below takes place:

x86 Segment Protection

Since a higher number means less privilege, MAX() above picks the least privileged of CPL and RPL, and compares it to the descriptor privilege level (DPL). If the DPL is higher or equal, then access is allowed. The idea behind RPL is to allow kernel code to load a segment using lowered privilege. For example, you could use an RPL of 3 to ensure that a given operation uses segments accessible to user mode. The exception is the stack segment register ss, for which all three of CPL, RPL, and DPL must match exactly.
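
As a sketch in C-like form – my own rendering of the rule, not lifted from any implementation – the data segment check reads:

    /* Higher numbers mean LESS privilege, so MAX() picks the weaker
     * of the current (CPL) and requested (RPL) privilege levels.   */
    static int data_segment_load_ok(int cpl, int rpl, int dpl) {
        int effective = (cpl > rpl) ? cpl : rpl;   /* MAX(CPL, RPL)     */
        return dpl >= effective;                   /* allowed if DPL >= */
    }

    /* The stack segment register is stricter: all three must match. */
    static int stack_segment_load_ok(int cpl, int rpl, int dpl) {
        return cpl == rpl && rpl == dpl;
    }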

In truth, segment protection scarcely matters because modern kernels use a flat address space where the user-mode segments can reach the entire linear address space. Useful memory protection is done in the paging unit when a linear address is converted into a physical address. Each memory page is a block of bytes described by a page table entry containing two fields related to protection: a supervisor flag and a read/write flag. The supervisor flag is the primary x86 memory protection mechanism used by kernels. When it is on, the page cannot be accessed from ring 3. While the read/write flag isn’t as important for enforcing privilege, it’s still useful. When a process is loaded, pages storing binary images (code) are marked as read only, thereby catching some pointer errors if a program attempts to write to these pages. This flag is also used to implement copy on write when a process is forked in Unix. Upon forking, the parent’s pages are marked read only and shared with the forked child. If either process attempts to write to the page, the processor triggers a fault and the kernel knows to duplicate the page and mark it read/write for the writing process.
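
Here’s a simplified C rendering of the flags in a 32-bit x86 page table entry – just the bits discussed above:

    #include <stdint.h>

    #define PTE_PRESENT  (1u << 0)  /* page is mapped                        */
    #define PTE_WRITABLE (1u << 1)  /* read/write flag: 0 means read only    */
    #define PTE_USER     (1u << 2)  /* 0 = supervisor only: no ring 3 access */

    /* Bits 12-31 hold the physical address of the 4 KB page frame. */
    static inline uint32_t pte_frame(uint32_t pte) {
        return pte & 0xFFFFF000u;
    }

A present, writable, kernel-only page is then frame | PTE_PRESENT | PTE_WRITABLE, with PTE_USER left clear so that any access from ring 3 faults.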

Finally, we need a way for the CPU to switch between privilege levels. If ring 3 code could transfer control to arbitrary spots in the kernel, it would be easy to subvert the operating system by jumping into the wrong (right?) places. A controlled transfer is necessary. This is accomplished via gate descriptors and via the sysenter instruction. A gate descriptor is a segment descriptor of type system, and comes in four sub-types: call-gate descriptor, interrupt-gate descriptor, trap-gate descriptor, and task-gate descriptor. Call gates provide a kernel entry point that can be used with ordinary call and jmp instructions, but they aren’t used much so I’ll ignore them. Task gates aren’t so hot either (in Linux, they are only used in double faults, which are caused by either kernel or hardware problems).

That leaves two juicier ones: interrupt and trap gates, which are used to handle hardware interrupts (e.g., keyboard, timer, disks) and exceptions (e.g., page faults, divide by zero). I’ll refer to both as an “interrupt”. These gate descriptors are stored in the Interrupt Descriptor Table (IDT). Each interrupt is assigned a number between 0 and 255 called a vector, which the processor uses as an index into the IDT to find the right gate descriptor when handling the interrupt. Interrupt and trap gates are nearly identical. Their format is shown below along with the privilege checks enforced when an interrupt happens. I filled in some values for the Linux kernel to make things concrete.

Interrupt Descriptor with Privilege Check

Both the DPL and the segment selector in the gate regulate access, while segment selector plus offset together nail down an entry point for the interrupt handler code. Kernels normally use the segment selector for the kernel code segment in these gate descriptors. An interrupt can never transfer control from a more-privileged to a less-privileged ring. Privilege must either stay the same (when the kernel itself is interrupted) or be elevated (when user-mode code is interrupted). In either case, the resulting CPL will be equal to the DPL of the destination code segment; if the CPL changes, a stack switch also occurs. If an interrupt is triggered by code via an instruction like int n, one more check takes place: the gate DPL must be at the same or lower privilege than the CPL. This prevents user code from triggering random interrupts. If these checks fail – you guessed it – a general-protection exception happens. All Linux interrupt handlers end up running in ring zero.
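
In the same sketch style as before, the extra check for software-triggered interrupts boils down to:

    /* For a software "int n": the gate must be at the same or lower
     * privilege than the caller (numerically, DPL >= CPL), otherwise
     * the CPU raises a general-protection exception.                */
    static int soft_int_allowed(int cpl, int gate_dpl) {
        return gate_dpl >= cpl;
    }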

During initialization, the Linux kernel first sets up an IDT in setup_idt() that ignores all interrupts. It then uses functions in include/asm-x86/desc.h to flesh out common IDT entries in arch/x86/kernel/traps_32.c. In Linux, a gate descriptor with “system” in its name is accessible from user mode and its set function uses a DPL of 3. A “system gate” is an Intel trap gate accessible to user mode. Otherwise, the terminology matches up. Hardware interrupt gates are not set here however, but instead in the appropriate drivers.

Three gates are accessible to user mode: vectors 3 and 4 are used for debugging and checking for numeric overflows, respectively. Then a system gate is set up for the SYSCALL_VECTOR, which is 0x80 for the x86 architecture. This was the mechanism for a process to transfer control to the kernel, to make a system call, and back in the day I applied for an “int 0x80” vanity license plate :). Starting with the Pentium Pro, the sysenter instruction was introduced as a faster way to make system calls. It relies on special-purpose CPU registers that store the code segment, entry point, and other tidbits for the kernel system call handler. When sysenter is executed the CPU does no privilege checking, going immediately into CPL 0 and loading new values into the registers for code and stack (cs, eip, ss, and esp). Only ring zero can load the sysenter setup registers, which is done in enable_sep_cpu().
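
To make the int 0x80 path concrete, here is a minimal sketch in C with GCC inline assembly that issues the write system call directly on 32-bit Linux (syscall number 4 in the i386 ABI; build with gcc -m32 on a 64-bit box):

    int main(void) {
        const char msg[] = "hello via int 0x80\n";
        long ret;
        /* eax = syscall number, ebx/ecx/edx = arguments */
        __asm__ volatile("int $0x80"
                         : "=a"(ret)
                         : "0"(4L),             /* __NR_write */
                           "b"(1L),             /* fd: stdout */
                           "c"(msg),
                           "d"(sizeof msg - 1)
                         : "memory");
        return 0;
    }

The int $0x80 instruction goes through the system gate set up above: the CPU checks the gate DPL of 3 against the CPL, finds the transfer legal, and lands in the kernel’s system call handler at ring 0.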

Finally, when it’s time to return to ring 3, the kernel issues an iret or sysexit instruction to return from interrupts and system calls, respectively, thus leaving ring 0 and resuming execution of user code with a CPL of 3. Vim tells me I’m approaching 1,900 words, so I/O port protection is for another day. This concludes our tour of x86 rings and protection. Thanks for reading!

Memory Translation and Segmentation

This post is the first in a series about memory and protection in Intel-compatible (x86) computers, going further down the path of how kernels work. As in the boot series, I’ll link to Linux kernel sources but give Windows examples as well (sorry, I’m ignorant about the BSDs and the Mac, but most of the discussion applies). Let me know what I screw up.

In the chipsets that power Intel motherboards, memory is accessed by the CPU via the front side bus, which connects it to the northbridge chip. The memory addresses exchanged in the front side bus are physical memory addresses, raw numbers from zero to the top of the available physical memory. These numbers are mapped to physical RAM sticks by the northbridge. Physical addresses are concrete and final – no translation, no paging, no privilege checks – you put them on the bus and that’s that. Within the CPU, however, programs use logical memory addresses, which must be translated into physical addresses before memory access can take place. Conceptually address translation looks like this:

Memory address translation
Memory address translation in x86 CPUs with paging enabled

This is not a physical diagram, only a depiction of the address translation process, specifically for when the CPU has paging enabled. If you turn off paging, the output from the segmentation unit is already a physical address; in 16-bit real mode that is always the case. Translation starts when the CPU executes an instruction that refers to a memory address. The first step is translating that logical address into a linear address. But why go through this step instead of having software use linear (or physical) addresses directly? For roughly the same reason humans have an appendix whose primary function is getting infected. It’s a wrinkle of evolution. To really make sense of x86 segmentation we need to go back to 1978.

The original 8086 had 16-bit registers and its instructions used mostly 8-bit or 16-bit operands. This allowed code to work with 2¹⁶ bytes, or 64K of memory, yet Intel engineers were keen on letting the CPU use more memory without expanding the size of registers and instructions. So they introduced segment registers as a means to tell the CPU which 64K chunk of memory a program’s instructions were going to work on. It was a reasonable solution: first you load a segment register, effectively saying “here, I want to work on the memory chunk starting at X”; afterwards, 16-bit memory addresses used by your code are interpreted as offsets into your chunk, or segment. There were four segment registers: one for the stack (ss), one for program code (cs), and two for data (ds, es). Most programs were small enough back then to fit their whole stack, code, and data each in a 64K segment, so segmentation was often transparent.

Nowadays segmentation is still present and is always enabled in x86 processors. Each instruction that touches memory implicitly uses a segment register. For example, a jump instruction uses the code segment register (cs) whereas a stack push instruction uses the stack segment register (ss). In most cases you can explicitly override the segment register used by an instruction. Segment registers store 16-bit segment selectors; they can be loaded directly with instructions like MOV. The sole exception is cs, which can only be changed by instructions that affect the flow of execution, like CALL or JMP. Though segmentation is always on, it works differently in real mode versus protected mode.

In real mode, such as during early boot, the segment selector is a 16-bit number specifying the physical memory address for the start of a segment. This number must somehow be scaled, otherwise it would also be limited to 64K, defeating the purpose of segmentation. For example, the CPU could use the segment selector as the 16 most significant bits of the physical memory address (by shifting it 16 bits to the left, which is equivalent to multiplying by 2¹⁶). This simple rule would enable segments to address 4 gigs of memory in 64K chunks, but it would increase chip packaging costs by requiring more physical address pins in the processor. So Intel made the decision to multiply the segment selector by only 2⁴ (or 16), which in a single stroke confined memory to about 1MB and unduly complicated translation. Here’s an example showing a jump instruction where cs contains 0x1000:

Real mode segmentation

Real mode segments can start anywhere from 0 all the way to 0xFFFF0 (16 bytes short of 1 MB) in 16-byte increments. To these values you add a 16-bit offset (the logical address) between 0 and 0xFFFF. It follows that there are multiple segment/offset combinations pointing to the same memory location, and that physical addresses can fall above 1MB if your segment is high enough (see the infamous A20 line). Also, when writing C code in real mode, a far pointer is a pointer that contains both the segment selector and the logical address, which allows it to address 1MB of memory. Far indeed. As programs started getting bigger and outgrowing 64K segments, segmentation and its strange ways complicated development for the x86 platform. This may all sound quaintly odd now, but it has driven programmers into the wretched depths of madness.
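
The real mode translation rule fits in one line of C. A sketch, reusing the example values from the diagram:

    #include <stdio.h>
    #include <stdint.h>

    /* Real mode: physical = (segment << 4) + offset */
    static uint32_t real_mode_phys(uint16_t seg, uint16_t off) {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void) {
        printf("0x%05x\n", real_mode_phys(0x1000, 0x0000)); /* 0x10000      */
        printf("0x%05x\n", real_mode_phys(0x0FF0, 0x0100)); /* also 0x10000 */
        return 0;
    }

The second line shows the overlap in action: two different segment/offset pairs naming the same physical byte.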

In 32-bit protected mode, a segment selector is no longer a raw number, but instead it contains an index into a table of segment descriptors. The table is simply an array containing 8-byte records, where each record describes one segment and looks thus:

Segment descriptor

There are three types of segments: code, data, and system. For brevity, only the common features in the descriptor are shown here. The base address is a 32-bit linear address pointing to the beginning of the segment, while the limit specifies how big the segment is. Adding the base address to a logical memory address yields a linear address. DPL is the descriptor privilege level; it is a number from 0 (most privileged, kernel mode) to 3 (least privileged, user mode) that controls access to the segment.
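
For the curious, here is a simplified C rendering of the 8-byte descriptor; note how the base and limit are scattered across the record, a quirk kept for backward compatibility (the bitfield layout here assumes GCC on x86, where fields fill from the low bits):

    #include <stdint.h>

    struct segment_descriptor {
        uint16_t limit_lo;      /* limit bits 0-15                 */
        uint16_t base_lo;       /* base  bits 0-15                 */
        uint8_t  base_mid;      /* base  bits 16-23                */
        uint8_t  type     : 4;  /* code, data, or system sub-type  */
        uint8_t  s        : 1;  /* 1 = code/data, 0 = system       */
        uint8_t  dpl      : 2;  /* descriptor privilege level, 0-3 */
        uint8_t  present  : 1;
        uint8_t  limit_hi : 4;  /* limit bits 16-19                */
        uint8_t  flags    : 4;  /* granularity, default size, etc. */
        uint8_t  base_hi;       /* base  bits 24-31                */
    } __attribute__((packed));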

These segment descriptors are stored in two tables: the Global Descriptor Table (GDT) and the Local Descriptor Table (LDT). Each CPU (or core) in a computer contains a register called gdtr which stores the linear memory address of the first byte in the GDT. To choose a segment, you must load a segment register with a segment selector in the following format:

Segment Selector
Segment Selector

The TI bit is 0 for the GDT and 1 for the LDT, while the index specifies the desired segment selector within the table. We’ll deal with RPL, Requested Privilege Level, later on. Now, come to think of it, when the CPU is in 32-bit mode registers and instructions can address the entire linear address space anyway, so there’s really no need to give them a push with a base address or other shenanigan. So why not set the base address to zero and let logical addresses coincide with linear addresses? Intel docs call this “flat model” and it’s exactly what modern x86 kernels do (they use the basic flat model, specifically). Basic flat model is equivalent to disabling segmentation when it comes to translating memory addresses. So in all its glory, here’s the jump example running in 32-bit protected mode, with real-world values for a Linux user-mode app:

Protected Mode Segmentation
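
Decoding a selector is simple bit fiddling. A quick sketch using 0x73, the user-mode code segment selector on 32-bit Linux kernels of this vintage:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint16_t sel = 0x73;                     /* Linux user code segment */
        printf("index = %u\n", sel >> 3);        /* 14: GDT entry           */
        printf("TI    = %u\n", (sel >> 2) & 1);  /* 0: GDT, not LDT         */
        printf("RPL   = %u\n", sel & 3);         /* 3: user privilege       */
        return 0;
    }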

The contents of a segment descriptor are cached once they are accessed, so there’s no need to actually read the GDT in subsequent accesses, which would kill performance. Each segment register has a hidden part to store the cached descriptor that corresponds to its segment selector. For more details, including more info on the LDT, see chapter 3 of the Intel System Programming Guide Volume 3a. Volumes 2a and 2b, which cover every x86 instruction, also shed light on the various types of x86 addressing operands – 16-bit, 16-bit with segment selector (which can be used by far pointers), 32-bit, etc.

In Linux, only 3 segment descriptors are used during boot. They are defined with the GDT_ENTRY macro and stored in the boot_gdt array. Two of the segments are flat, addressing the entire 32-bit space: a code segment loaded into cs and a data segment loaded into the other segment registers. The third segment is a system segment called the Task State Segment. After boot, each CPU has its own copy of the GDT. They are all nearly identical, but a few entries change depending on the running process. You can see the layout of the Linux GDT in segment.h and its instantiation is here. There are four primary GDT entries: two flat ones for code and data in kernel mode, and another two for user mode. When looking at the Linux GDT, notice the holes inserted on purpose to align data with CPU cache lines – an artifact of the von Neumann bottleneck that has become a plague. Finally, the classic “Segmentation fault” Unix error message is not due to x86-style segments, but rather invalid memory addresses normally detected by the paging unit – alas, topic for an upcoming post.

Intel deftly worked around their original segmentation kludge, offering a flexible way for us to choose whether to segment or go flat. Since coinciding logical and linear addresses are simpler to handle, they became standard, such that 64-bit mode now enforces a flat linear address space. But even in flat mode segments are still crucial for x86 protection, the mechanism that defends the kernel from user-mode processes and every process from each other. It’s a dog eat dog world out there! In the next post, we’ll take a peek at protection levels and how segments implement them.

Thanks to Nate Lawson for a correction in this post.

Lucky to Be a Programmer

For the past few weeks I’ve been working with a fellow developer on a project that required an all-out programming effort. It’s done now, so we’re back to a regular schedule, but when people hear about the crazy hours they often say they’re sorry. They really shouldn’t be. I would never do this often, or for long periods, or without proper compensation if done for an employer, but the truth is that these programming blitzkriegs are some of my favorite periods in life. Under the right conditions, writing software is so intensely pleasurable it should be illegal.

Many programmers relate to this, but others are taken aback when they hear it. I think it’s because institutions are so good at squeezing the fun out of everything. It’s appalling for example how schools can take the most vibrant topics and mangle them into formulaic, mediocre slog. And so it is for programming. Many corporations turn an inherently rewarding experience into something people just barely stomach in exchange for a paycheck.

That’s too bad. Few things are better than spending time in a creative haze, consumed by ideas, watching your work come to life, going to bed eager to wake up quickly and go try things out. I am not suggesting that excessive hours are needed or even advisable; a sane schedule is a must except for occasional binges. The point is that programming is an intense creative pleasure, a perfect mixture of puzzles, writing, and craftsmanship.

Programming offers intriguing challenges and ample room for invention. Some problems are investigative and reductionist: Why is this code running slowly? What on earth is causing that bug? Others are constructive, like devising algorithms and architectures. All of them are a delight if you enjoy analytical work, immersed in a world full of beasts like malware, routers, caches, protocols, databases, graphs, and numbers.

This analytical side is what most people associate with programming. It does make it interesting, like a complex strategy game. But in most software the primary challenge is communication: with fellow programmers via code and with users via interfaces. By and large, writing code is more essay than puzzle. It is shaping your ideas and schemes into a coherent body; it is seeking clarity, simplicity and conciseness. Both code and interfaces abound with the simple joy of creation.

Another source of pleasure is that under certain conditions, beauty arises in programming. It may sound like bullshit but it’s real, the kind of thing that makes your day better. Take for example Euclid’s 2-line proof that prime numbers are infinite. I think many would find it beautiful – so succinct and such a fascinating result. This is the beauty of math, cold and austere, and it pervades software. It is in clever algorithms like quicksort, in the sources of kernels and compilers, in elegant exploits and in the tricks we pull to solve everyday problems. When you see these solutions, be it a famous algorithm or a mundane trick, you smile and think “how smart” and it feels good. How noble in reason!
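
For the record, the proof really is that short: if p₁, …, pₙ were all the primes, then N = p₁p₂⋯pₙ + 1 leaves remainder 1 when divided by each pᵢ, so any prime factor of N is a prime missing from the list. Contradiction.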

A non-math sort of beauty also exists in code, analogous to eloquence in discourse. It’s present in well-factored software that does a lot with little code, in short and crisp methods, in well-done architectures. Some languages make this hard and not all programmers produce it, but it’s a joy to read and work on such code. If you’re working in an expressive language with coworkers whose code you enjoy, it happens often enough to brighten things up.

Now for craftsmanship. In a sense software is abstract – where does program behavior exist but in our minds? Yet we call it building software for a reason. Programs are shaped feature by feature, architectures start out as scaffolds and grow, user interfaces come together, bugs are fixed and hotspots are optimized to make things run fast. Software provides a deeply satisfying sense of craft. We build stuff out of pure ideas and then get to watch it working to solve real problems and make people a little better off. Or far better off, as the case may be.

Take Biology. Despite nearly 400 years of scientific revolution, Biology has been unable to deliver on crucial problems like effective cures for viral infections or cancer. Some of our best progress, like antibiotics, has been due to chance and random experimentation. You start a clinical trial for a hypertension drug and suddenly – whoah – all your subjects have hard-ons! Viagra is born. To be sure, chance plays a role in all endeavours, but Physics and Chemistry have a comprehensive theoretical basis powering systematic improvements, whereas Biology has been largely confined to kludges. Wanna treat cancer? Here, blast the patient with radiation and poison and hopefully the cancer will die first. They’re brilliant kludges, and I’m happy to have them, but it’s a far cry from the precision we’ve had elsewhere.

Software is changing that. Just barely 50 years ago the shape of DNA was being discovered, but now anyone can browse and download hundreds of complete genome sequences. Or look up thousands of genes (DLEC1 for a random example), complete with nucleotide sequence, amino-acid sequence for expressed proteins, literature mentioning the gene, you name it! Or you can search vast gene and protein databases for nucleotide or amino-acid sequences, perhaps after sequencing something in ever-cheaper devices, and get a comprehensive report on the match. It doesn’t matter if they’re exact, because the algorithm in BLAST, the standard sequence search tool, delivers partial matches across databases and species, scored by match likelihood. These advances will enable massive breakthroughs in medicine. Biology is entering a new era, like Physics in the 18th century, propelled by software.

Yea, sure, biologists have a minor role :P, but we in computing increasingly power major developments in science, culture, and business. When a third-world kid looks up a Wikipedia entry, it’s our work too! We wrote the RFCs and the networking stacks, the browser and MediaWiki, the OSes and the HTTP servers. Not to mention a lot of the Wikipedia entries, but since a few were on company time I’ll leave them aside. The influence of technologists goes beyond bits and bytes: it was a programmer who invented wikis and our community started blogs. A. J. Liebling pointed out correctly that “freedom of the press is guaranteed only to those who own one”. It’s a pity he’s not around to watch our creations break down the stifling conformity and cozy subservience of professional journalism. Less glamorously but to great benefit, our applications have delivered steep productivity gains to businesses across the economy. These are a few examples in a long list.

Three years ago, when I finished my undergrad (after being a programmer for many years), I was about to enter med school. At that point, a couple of negative experiences had me somewhat burned out on computer work. I’m happy I stuck with it. I’m still interested in biomedical research, but if I were to get involved I’d rather come in from the software angle, because frankly it’s too much fun to pass on. My mom thinks I’m a typist but oh well.

If you find yourself stuck in a place that’s killing your innate passion for technology, by all means, move the hell on! Don’t stay put while your enthusiasm is slowly drained. It’s hard to find motivated people to hire so you’ve got a major asset already; there are plenty of employers – and companies to be started – that will better suit you. For people who think they might like programming, your mileage may vary, but I highly recommend it as a career. Not only is the outlook bullish on the job front, but as the role of software grows in society we’ll see more exciting and beneficial changes delivered by technology. I’m delighted to be along for the ride as constantly my art and craft I try to master.

PS: thanks for putting up with the irregular posting schedule. The plan is to stick to regular posting now that things have calmed down. And if you like the song, download the mp3 because the YouTube audio doesn’t do it justice.

The Kernel Boot Process

The previous post explained how computers boot up right up to the point where the boot loader, after stuffing the kernel image into memory, is about to jump into the kernel entry point. This last post about booting takes a look at the guts of the kernel to see how an operating system starts life. Since I have an empirical bent I’ll link heavily to the sources for Linux kernel 2.6.25.6 at the Linux Cross Reference. The sources are very readable if you are familiar with C-like syntax; even if you miss some details you can get the gist of what’s happening. The main obstacle is the lack of context around some of the code, such as when or why it runs or the underlying features of the machine. I hope to provide a bit of that context. Due to brevity (hah!) a lot of fun stuff - like interrupts and memory - gets only a nod for now. The post ends with the highlights for the Windows boot.

At this point in the Intel x86 boot story the processor is running in real-mode, is able to address 1 MB of memory, and RAM looks like this for a modern Linux system:

RAM contents after the boot loader is done

The kernel image has been loaded to memory by the boot loader using the BIOS disk I/O services. This image is an exact copy of the file on your hard drive that contains the kernel, e.g. /boot/vmlinuz-2.6.22-14-server. The image is split into two pieces: a small part containing the real-mode kernel code is loaded below the 640K barrier; the bulk of the kernel, which runs in protected mode, is loaded after the first megabyte of memory.

The action starts in the real-mode kernel header pictured above. This region of memory is used to implement the Linux boot protocol between the boot loader and the kernel. Some of the values there are read by the boot loader while doing its work. These include amenities such as a human-readable string containing the kernel version, but also crucial information like the size of the real-mode kernel piece. The boot loader also writes values to this region, such as the memory address for the command-line parameters given by the user in the boot menu. Once the boot loader is finished it has filled in all of the parameters required by the kernel header. It’s then time to jump into the kernel entry point. The diagram below shows the code sequence for the kernel initialization, along with source directories, files, and line numbers:
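To make the header concrete, here is a sketch of a few of its fields, paraphrased from the kernel’s boot protocol documentation (Documentation/i386/boot.txt). Most fields are elided, so the offsets live in the comments rather than in the struct layout itself:

    /* A few fields of the real-mode kernel header (Linux boot protocol).
     * Offsets are from the start of the real-mode image. */
    struct setup_header_excerpt {
            unsigned char  setup_sects;     /* 0x1f1: size of the real-mode
                                               code, in 512-byte sectors */
            unsigned int   header;          /* 0x202: magic number "HdrS" */
            unsigned short kernel_version;  /* 0x20e: offset of the readable
                                               kernel version string */
            unsigned int   cmd_line_ptr;    /* 0x228: filled in by the boot
                                               loader: address of the
                                               command-line parameters */
    };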

Architecture-specific Linux Kernel Initialization

The early kernel start-up for the Intel architecture is in file arch/x86/boot/header.S. It’s in assembly language, which is rare for the kernel at large but common for boot code. The start of this file actually contains boot sector code, a leftover from the days when Linux could work without a boot loader. Nowadays this boot sector, if executed, only prints a “bugger_off_msg” to the user and reboots. Modern boot loaders ignore this legacy code. After the boot sector code we have the first 15 bytes of the real-mode kernel header; these two pieces together add up to 512 bytes, the size of a typical disk sector on Intel hardware.

After these 512 bytes, at offset 0x200, we find the very first instruction that runs as part of the Linux kernel: the real-mode entry point. It’s in header.S:110 and it is a 2-byte jump written directly in machine code as 0x3aeb. You can verify this by running hexdump on your kernel image and looking at the bytes at that offset – just a sanity check to make sure it’s not all a dream. The boot loader jumps to this location when it is finished; the instruction there in turn jumps to header.S:229, where we have a regular assembly routine called start_of_setup. This short routine sets up a stack, zeroes the bss segment (the area that contains static variables, so they start with zero values) for the real-mode kernel and then jumps to good old C code at arch/x86/boot/main.c:122.
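If you’d rather not squint at hexdump output, a throwaway program can run the same sanity check; the kernel path below is only an example, so substitute whatever image sits in your /boot:

    /* Peek at the two bytes at offset 0x200 of a kernel image. Expect
     * "eb 3a": 0xeb is the short-jump opcode and 0x3a its displacement,
     * which hexdump's default 16-bit little-endian output shows as 3aeb. */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/boot/vmlinuz-2.6.22-14-server", "rb");
            unsigned char b[2];

            if (!f || fseek(f, 0x200, SEEK_SET) != 0 || fread(b, 1, 2, f) != 2) {
                    perror("reading kernel image");
                    return 1;
            }
            printf("%02x %02x\n", b[0], b[1]);
            fclose(f);
            return 0;
    }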

main() does some housekeeping like detecting memory layout, setting a video mode, etc. It then calls go_to_protected_mode(). Before the CPU can be set to protected mode, however, a few tasks must be done. There are two main issues: interrupts and memory. In real-mode the interrupt vector table for the processor is always at memory address 0, whereas in protected mode the location of the interrupt vector table is stored in a CPU register called IDTR. Meanwhile, the translation of logical memory addresses (the ones programs manipulate) to linear memory addresses (a raw number from 0 to the top of the memory) is different between real-mode and protected mode. Protected mode requires a register called GDTR to be loaded with the address of a Global Descriptor Table for memory. So go_to_protected_mode() calls setup_idt() and setup_gdt() to install a temporary interrupt descriptor table and global descriptor table.
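To give a flavor of the code, here are the two routines condensed from arch/x86/boot/pm.c. I’ve replaced the kernel’s GDT_ENTRY macro and u16/u32/u64 types with literal descriptor values and plain C types, and dropped the TSS entry and the real-mode segment-base fixup:

    /* Condensed from arch/x86/boot/pm.c: install a temporary GDT with
     * flat 4GB code and data segments, and an empty IDT. */
    struct gdt_ptr {
            unsigned short len;     /* size of the table minus one */
            unsigned int   ptr;     /* linear address of the table */
    } __attribute__((packed));

    static void setup_gdt(void)
    {
            static const unsigned long long boot_gdt[]
                            __attribute__((aligned(16))) = {
                    [2] = 0x00cf9b000000ffffULL,  /* flat 4GB code segment */
                    [3] = 0x00cf93000000ffffULL,  /* flat 4GB data segment */
            };
            struct gdt_ptr gdt;

            gdt.len = sizeof(boot_gdt) - 1;
            gdt.ptr = (unsigned int)&boot_gdt;  /* real code adds ds() << 4 */
            asm volatile("lgdtl %0" : : "m" (gdt));  /* load GDTR */
    }

    static void setup_idt(void)
    {
            static const struct gdt_ptr null_idt = { 0, 0 };
            asm volatile("lidtl %0" : : "m" (null_idt));  /* empty IDT */
    }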

We’re now ready for the plunge into protected mode, which is done by protected_mode_jump, another assembly routine. This routine enables protected mode by setting the PE bit in the CR0 CPU register. At this point we’re running with paging disabled; paging is an optional feature of the processor, even in protected mode, and there’s no need for it yet. What’s important is that we’re no longer confined to the 640K barrier and can now address up to 4GB of RAM. The routine then calls the 32-bit kernel entry point, which is startup_32 for compressed kernels. This routine does some basic register initializations and calls decompress_kernel(), a C function to do the actual decompression.
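The mode switch itself hinges on a single bit. The real protected_mode_jump lives in arch/x86/boot/pmjump.S and is written in assembly; this C-flavored sketch (the function name is mine) shows the gist:

    /* Illustrative only: flip CR0.PE to enter protected mode. The real
     * code is assembly and must follow the CR0 write with a far jump,
     * which loads CS with a protected-mode selector and flushes any
     * prefetched real-mode instructions. */
    static void enable_protected_mode(void)
    {
            unsigned long cr0;

            asm volatile("movl %%cr0, %0" : "=r" (cr0));
            cr0 |= 0x1;                                   /* CR0.PE bit */
            asm volatile("movl %0, %%cr0" : : "r" (cr0));
            /* ...far jump to the 32-bit entry point happens here... */
    }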

decompress_kernel() prints the familiar “Decompressing Linux…” message. Decompression happens in-place and once it’s finished the uncompressed kernel image has overwritten the compressed one pictured in the first diagram. Hence the uncompressed contents also start at 1MB. decompress_kernel() then prints “done.” and the comforting “Booting the kernel.” By “Booting” it means a jump to the final entry point in this whole story, given to Linus by God himself atop Mount Halti, which is the protected-mode kernel entry point at the start of the second megabyte of RAM (0x100000). That sacred location contains a routine called, uh, startup_32. But this one is in a different directory, you see.

The second incarnation of startup_32 is also an assembly routine, but it contains 32-bit mode initializations. It clears the bss segment for the protected-mode kernel (which is the true kernel that will now run until the machine reboots or shuts down), sets up the final global descriptor table for memory, builds page tables so that paging can be turned on, enables paging, initializes a stack, creates the final interrupt descriptor table, and finally jumps to the architecture-independent kernel start-up, start_kernel(). The diagram below shows the code flow for the last leg of the boot:

Architecture-independent Linux Kernel Initialization

start_kernel() looks more like typical kernel code, which is nearly all C and machine independent. The function is a long list of calls that initialize the various kernel subsystems and data structures. These include the scheduler, memory zones, time keeping, and so on. start_kernel() then calls rest_init(), at which point things are almost all working. rest_init() creates a kernel thread passing another function, kernel_init(), as the entry point. rest_init() then calls schedule() to kickstart task scheduling and goes to sleep by calling cpu_idle(), which is the idle thread for the Linux kernel. cpu_idle() runs forever and so does process zero, which hosts it. Whenever there is work to do – a runnable process – process zero gets booted out of the CPU, only to return when no runnable processes are available.
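rest_init() is short enough to show nearly whole; here it is trimmed from init/main.c in 2.6.25, with the locking, preemption fiddling, and kthreadd thread creation elided:

    /* Trimmed from rest_init() in init/main.c (2.6.25). */
    static void rest_init(void)
    {
            /* Spawn the thread that will become process 1, running
             * kernel_init() as its entry point. */
            kernel_thread(kernel_init, NULL, CLONE_FS | CLONE_SIGHAND);

            schedule();   /* kickstart scheduling; the new thread runs */
            cpu_idle();   /* process 0 settles into the idle loop forever */
    }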

But here’s the kicker for us. This idle loop is the end of the long thread we have followed since boot; it’s the final descendant of the very first jump executed by the processor after power up. All of this mess - from reset vector to BIOS to MBR to boot loader to real-mode kernel to protected-mode kernel - leads right here; jump by jump by jump, it ends in the idle loop for the boot processor, cpu_idle(). Which is really kind of cool. But this can’t be the whole story, otherwise the computer would do no work.

At this point, the kernel thread started previously is ready to kick in, displacing process 0 and its idle thread. And so it does, at which point kernel_init() starts running since it was given as the thread entry point. kernel_init() is responsible for initializing the remaining CPUs in the system, which have been halted since boot. All of the code we’ve seen so far has been executed on a single CPU, called the boot processor. As the other CPUs, called application processors, are started they come up in real-mode and must run through several of the same initializations. Many of the code paths are common, as you can see in the code for startup_32, but there are slight forks taken by the late-coming application processors. Finally, kernel_init() calls init_post(), which tries to execute a user-mode process in the following order: /sbin/init, /etc/init, /bin/init, and /bin/sh. If all fail, the kernel will panic. Luckily init is usually there, and it starts running as PID 1. It checks its configuration file to figure out which processes to launch, which might include X11, console login programs, network daemons, and so on. Thus ends the boot process as yet another Linux box starts running somewhere. May your uptime be long and untroubled.
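Before we leave Linux behind, that fallback order deserves a look in the source. Condensed from init_post() in init/main.c - the real function first honors the ramdisk and any init= command-line override, and run_init_process() only returns if the exec failed:

    /* Condensed from init_post() in init/main.c. */
    static int init_post(void)
    {
            /* ...console setup and init=/ramdisk handling elided... */
            run_init_process("/sbin/init");
            run_init_process("/etc/init");
            run_init_process("/bin/init");
            run_init_process("/bin/sh");

            panic("No init found.  Try passing init= option to kernel.");
    }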

The process for Windows is similar in many ways, given the common architecture. Many of the same problems are faced and similar initializations must be done. When it comes to boot one of the biggest differences is that Windows packs all of the real-mode kernel code, and some of the initial protected mode code, into the boot loader itself (C:\NTLDR). So instead of having two regions in the same kernel image, Windows uses different binary images. Plus Linux completely separates boot loader and kernel; in a way this automatically falls out of the open source process. The diagram below shows the main bits for the Windows kernel:

Windows Kernel Initialization

The Windows user-mode start-up is naturally very different. There’s no /sbin/init, but rather Csrss.exe and Winlogon.exe. Winlogon spawns Services.exe, which starts all of the Windows Services, and Lsass.exe, the local security authentication subsystem. The classic Windows login dialog runs in the context of Winlogon.

This is the end of this boot series. Thanks everyone for reading and for the feedback. I’m sorry some things got superficial treatment; I’ve gotta start somewhere, and only so much fits into blog-sized bites. But there’s always the next post: my plan is to do regular “Software Illustrated” posts like this series, along with other topics. Meanwhile, here are some resources:

  • The best and most important resource is the source code for real kernels, either Linux or one of the BSDs.
  • Intel publishes excellent Software Developer’s Manuals, which you can download for free.
  • Understanding the Linux Kernel is a good book and walks through a lot of the Linux Kernel sources. It’s getting outdated and it’s dry, but I’d still recommend it to anyone who wants to grok the kernel. Linux Device Drivers is more fun, teaches well, but is limited in scope. Finally, Patrick Moroney suggested Linux Kernel Development by Robert Love in the comments for this post. I’ve heard other positive reviews for that book, so it sounds worth checking out.
  • For Windows, the best reference by far is Windows Internals by David Solomon and Mark Russinovich, the latter of Sysinternals fame. This is a great book, well-written and thorough. The main downside is the lack of source code.

[Update: In a comment below, Nix covered a lot of ground on the initial root file system that I glossed over. Thanks to Marius Barbu for catching a mistake where I wrote “CR3” instead of GDTR]


How Computers Boot Up

The previous post described motherboards and the memory map in Intel computers to set the scene for the initial phases of boot. Booting is an involved, hacky, multi-stage affair - fun stuff. Here’s an outline of the process:

An outline of the boot sequence

Things start rolling when you press the power button on the computer (no! do tell!). Once the motherboard is powered up it initializes its own firmware – the chipset and other tidbits – and tries to get the CPU running. If things fail at this point (e.g., the CPU is busted or missing) then you will likely have a system that looks completely dead except for rotating fans. A few motherboards manage to emit beeps for an absent or faulty CPU, but the zombie-with-fans state is the most common scenario based on my experience. Sometimes USB or other devices can cause this to happen: unplugging all non-essential devices is a possible cure for a system that was working and suddenly appears dead like this. You can then single out the culprit device by elimination.

If all is well the CPU starts running. In a multi-processor or multi-core system one CPU is dynamically chosen to be the bootstrap processor (BSP) that runs all of the BIOS and kernel initialization code. The remaining processors, called application processors (AP) at this point, remain halted until later on when they are explicitly activated by the kernel. Intel CPUs have been evolving over the years but they’re fully backwards compatible, so modern CPUs can behave like the original 1978 Intel 8086, which is exactly what they do after power up. In this primitive power up state the processor is in real mode with memory paging disabled. This is like ancient MS-DOS where only 1 MB of memory can be addressed and any code can write to any place in memory – there’s no notion of protection or privilege.

Most registers in the CPU have well-defined values after power up, including the instruction pointer (EIP) which holds the memory address for the instruction being executed by the CPU. Intel CPUs use a hack whereby even though only 1MB of memory can be addressed at power up, a hidden base address (an offset, essentially) is applied to EIP so that the first instruction executed is at address 0xFFFFFFF0 (16 bytes short of the end of 4 gigs of memory and well above one megabyte). This magical address is called the reset vector and is standard for modern Intel CPUs.
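The arithmetic is worth spelling out: per the Intel manuals, after reset the hidden base of CS is 0xFFFF0000 while EIP starts at 0xFFF0, and their sum is the reset vector. A throwaway snippet, just to make the numbers concrete:

    #include <stdio.h>

    int main(void)
    {
            /* Architectural values at reset, from the Intel manuals. */
            unsigned long cs_hidden_base = 0xFFFF0000UL;
            unsigned long initial_eip    = 0x0000FFF0UL;

            printf("reset vector: 0x%lX\n", cs_hidden_base + initial_eip);
            return 0;   /* prints 0xFFFFFFF0 */
    }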

The motherboard ensures that the instruction at the reset vector is a jump to the memory location mapped to the BIOS entry point. This jump implicitly clears the hidden base address present at power up. All of these memory locations have the right contents needed by the CPU thanks to the memory map kept by the chipset. They are all mapped to flash memory containing the BIOS since at this point the RAM modules have random crap in them. An example of the relevant memory regions is shown below:

Important memory regions during boot

The CPU then starts executing BIOS code, which initializes some of the hardware in the machine. Afterwards the BIOS kicks off the Power-on Self Test (POST) which tests various components in the computer. Lack of a working video card fails the POST and causes the BIOS to halt and emit beeps to let you know what’s wrong, since messages on the screen aren’t an option. A working video card takes us to a stage where the computer looks alive: manufacturer logos are printed, memory starts to be tested, angels blare their horns. Other POST failures, like a missing keyboard, lead to halts with an error message on the screen. The POST involves a mixture of testing and initialization, including sorting out all the resources – interrupts, memory ranges, I/O ports – for PCI devices. Modern BIOSes that follow the Advanced Configuration and Power Interface (ACPI) specification build a number of data tables that describe the devices in the computer; these tables are later used by the kernel.

After the POST the BIOS wants to boot up an operating system, which must be found somewhere: hard drives, CD-ROM drives, floppy disks, etc. The actual order in which the BIOS seeks a boot device is user configurable. If there is no suitable boot device the BIOS halts with a complaint like “Non-System Disk or Disk Error.” A dead hard drive might present with this symptom. Hopefully this doesn’t happen and the BIOS finds a working disk allowing the boot to proceed.

The BIOS now reads the first 512-byte sector (sector zero) of the hard disk. This is called the Master Boot Record (MBR) and it normally contains two vital components: a tiny OS-specific bootstrapping program at the start of the MBR followed by a partition table for the disk. The BIOS however does not care about any of this: it simply loads the contents of the MBR into memory location 0x7c00 and jumps to that location to start executing whatever code is in the MBR.

Master Boot Record

The specific code in the MBR could be a Windows MBR loader, code from Linux loaders such as LILO or GRUB, or even a virus. In contrast the partition table is standardized: it is a 64-byte area with four 16-byte entries describing how the disk has been divided up (so you can run multiple operating systems or have separate volumes in the same disk). Traditionally Microsoft MBR code takes a look at the partition table, finds the (only) partition marked as active, loads the boot sector for that partition, and runs that code. The boot sector is the first sector of a partition, as opposed to the first sector for the whole disk. If something is wrong with the partition table you would get messages like “Invalid Partition Table” or “Missing Operating System.” This message does not come from the BIOS but rather from the MBR code loaded from disk. Thus the specific message depends on the MBR flavor.
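For reference, the whole 512-byte sector can be expressed as a packed C struct; the field names below are mine, but the layout is the standard one just described:

    /* The classic MBR layout; 16 bytes x 4 entries = the 64-byte table,
     * and 446 + 64 + 2 = 512 bytes total. */
    struct mbr_partition_entry {
            unsigned char  status;        /* 0x80 = active (bootable) */
            unsigned char  chs_first[3];  /* CHS address of first sector */
            unsigned char  type;          /* partition type ID */
            unsigned char  chs_last[3];   /* CHS address of last sector */
            unsigned int   lba_first;     /* LBA of first sector */
            unsigned int   sector_count;  /* partition length in sectors */
    } __attribute__((packed));

    struct mbr {
            unsigned char              bootstrap[446];   /* loader code */
            struct mbr_partition_entry partitions[4];    /* partition table */
            unsigned short             signature;        /* magic 0xAA55 */
    } __attribute__((packed));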

Boot loading has gotten more sophisticated and flexible over time. The Linux boot loaders LILO and GRUB can handle a wide variety of operating systems, file systems, and boot configurations. Their MBR code does not necessarily follow the “boot the active partition” approach described above. But functionally the process goes like this:

  1. The MBR itself contains the first stage of the boot loader. GRUB calls this stage 1.
  2. Due to its tiny size, the code in the MBR does just enough to load another sector from disk that contains additional bootstrap code. This sector might be the boot sector for a partition, but could also be a sector that was hard-coded into the MBR code when the MBR was installed.
  3. The MBR code plus code loaded in step 2 then read a file containing the second stage of the boot loader. In GRUB this is GRUB Stage 2, and in Windows Server this is c:\NTLDR. If step 2 fails in Windows you’d get a message like “NTLDR is missing”. The stage 2 code then reads a boot configuration file (e.g., grub.conf in GRUB, boot.ini in Windows). It then presents boot choices to the user or simply goes ahead in a single-boot system.
  4. At this point the boot loader code needs to fire up a kernel. It must know enough about file systems to read the kernel from the boot partition. In Linux this means reading a file like “vmlinuz-2.6.22-14-server” containing the kernel, loading the file into memory and jumping to the kernel bootstrap code. In Windows Server 2003 some of the kernel start-up code is separate from the kernel image itself and is actually embedded into NTLDR. After performing several initializations, NTLDR loads the kernel image from file c:\Windows\System32\ntoskrnl.exe and, just as GRUB does, jumps to the kernel entry point.

There’s a complication worth mentioning (aka, I told you this thing is hacky). The image for a current Linux kernel, even compressed, does not fit into the 640K of RAM available in real mode. My vanilla Ubuntu kernel is 1.7 MB compressed. Yet the boot loader must run in real mode in order to call the BIOS routines for reading from the disk, since the kernel is clearly not available at that point. The solution is the venerable unreal mode. This is not a true processor mode (I wish the engineers at Intel were allowed to have fun like that), but rather a technique where a program switches back and forth between real mode and protected mode in order to access memory above 1MB while still using the BIOS. If you read GRUB source code, you’ll see these transitions all over the place (look under stage2/ for calls to real_to_prot and prot_to_real). At the end of this sticky process the loader has stuffed the kernel in memory, by hook or by crook, but it leaves the processor in real mode when it’s done.

We’re now at the jump from “Boot Loader” to “Early Kernel Initialization” as shown in the first diagram. That’s when things heat up as the kernel starts to unfold and set things in motion. The next post will be a guided tour through the Linux Kernel initialization with links to sources at the Linux Cross Reference. I can’t do the same for Windows ;) but I’ll point out the highlights.

[Update: cleared up discussion of NTLDR.]
