Right now is a great time to buy web hosting, with a lot of companies offering kick-ass Christmas deals. If you have been thinking of hosting a web site, check out the Web Hosting Offers over at Web Hosting Talk.
I’m looking at it myself, specifically The Planet’s deal for a dedicated dual Xeon 2.8 GHz with two 10k RPM 73GB disks, 2 GB RAM, 2500 GB bandwidth, free Red Hat Enterprise for $125 a month. I am sharing it with a friend so it’ll work out to only ~60 bucks a month.
So far I’ve grossly neglected the technical operation of this blog in a Cobbler’s Children sort of way. At the same time, interacting with readers, learning, sharing, and talking has been incredible. I can’t tell you how honored I am that you read my posts. It’s time to take it up a notch. I’ll start with the hosting, for which splitting a dedicated server seems like a great way to go.
I'm also looking for a beefier unmanaged Windows box for a client, probably a quad-core, 8-gig box with SQL Server. The Planet has one for $675 a month, including RAID 1 with two 250-GB hard drives and a SQL Server processor license.
If you have any suggestions or questions, sound off in the comments.
For 2,000 years scholars held that heavier objects fall faster than lighter ones, partly because Aristotle couldn’t be bothered to take 2 minutes to experiment. Hell, he even wrote that men have more teeth than women. Isn’t that crazy? And yet, people often rely on this kind of fact-free reasoning to arrive at conclusions about computer performance, among other things. Worse, they spend their IT budgets or sacrifice code clarity based on these flawed ideas. In reality computers are far too complex for anyone to handle performance problems by “reasoning” alone.
Think about a routine in a modern jitted language. Right off the bat you face hidden magic like type coercion, boxing, and unboxing. Even if you know the language intimately, unknowns are introduced as your code is optimized first by the compiler, then again by the JIT compiler. It is then fed to the CPU, where optimizations such as branch prediction, memory prefetching and caching have drastic performance implications. What’s worse, much of the above can and does change between different versions of compilers, runtimes, and processors. Your ability to predict what is going to happen is limited indeed.
To take another example, consider a user thinking of RAID-0 to boost performance. Whether there are any gains depends on a host of variables. What are the patterns of the I/O workload? Is it dominated by seeks and random operations, or is there a lot of streaming going on? Reads or writes? How does the kernel I/O scheduler play into it? How smart are the RAID controller and drivers? How will a journaling file system impact performance given the need for write barriers? What stripe sizes and file system block sizes will be used? There are way too many interdependent factors and interactions for speculative analysis. Even kernel developers are stumped by surprising and counterintuitive performance results.
Measurement is the only way to go. Without it, you’re in the speculation realm of performance tuning, the kingdom of fools and the deluded. But even measurement has its problems. Maybe you’re investigating a given algorithm by running it thousands of times in a row and timing the results. Is that really a valid test? By doing so you are measuring a special case where the caches are always hot. Do the conclusions hold in practice? Most importantly, do you know what percentage of time is spent in that algorithm in the normal use of the application? Is it even worth optimizing?
Or say you’ve got a fancy new RAID-0 set up. You run some benchmark that writes large globs of data to the disk and see that your sustained write throughput is twice that of a single disk. Sounds great, too bad it has no bearing on most real-world workloads. The problem with the naive timing test and the benchmark is that they are synthetic measurements. They are scarcely better than speculation.
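That kind of benchmark is easy to run, which is exactly why it's so seductive. Here's a minimal sketch of the genre, a sequential streaming write timed with dd; its throughput number says little about the seek-heavy workloads most servers actually face:

```shell
# Stream a sequential write and let dd report throughput. conv=fdatasync makes
# dd wait for the data to actually reach the disk before reporting a number.
dd if=/dev/zero of=bench.tmp bs=1M count=64 conv=fdatasync 2>&1 | tail -1
rm bench.tmp
```

A RAID-0 array will look great on this test while gaining you nothing on random I/O.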
To tackle performance you must make accurate measurements of real-world workloads and obtain quantitative data. Thus we as developers must be proficient using performance measurement tools. For code this usually means profiling so you know exactly where time is being spent as your app runs. When dealing with complex applications, you may need to build instrumentation to collect enough data. Tools like Cachegrind can help paint a fuller picture of reality.
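As a sketch of what that looks like in practice (guarded, since valgrind may not be installed; plain `ls` stands in for your real application here):

```shell
# Count instructions and cache misses for a real program run with Cachegrind.
if command -v valgrind >/dev/null; then
    valgrind --tool=cachegrind --cachegrind-out-file=cg.out ls >/dev/null 2>&1
    cg_annotate cg.out | head -20    # per-function instruction/cache-miss counts
else
    echo "valgrind not installed"
fi
```

The point is to profile the program doing its normal work, not a synthetic loop.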
For website load times and networks you might use tools like Wireshark and Fiddler, as Google did for Gmail. In databases, use SQL profiling to figure out how much CPU, reading, and writing each query consumes; these are more telling than the time a query takes to run, since the query might be blocked or starved for resources, in which case elapsed time doesn't mean much. Locks and who is blocking whom are also crucial in a database. When looking at a whole system, use your OS tools to record things such as CPU usage, disk queue length, I/Os per second, I/O completion times, swapping activity, and memory usage.
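As a minimal, Linux-specific sketch: the raw counters behind tools like vmstat and iostat live in /proc and can be sampled directly while your real workload runs:

```shell
# Sample a few whole-system counters straight from /proc (Linux).
sample_stats() {
    head -1 /proc/stat                                        # aggregate CPU time
    grep -E '^(pgpgin|pgpgout|pswpin|pswpout) ' /proc/vmstat  # paging and swap activity
    grep -E '^(MemTotal|MemFree)' /proc/meminfo               # memory usage
}
sample_stats
```

In practice you'd run the polished tools, but it's worth knowing where the data comes from.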
In sum, do what it takes to obtain good data and rely on it. I'm big on empiricism overall, but in performance it is everything. Don't trust hearsay, don't assume that what held in version 1 is still true for version 2, and question common wisdom and blog posts like this one. We all make comical mistakes; even Aristotle did. Naturally, it takes theory and analysis to decide what to measure, how to interpret it, and how to make progress. You need real-world measurement plus reasoning. Like science.
Here’s something I’d love to get from Newegg as a Christmas present to all of geekdom: the ability for users to submit a custom-built system as a product, which can then be reviewed, commented on, and bought by others as a single unit. Wouldn’t that be cool? You build your system there, and submit all the parts (sold by Newegg of course) as the package. People can review it normally, just like any other products, and post issues and real-world benchmarks for the systems as comments. Good custom builds would quickly float to the top. Incompatibilities would be washed out immediately.
As a time-constrained geek, I’d be able to log in, look at the top-rated custom builds, and click “Buy” to get the whole thing. No clicking around. Novices would buy knowing the system has been tested out by many others. Any issues would have solutions posted as comments. Newegg would sell more and build even more value into their site.
Do any hardware sites do this? Am I missing something obvious? If not, then, pretty please, Newegg?
A quick how-to on installing VMware Server under Ubuntu.
Become root:
sudo -i
Open /etc/apt/sources.list. You'll see the following commented lines (starting at line 50):
# deb http://archive.canonical.com/ubuntu gutsy partner
# deb-src http://archive.canonical.com/ubuntu gutsy partner
Uncomment both of them by removing the leading # character. Save the file and exit. Then update the cached package lists (this picks up the packages from the source we just enabled):
apt-get update
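If you prefer not to hand-edit the file, a sed one-liner can do the uncommenting. This is a sketch that assumes the stock comment format shown above; it's demonstrated on a sample file here, so point it at /etc/apt/sources.list (as root) for the real thing:

```shell
# Create a sample with the two commented partner lines, then uncomment them.
printf '%s\n' \
  '# deb http://archive.canonical.com/ubuntu gutsy partner' \
  '# deb-src http://archive.canonical.com/ubuntu gutsy partner' > sources.sample
sed -i 's|^# \(deb\(-src\)\? \)|\1|' sources.sample
cat sources.sample
```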
Now we're ready to install VMware. If you're running 64-bit Ubuntu (as I am), some additional packages will be installed along with the VMware package: VMware is a 32-bit binary and needs 32-bit libraries. apt-get takes care of everything for us. Run:
apt-get install vmware-server
After the install, a text-mode wizard pops up. Press Tab to highlight OK, Enter to accept, Enter again for Yes, and then type a VMware serial number. You can get serial numbers from http://register.vmware.com/content/registration.html; make sure you select Linux on VMware's page. I usually ask for 100 serial numbers in one shot, save them to a text file, and use them one at a time. Either way, get your number and enter it into the wizard.
VMware Server is now installed. Since it’s a server product you are not required to run the X Window System in the host machine (which I don’t). You can start, stop, suspend, tickle and fondle your virtual machines using the command-line binary /usr/bin/vmware-cmd. Sadly you do need a graphical interface when you first create a virtual machine. Hence you must run the VMware Server Console, which does require X or Microsoft Windows. In Linux, the binary for the VMware Server Console is /usr/bin/vmware. You have three options:
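To give a flavor of that command-line management, here is a hedged sketch of typical vmware-cmd calls; the .vmx path is a made-up placeholder for your own VM's config file:

```shell
# Basic VM lifecycle via vmware-cmd, guarded so it only runs on a VMware host.
VMX="/var/lib/vmware/Virtual Machines/web01/web01.vmx"   # hypothetical path

if command -v vmware-cmd >/dev/null; then
    vmware-cmd -l                    # list registered VMs
    vmware-cmd "$VMX" getstate       # query power state
    vmware-cmd "$VMX" start          # power the VM on
    vmware-cmd "$VMX" suspend soft   # suspend, letting VMware Tools quiesce the guest
else
    echo "vmware-cmd not found (VMware Server not installed here)"
fi
```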
- Install X in the host machine itself (the one where we just installed VMware). I never install X on my servers, so this is not an option for me. But if the host machine already has X or if you don’t mind installing it, this is the easiest solution.
- Run the VMware Server Console from another Linux box that does have X installed. You do need to install VMware on the remote machine to get the Console binary. If the remote machine is Ubuntu, you already know what to do.
- Run VMware Server Console from a Microsoft Windows machine. Since MS-Windows is my desktop of choice, that’s what I do.
The upcoming VMware Server 2.0 has a web-based server console, so there will be no need to worry about the console; any machine with a browser will do (go VMware!). Also, the console is rarely used, pretty much only when you're adding a new virtual machine. Now, there is a bug in the vmware-server package that prevents a remote VMware Server Console from connecting to the host machine. If you try that now, you'll get the dreaded:
There was a problem connecting: Login (username/password) incorrect
To troubleshoot this issue, look in /var/log/auth.log. Here's what happens when I run tail -f /var/log/auth.log and then try to connect remotely using the VMware Server Console:
Mar 15 17:23:27 subhuman vmware-authd: PAM (vmware-authd) illegal module type: @include
Mar 15 17:23:27 subhuman vmware-authd: PAM pam_parse: expecting return value; [...common-auth]
Mar 15 17:23:27 subhuman vmware-authd: PAM (vmware-authd) no module name supplied
Mar 15 17:23:27 subhuman vmware-authd: PAM unable to dlopen(<*unknown module path*>)
Mar 15 17:23:27 subhuman vmware-authd: PAM [error: <*unknown module path*>: cannot open shared object file: No such file or directory]
Mar 15 17:23:27 subhuman vmware-authd: PAM adding faulty module: <*unknown module path*>
So it's a PAM issue. It turns out the VMware server package installs a bad /etc/pam.d/vmware-authd file. Now, one of the best aspects of Ubuntu is its awesome community: you can google almost any problem and find an up-to-date, correct answer. That holds here too, and the solution is given by Walter Tautz in the bug report for this issue. Edit the /etc/pam.d/vmware-authd file and replace its contents with:
auth sufficient /usr/lib/vmware-server/lib/libpam.so.0/security/pam_unix.so shadow nullok
auth required /usr/lib/vmware-server/lib/libpam.so.0/security/pam_unix_auth.so shadow nullok
account sufficient /usr/lib/vmware-server/lib/libpam.so.0/security/pam_unix.so
account required /usr/lib/vmware-server/lib/libpam.so.0/security/pam_unix_acct.so
#%PAM-1.0
@include common-auth
@include common-account
And we’re done. You have VMware running and ready to go. Create virtual machines using the Console and manage them via /usr/bin/vmware-cmd.
Most enthusiastic programmers start out as over-engineers. It's a side effect of caring deeply about our craft. With experience we learn that simplicity is king and less is more, but it takes effort. That's why I love this quote from Ferdinand Porsche, who designed the original VW Beetle and founded the Porsche company:
The perfect race car crosses the finish line in first place and then falls to pieces.
After I first heard it from a good friend and car fanatic, the quote went straight to my wall. If I start gold-plating code or requirements, there's Dr. Porsche to whip my ass back into pragmatism. But no time is more trying than when I'm building a computer. I just need a RAID 10 array, a quad-core Core 2 Extreme, and 16 gigs of CAS-2 RAM! Of course, in a disciplined frame of mind such largesse is simply gross over-engineering. And it's more fun and challenging to seek an optimal performance/money ratio than to buy your way out of thinking.
Or maybe that's just what I tell myself when I only have $1,000 to spend. Either way, multi-core CPUs have made powerful computers far more affordable. You can build a fine quad-core, 8-gig server within that budget. In my case I wanted a VMware rig to power this website. I talk to other people who are interested in setting up a VMware lab to learn different technologies, yet feel discouraged by server prices. But building your own server can cost about half as much as a comparable Dell product: when I priced a PowerEdge tower comparable to my custom build, it came out to $1,900 for the Dell versus $930 for the custom box. That's actually a good deal for a business. But for home users, building our own is often the best (or only) option. So here is my parts list, along with some tips:
[Update: The server has now been running for about 3 months with continuous uptime (a single reboot). No problems that I know of. However, this parts list is about 3 months old and could be improved upon. Take a look at the comments section, as people made good suggestions. Here are some ideas: 1) AMD CPUs might be a good way to go, freeing up some money to spend on perhaps more disks; 2) a cheaper motherboard (hopefully with integrated video) would be fine, provided it's a good brand; 3) stick with 16MB-cache hard disks. Just make sure to check the Newegg reviews: see what people are saying, the overall rating, and the percentage of one-egg reviews (i.e., people who hated it). Then read some of the one-egg reviews to find out what the problems are. If they're complaints about dead-on-arrival (DOA) products, you should be OK since every product has those; but if something more sinister is going on, stay out. If you're going to run Linux, do a quick Google search for the motherboard name plus "Linux" to see what people are saying. Don't trust editor reviews: always check out what the people say. Vox populi, vox dei.]
Motherboard: MSI P6N SLI Platinum LGA 775 NVIDIA nForce 650i SLI ATX Intel Motherboard – Retail
This is a top-quality motherboard, but make sure you upgrade the BIOS to the newest version. They ship with a buggy BIOS which crashes the machine after the BIOS post when you install four 2-gig RAM modules. This happens for many RAM brands and took 2 hours to track down, the biggest time sink in this build. After upgrading the BIOS all is well. I left the machine running memtest86 for a day and a half with no problems.
The motherboard works well in Linux, but there are some quirks with stock distro kernels due to the newish chipset. As usual Ubuntu had the best support, but HPET is not working. I disabled it in the BIOS for now to prevent "lost some interrupts" log messages. lm-sensors only retrieves CPU core temps and nothing else (in CentOS, lm-sensors didn’t work at all). I’ll look into both of these issues later, but they’re minor annoyances: overall everything works well; HPET is useless anyway.
CPU: Intel Core 2 Quad Q6600 Kentsfield 2.4GHz LGA 775 Quad-Core Processor Model BX80562Q6600
I don't normally overclock. Most workloads aren't CPU-constrained, and overclocking often adds cost and headache for little practical advantage. By not overclocking we can use the stock heatsink that comes with the processor, saving some money. My idle core temperatures are at ~30°C; they peaked at ~60°C after a day of Prime95 (one instance per core) with no errors. These are fine temperatures; no need for a third-party heatsink.
Case: Antec Performance One P180 Silver cold rolled steel ATX Mid Tower Computer Case – Retail
This is a glorious case, but I only bought it because it was on sale for $89.99. It’s pleasurable to work with a good case, but really, you only work with it for one hour while you build the computer. Again, if you’re not overclocking and hence not worried about whether a degree Celsius will corrupt your file system, you can safely buy in the $50-$80 range. Stick to cases without power supplies though, since bundled power supply units are usually crap.
Power Supply: Antec earthwatts EA380 ATX12V v2.0 380W Power Supply – Retail
People go overboard on power supply wattages. There’s no reason to. All you get for idle wattage is a higher electrical bill and a warmer planet. This is a good-quality, energy-efficient (80+ certified) power supply with plenty of juice for our server. Perfect. Adjust accordingly if you have power-hungry video cards or peripherals to be fed. Always buy a good power supply unit. Power supply problems are disastrous: 1) they can fail and bring your server down, 2) they destabilize and crash the computer, and 3) they fry all your other components. Be economical on the wattage but not on quality: stick to good brands and check the Newegg reviews.
RAM: G.SKILL 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Dual Channel Kit Desktop Memory Model F2-6400CL5D-4GBPQ – Retail
Great reviews on Newegg (5 eggs, 2% 1-egg rate), and it survived multiple passes of memtest86 for me. It's a winner.
Price: $79.99 x 2 = $159.98
Video Card: EVGA 256-P2-N297-LX GeForce 6200LE TC 512MB (256MB on board) 64-bit GDDR2 PCI Express x16 Video Card – Retail
I look for two things in video cards: fanless cooling and low power consumption. Fans move, therefore they make noise and fail. You can avoid both with a heatsink-only card, and such cards also tend to draw less power. This one runs cool, supports two monitors, comes with an S-Video cable and a DVI adapter, and costs 7 lattes.
Hard drives: Western Digital Caviar SE WD3200AAJS 320GB 7200 RPM 8MB Cache SATA 3.0Gb/s Hard Drive – OEM
Disk I/O is the most common bottleneck I find when troubleshooting server performance. It’s also the subsystem most affected by virtualization. In fact, CPU and RAM work basically at the same speed in the host and guest. But disk suffers for many reasons. So we go with 2 hard drives. The perfect car breaks apart after the race, but it finishes in first place.
This drive is a solid 5-egger (hahah) with only 2% 1-egg ratings. I trust WD as much as any other. Unfortunately, this was not the drive I bought, but it’s what I would buy now. I bought the SAMSUNG SpinPoint T Series HD501LJ 500GB 7200 RPM 16MB Cache SATA 3.0Gb/s Hard Drive – OEM for $105. It’s a 5-egg drive too but it has 8% 1-egg ratings, several of which are complaining about failure after a few months. That’s much worse than a drive being dead on arrival. Now I live in phear of my hard disks going bust on me. Yay for Linux software raid.
Price: 2 x $72.99 = $145.98
This hard drive drawer is a nice touch in the Antec P180. The white rubber grommets help dampen hard drive noise.
I never buy CD/DVD drives or floppy drives. I have a USB Lite-On DVD reader/burner and a USB Sony floppy drive. I've used them for years on countless computers, installed all the OSes, flashed the BIOSes, and never had a problem. It's a great solution: you save money on every computer, plus no worries about powering a DVD burner off your power supply.
For a home server, especially one running VMware, I recommend a simple uninterruptible power supply (UPS) to prevent file system corruption. VMware is particularly sensitive (I have a detailed entry coming up on this). Since this type of corruption can cost hours and I only have a few hobby hours a week, I'm more than willing to pay for a UPS. I really like this APC unit. If you buy one, make sure you set everything up so that the computer really shuts down when power is lost; otherwise you're just running a $100 LED display.
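For reference, here's a hedged sketch of what that shutdown wiring might look like with apcupsd, one common daemon for APC units. The values are illustrative examples, not recommendations:

```
# /etc/apcupsd/apcupsd.conf (excerpt; example values only)
UPSCABLE usb          # modern APC units connect over USB
UPSTYPE usb
DEVICE                # left empty so apcupsd autodetects the USB UPS
BATTERYLEVEL 5        # shut down when 5% battery remains...
MINUTES 3             # ...or when about 3 minutes of runtime are left
TIMEOUT 0             # 0 = use BATTERYLEVEL/MINUTES rather than a fixed timer
```

Then check `apcaccess status` to confirm the daemon sees the UPS, and pull the plug once as a drill.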
Here's what the full build looks like:
The computer runs silently. In fact it’s two feet away from me right now, case open, and I can hardly hear a thing. Assembly time for this box was about one hour. I wasted another two hours figuring out why it crashed with 8 gigs of RAM. Fixing the problem was a matter of minutes: I just flashed the motherboard BIOS and it was cured. And that was the easy part of getting the server up and running. I’ve since been doing some experiments on file system and virtual machine performance, which takes a lot more time than plugging connectors together. I should have some results over the weekend.
Update on 2008-12-23: Doug Holton has posted a new parts list for a quad core server, now down to $500 with 1 TB of storage.
To my everlasting shame, my site has been "powered" so far by a 3-year-old notebook computer with a faulty I/O subsystem. Ugh. The technical depravity alone shocks the conscience, but worse is treating readers with such disregard. Enough of that. Another mistake I made was running nascent blogging software. One month in this setup was enough for me to find out that: 1) I enjoy writing online, and 2) a solid architecture is in order. I want to:
- Buy good hardware.
- Run Windows and Linux. This is essential for me since I work on both.
- Deploy the servers in virtual machines. Life without VMware is nasty, brutish, and short.
- For blogging, use WordPress. At first I evaluated the major .NET blogging engines and picked BlogEngine.NET. It's still in its early stages, but I thought "what the hell: blog a little, patch a little". A few problems and two patches later, I realize the error of mixing these things. BlogEngine.NET is a good piece of software with potential — but for production I need a robust engine with rich functionality that can run unnoticed.
- Run a tight front-end ready to spit out gzipped content with minimal latency. For a hobbyist on a budget, this means open source. I'm thinking about Apache + mod_proxy, maybe Squid, maybe wp_cache.
- Host images outside my network. I pay $60/month for Comcast business internet rated at 1.5 Mbps up, 8 Mbps down. It hardly ever goes below 2 Mbps up, 18 Mbps down. This is plenty of bandwidth for normal traffic and enough to handle spikes if a big site links here. Still, moving images off-site cuts down drastically on the bandwidth. Normal users have a snappier experience and peak handling is far better. With gzipped HTML and no images, the average HTTP response should be about 30K. Assuming 1.5 Mbps up, the setup should handle ~6 requests/second, ~360/minute. That's wildly beyond my traffic and enough to at least cope with a spike.
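The arithmetic in that last bullet is easy to check in shell:

```shell
# 1.5 Mbps uplink, ~30 KB per gzipped response.
up_bps=1500000                       # uplink in bits per second
resp_bytes=30000                     # ~30 KB average HTTP response
rps=$(( up_bps / 8 / resp_bytes ))   # bytes/second divided by response size
echo "~${rps} requests/second, ~$(( rps * 60 ))/minute"
# prints "~6 requests/second, ~360/minute"
```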
The sane reader is probably wondering why one would go through this trouble instead of renting shared or dedicated hosting. First, I like to run a lot of quirky things like an open SQL Server shell, CruiseControl for continuous integration, and an SVN repository. I have plans for similar stuff that requires server access. That completely rules out shared hosting. As for dedicated hosting, the prices are such that I prefer to buy commodity hardware and pay for business Comcast, which is very reasonable and has been solid. Basically, I'm moving a "lab PC" that I would play with anyway from an internal LAN to the Internet, so my added cost is really the $60/month, which buys you nothing in a dedicated server. And most importantly, it's just fun.
Luckily between the free VMware Server and current hardware prices, you can buy an outstanding machine for less than $1,000 to run both Windows and Linux at fierce speeds. My plan is to run Ubuntu x64 server on the metal, 32-bit Windows Server 2003 in a VM, and 32-bit Ubuntu server in another VM. So far, I have only bought the hardware. I hope to get things running over the weekend. In the next entry I will post the detailed parts list (with links to Newegg). I'm planning two more entries: one on setting up the Ubuntu VM Server and another for the combined IIS + WP/Linux caching solution.
What do you think of this? Any obvious holes? Suggestions? I'm keen on hearing about the caching, since I'm ignorant of both mod_proxy and Squid.