ESXi on Desktop Hardware

So you want an ESXi (vSphere Hypervisor) server, but you don't want to spend several grand on a blade chassis or enterprise-grade server. Perhaps you want an inexpensive server for home use, something quieter than the jet engines that cool the big stuff. So what do you do?

Like a lot of people, you’ll hit up a message board or two. And invariably, someone will link you to the vm-help.com site’s whitebox HCL. It’s a good list if you have a pile of hardware and you want to see what works, but if you’re looking for which system to buy or what components to obtain to build your own system, it’s mostly useless.

One route I'm particularly fond of is desktop hardware. It's the least expensive way to get a virtualization host, and desktop boxes are a heck of a lot quieter than most servers, which makes them appropriate for places that aren't a data center.

So I’ve written this guide on how to build/spec/buy your own ESXi system.

First off, ESXi is available for free. With ESXi 4.x, you were limited to 6 cores and 256 GB of RAM. With ESXi 5, there's no core limit, although you're limited to 32 GB of RAM (which shouldn't be a problem). The free license lets you run as many VMs as you can stuff in there, but you don't get the fancy features like vCenter integration, vMotion, and the other fun stuff. For a home lab or basic virtualization environment, that's no big deal.

The issue with ESXi is that it’s somewhat picky about the hardware it will run on (although it’s improved with ESXi 4 and 5). Most server-grade hardware will run fine, but if you’re looking to use something a little more pedestrian, some of the out-of-the-box components may not work, or may not have the features you need.

Whether you're shopping for a motherboard or a pre-built system, I've yet to find a fairly recent mid- to high-end system that doesn't load ESXi. Usually the only thing I've had to add is a server-grade NIC (recommendations in the NIC section).

RAM

For virtualization, RAM is paramount. Get as much RAM as you can. When researching a system, it's a good idea to make sure it has at least 4 DIMM slots; some of the i7 boards have 6, which is even better. Most 4-slot motherboards these days can take up to 16 GB of RAM, and 6-slot boards can do up to 24 GB. Given RAM prices these days, it doesn't make sense not to fill them to the brim. I can get 16 GB of good desktop memory for less than $100 USD now.
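
Once you've got ESXi installed, it's worth a quick sanity check that the host actually sees everything you stuffed in. Here's a minimal sketch that runs from the ESXi 5 shell (assuming you've enabled SSH/Tech Support Mode; esxcli and a Python interpreter have both been available on the hosts I've looked at, but treat that as an assumption for your build):

```python
# Minimal sanity check from the ESXi 5 shell: does the host report all the
# RAM you installed? Assumes SSH/Tech Support Mode is enabled and that
# esxcli and a Python interpreter are available on the host.
import subprocess

proc = subprocess.Popen(["esxcli", "hardware", "memory", "get"],
                        stdout=subprocess.PIPE)
output = proc.communicate()[0]
print(output)  # compare the "Physical Memory" line against what you bought
```

You could of course just run the esxcli command by hand; the script only exists to show which command to look at.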


Also, make sure to get good RAM. You don't necessarily need fast RAM, and it's definitely not a good idea to overclock RAM in a virtualized environment. Remember, this is a virtualization host, not a gaming rig. I tried using off-brand RAM once, and all I got was a face full of purple screens of death.

Processors

For a long time, it seemed that processors got all the glory. With virtualization and most virtualization workloads, processors are secondary: cores are good, clock speed is OK, but RAM is usually the most important aspect. That said, there are a couple of things about processors you want to keep in mind. On the Intel side, newer processors such as i5s and i7s make good virtualization processors.

Cores

The more cores the better. These days, you can easily afford a quad-core system. I wouldn’t worry too much about the core speed itself, especially on a virtualization system. I would recommend putting more emphasis on the number of cores.

Then there's hyper-threading. A processor with hyper-threading support will make 4 cores look like 8 to the operating system: each core has two separate execution contexts. VMware makes pretty good use of hyper-threading by being able to put a vCPU (what the virtual machine sees as a core) on each context, so get it if you can.
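
If you want to see the core-versus-thread split for yourself, booting a Linux live CD on the candidate box and reading /proc/cpuinfo makes it obvious. A rough sketch (assumes a single-socket machine, which is what most desktops are):

```python
# Count physical cores vs. logical processors on a Linux box (e.g. a live CD
# booted on the candidate hardware). With hyper-threading on, "siblings" is
# double "cpu cores". Rough sketch; assumes a single-socket machine.
def cpuinfo_field(name):
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith(name):
                return line.split(":")[1].strip()

cores = int(cpuinfo_field("cpu cores"))
threads = int(cpuinfo_field("siblings"))
print("Physical cores: %d, logical processors: %d" % (cores, threads))
if threads > cores:
    print("Hyper-threading is on: each core shows up as two logical processors")
else:
    print("Hyper-threading is off or not supported")
```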


But again, for most VM workloads, don't go for a monster processor if it means you can only afford 4 GB of RAM. RAM first, then processor.

Here's where it gets a bit tricky. There are a number of newer features that a given processor may or may not have, and they affect what you can do with your ESXi host.

Virtualization Support

Virtually (get it?) every processor has some sort of virtualization support these days. For Intel, this is typically called VT-x. For AMD processors, this is called AMD-V. You'll want to double-check, but any processor made in the past five years likely supports these technologies. While there are ways to do virtualization (paravirtualization) on processors without these features, it's not nearly as full-featured, and most operating systems won't run.
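
Short of digging through the spec pages, one quick way to check is to boot a Linux live CD on the machine and look for the vmx (Intel VT-x) or svm (AMD-V) CPU flag. Keep in mind that even when the flag is there, you may still have to switch the feature on in the BIOS. A sketch:

```python
# Check whether the CPU advertises hardware virtualization support on Linux:
# "vmx" means Intel VT-x, "svm" means AMD-V. Sketch only; run it from a
# live CD before dedicating the box to ESXi.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":")[1].split())

if "vmx" in flags:
    print("Intel VT-x supported")
elif "svm" in flags:
    print("AMD-V supported")
else:
    print("No hardware virtualization flags found")
```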

Direct Connection

Some hypervisors allow a virtual machine to control a peripheral directly. VMware calls this DirectPath I/O, and other vendors have other names for it. If you had a SATA drive in your virtualization host, you could present it directly to a particular VM. You can also do it for USB devices, network interfaces, crypto accelerators, etc.

Keep in mind that if you do this, no other VM will be able to access that particular device. Doing DirectPath I/O (or similar) also usually prevents vMotion from occurring.

It's a somewhat popular feature if you're building yourself a NAS server inside a virtual system, but otherwise it doesn't see much use in the enterprise.

Intel calls this technology VT-d, and AMD calls it AMD-Vi (often just referred to as the IOMMU). Not all processors have these features, so be sure to check. You may also need to enable it in your system's BIOS (sometimes it's not on by default). For instance, my new Dell XPS 8300 has an i5-2300 processor, which does not support VT-d, although the i5-2400 does.

For most setups it’s not a big deal, but if you’ve got a choice, get the processor with the VT-d/IOMMU support.
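
Checking for VT-d/IOMMU support is a bit less direct, since it's partly a chipset and BIOS feature rather than a plain CPU flag. One trick from a Linux live CD is to look for the ACPI table the firmware publishes when the feature is enabled: DMAR on Intel, IVRS on AMD. A rough sketch (with the usual caveat that a missing table can also just mean it's turned off in the BIOS):

```python
# Rough check for an enabled IOMMU from a Linux live CD: Intel VT-d publishes
# an ACPI "DMAR" table, AMD's IOMMU publishes "IVRS". Usually needs root,
# since /sys/firmware/acpi/tables isn't world-readable on most distros.
import os

tables = "/sys/firmware/acpi/tables"
if os.path.exists(os.path.join(tables, "DMAR")):
    print("Intel VT-d appears to be enabled")
elif os.path.exists(os.path.join(tables, "IVRS")):
    print("AMD IOMMU appears to be enabled")
else:
    print("No DMAR/IVRS table found: unsupported, or disabled in the BIOS")
```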

AES-NI

AES-NI first appeared in laptop and server processors, but is now making its way into all of Intel's chips. Essentially, it's an extra set of processor instructions specifically geared towards AES encryption. If you have software written to take advantage of these instructions, you can significantly speed up encryption operations. Mac OS X Lion, for instance, uses AES-NI for FileVault 2, as does BitLocker on Windows 7. Best of all, it's passed down to the guest VMs so they can take advantage of it as well.
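
If you want to confirm a guest is actually picking it up, the aes flag shows up in /proc/cpuinfo inside a Linux VM, and OpenSSL's built-in benchmark makes the speed difference visible. A sketch, assuming a Linux guest with the openssl command installed:

```python
# Inside a Linux guest VM: check whether AES-NI is exposed to the guest, then
# compare OpenSSL's accelerated (EVP) AES throughput against the plain
# software path. Sketch only; assumes the "openssl" command is installed.
import subprocess

with open("/proc/cpuinfo") as f:
    has_aesni = any("aes" in line.split()
                    for line in f if line.startswith("flags"))
print("AES-NI visible to this guest: %s" % has_aesni)

# "-evp" uses the accelerated code path when AES-NI is present; the second
# run uses the plain software implementation. Compare the throughput numbers.
subprocess.call(["openssl", "speed", "-evp", "aes-128-cbc"])
subprocess.call(["openssl", "speed", "aes-128-cbc"])
```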

Networking

VMware has been notoriously picky about its networking hardware. The built-in Ethernet ports on most desktop-class motherboards won't cut it, so you'll likely need to buy a server-grade network adapter, and with ESXi 4.0 and on, it will have to be Gigabit or better (there's no more Fast Ethernet support). ESXi won't even install unless it detects an appropriate NIC.

This has changed somewhat with ESXi 5.0. The ESXi 5.0 installer comes pre-loaded with more Ethernet drivers than its predecessors and accepts more mundane NICs. On my Dell XPS 8300, the built-in Broadcom-based BCM57788 NIC was recognized by the default ESXi 5.0 installer; it was not with 4.1.

If your NIC isn't recognized, my favorite go-to NIC for VMware systems is the Intel Pro/1000. I can usually pick up a single- or dual-port card off eBay for less than $50. Make sure to get the right kind (most modern systems use PCIe), and make sure it's the Pro server adapters, not the desktop NICs. I have a couple of Intel Pro/1000 PTs for my ESXi boxen that I got for $40 apiece, and they work great.

We'll have to see how many more drivers ESXi 5.0 supports, but chances are you'll need to pick up an Intel Pro/1000 NIC if you're using 4.x with desktop hardware.
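
To find out whether ESXi recognized your NIC at all, you can list what it detected from the host shell. A sketch for ESXi 5 (on 4.x the older esxcfg-nics -l command does the same job); as before, this assumes SSH is enabled and a Python interpreter is available on the host:

```python
# List the NICs ESXi actually detected, from the ESXi 5 shell. If your
# onboard port isn't in this list, ESXi has no driver for it and you'll
# need an add-in card (e.g. an Intel Pro/1000). On 4.x use "esxcfg-nics -l".
import subprocess

proc = subprocess.Popen(["esxcli", "network", "nic", "list"],
                        stdout=subprocess.PIPE)
print(proc.communicate()[0])
```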

Storage

Standard SATA drives work fine in ESXi 4 and 5. For a desktop system, it usually doesn't make much sense to put in a SAS drive, but it's your money.
