There is a lot of virtual machine software out there, and some of it is even included in the mainline Linux kernel.
What differentiates these options, and what criteria should one weigh when choosing one?
The more professional tools, e.g. VirtualBox (aka xVM) and VMware, support multiple snapshots and "thin" cloning.
VMware additionally offers limited 3D acceleration, integration with Eclipse debugging (letting you debug applications inside the VM from outside it), virtual networking features, multi-monitor support, and record/replay functionality (useful for debugging concurrency problems: it records which core each instruction executes on and in what sequence).
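For example, with a reasonably recent VirtualBox you can drive snapshots from the command line via VBoxManage (the VM name "myvm" and the snapshot name below are just placeholders):

    # take a named snapshot of the VM "myvm"
    VBoxManage snapshot "myvm" take "before-upgrade"
    # later, roll the VM back to that snapshot
    VBoxManage snapshot "myvm" restore "before-upgrade"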
VirtualBox and the Kernel Virtual Machine (KVM) are much easier to get working because you don't have to compile a kernel driver. They're also both free, while VMware costs around $190. KVM is completely open source, while only some parts of VirtualBox are.
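A quick way to see whether KVM will work on a given machine, assuming a typical Linux install:

    # KVM needs hardware virtualization (Intel VT-x or AMD-V);
    # a non-zero count means the CPU advertises it
    egrep -c '(vmx|svm)' /proc/cpuinfo
    # the KVM modules ship with mainline kernels; check whether they're loaded
    lsmod | grep kvm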
In my experience, performance is about the same across the various virtualization tools, with a slight edge to VirtualBox and VMware over KVM. Your mileage may vary depending on your hardware and system configuration.
Finally, if you're thinking of virtualizing a bunch of servers, consider a hypervisor-based OS as your base rather than Linux, such as VMware's ESXi, Microsoft's Hyper-V, or Xen. These generally offer features like live migration, memory sharing, and fault tolerance, and have less overhead. However, they generally can't be used from the computer they're running on; instead they rely on remote management tools such as vSphere or System Center. All three of these hypervisors are free, though VMware charges some $3k to unlock all the nice features. Microsoft's and Xen's solutions are fully featured in the free product. (Microsoft's solution doesn't work very well unless you have Active Directory already deployed, though.)
Here are a few obvious criteria I consider, from a lay standpoint. Disclaimer: I don't really know anything about virtual machines; I just use them. I have some experience using Xen, VirtualBox, Linux Containers, VMware, and Linux-Vserver. VMware is proprietary; the others are free software. All of these require special kernel support, and Linux Containers is the only one supported by the mainline kernel.
Per Joe Brockmeier's helpful article, there are roughly two different kinds of virtualization. In the first, a full-fledged guest OS runs on top of your host OS. This is called hypervisor virtualization. If the guest is also Unix, the guest is running a different Unix kernel from the host. This is the case even if both are Linux, or even the same distribution.
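You can observe this directly: run uname -r on the host and inside the guest, and they will usually report different kernel versions (the version strings below are made-up examples):

    host$  uname -r
    2.6.32-5-amd64
    guest$ uname -r
    2.6.26-2-686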
The second is where the guest virtual machine is using the same kernel as the host OS. This is called container-based virtualization. In what follows, I'll call these hypervisors and containers for short.
Of the ones I mentioned above, Xen, VirtualBox, and VMware are hypervisors; Linux Containers and Linux-Vserver are containers.
Obviously, there are trade-offs here. The main difference is that with hypervisors you can run any OS that supports the hardware, i.e. you can run Windows on top of Linux. With containers, the host kernel, which is really the only "OS" in the picture, does the isolation and resource management. In this case the virtual machine is a sort of artificial construct, and the abstraction can be leaky, in the sense that the virtualization can be less than perfect, which can cause annoying problems. E.g., you may find that running a service on port x in your virtual machine interferes with something running on port x in the host.
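As a hypothetical illustration: if the container shares the host's network stack, as some older container setups do, two listeners on the same port collide (nc option syntax varies between implementations):

    # on the host
    nc -l 8080 &
    # inside a container that shares the host's network namespace,
    # binding the same port fails:
    nc -l 8080
    # nc: Address already in use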
So, here are some specific issues to consider, referring back to these two types where applicable.
First (assuming you are running on Linux, as the question says): is your desired guest a Linux distribution or some other OS? If the former, containers are an option; otherwise they are not. Even when containers are an option, they may or may not be realizable in practice, depending on what support/recipes the virtualization software maintainers offer for creating guest OSs for different distributions, unless you want to try to roll your own. In theory it is generally possible, since the Linux kernel is only very weakly coupled to the surrounding userspace, so pretty much any "recent" kernel will work with any distribution.
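With Linux Containers, for instance, guest creation is template-driven, so what you can build depends on which templates your LXC version ships; the template and container names here are just examples:

    # create a Debian guest from the distribution template, then start it
    lxc-create -n myguest -t debian
    lxc-start -n myguest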
Does the software in question require the kernel to be patched, or is it natively supported by the kernel? Requiring patches is a major downside, as it means the patches have to be kept in sync with the changing kernel; if the functionality is already present in the kernel, this is not an issue. Of course, some virtual machines don't require kernel support at all and run in userspace. Relatively few virtualization tools both require special kernel support and have that support in the mainline kernel; the only two I know of are Linux Containers (a container) and KVM (a hypervisor).
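For the container case, LXC ships a helper that reports whether your running kernel has the necessary pieces, and you can also grep the kernel config directly (the config file path varies by distribution):

    # LXC's own check for namespaces, cgroups, etc.
    lxc-checkconfig
    # or inspect the config your distribution shipped with the kernel
    grep -E 'CONFIG_(NAMESPACES|CGROUPS|VETH)' /boot/config-$(uname -r)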
The size of the user community. This is mostly an issue when the software requires a patched kernel: the manpower to keep such a project going outside the kernel, and to make sure everything keeps working correctly, may not exist.
Is the virtual machine software proprietary or free software? If it is proprietary, it will never be included in the Linux kernel. Of course, depending on your philosophy, you may find one preferable over the other for ideological reasons.
Virtualization performance. How fast is it? Generally containers are faster than hypervisors, though the difference may not be significant.
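If performance matters for your workload, a crude but honest comparison is to time the same job on the bare host and inside each guest; the jobs below are arbitrary stand-ins:

    # repeatable CPU/filesystem workload; compare wall-clock times
    time tar xjf linux-2.6.32.tar.bz2
    # rough disk-write throughput
    time dd if=/dev/zero of=testfile bs=1M count=512 conv=fsync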
Does it allow memory to be shared with other processes on the system, or does it require dedicated memory? In my opinion, allowing memory to be shared is nice because it means you don't have to chop that memory off from the rest of the system; if you can share memory, you can run more virtual machines than if you can't. By definition, hypervisors do not allow memory to be shared, because each guest is a separate OS. In general, containers allow shared memory, again because the guest is the same OS, though you can usually also set memory limits. It may also be possible to allocate a dedicated block of memory, though I've never tried to do that.
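With Linux Containers, for example, a memory cap is just a cgroup setting, either in the container's config file or applied at runtime (the container name and the 512M figure are arbitrary):

    # in the container's LXC config file:
    lxc.cgroup.memory.limit_in_bytes = 512M
    # or at runtime:
    lxc-cgroup -n myguest memory.limit_in_bytes 512M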
The level of isolation from the rest of the system. Virtual machines differ in this. Ideally you don't want the virtual machine knowing about or interacting with the host system or other guest systems in any way. By definition, hypervisors are completely isolated. The level of isolation for containers varies quite considerably depending on the implementation.
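A simple smoke test is to look around from inside the guest: in a well-isolated container you should see only the guest's own processes and network interfaces, not the host's:

    # run inside the guest; any visible host processes or interfaces
    # indicate weak isolation
    ps aux
    ip addr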