There is a lot of talk about the Cloud, but what really makes the whole cloud possible is the hypervisor. The hypervisor is what separates the "workload" from the physical hardware, and that separation is what allows us to move workloads from one piece of hardware to another. The hypervisor has been around since 1965 with IBM, but only recently has adoption grown exponentially. And with the visibility of the Cloud and discussions about Virtual Desktop Infrastructure (VDI), there is now more emphasis on the hypervisor and its "two types."
There are two kinds of hypervisors that get discussed: type 1 and type 2. I am not going to argue micro-kernel vs. macro-kernel. I am going to accept the current notions of "type 1" and "type 2," mostly because I think they make some sense and I can see the distinction. Each type has its advantages and disadvantages.
A "type 1" hypervisor is installed directly on bare-metal hardware. It doesn't require an additional OS; it is the OS, even if it is a light or minimal one. There are a number of "type 1" hypervisors. Let's mention a few:
- VMware ESX and ESXi
- Citrix Xen Server
- Linux KVM
- Microsoft Hyper-V. I put Hyper-V last because, if you aren't careful about how you install Windows Server 2008, you can end up with the full 2008 OS underneath it, which defeats the purpose of having a type 1 hypervisor.
- MokaFive -- a desktop hypervisor for bare-metal installs.
- XenClient -- also a desktop hypervisor for bare-metal installs.
There are a number of distinct advantages to "type 1." I've mentioned the first already: installing on bare metal. This is both a feature and a benefit of this type.
- Installs directly on bare-metal hardware. There is no layer between the hypervisor and the HW; the hypervisor has direct access to the hardware.
- The system is thin, optimized for a minimal footprint. This lets us give most of the physical resources to the hosted guests (we call these Virtual Machines, or VMs for short).
- Fewer security attack vectors; the system is harder to compromise.
- Provides a standard container for the VM.
- Great for testing and building a lab environment with minimal HW.
- Supports multiple VMs on the same hardware. The number of VMs on a host is referred to as density; type 1 hypervisors allow higher density per host.
- Hardware failures do not affect the OS inside the container. This is not to say that a host failure won't affect a VM; it can cause a VM to power off or crash, and a host drive failure can corrupt the underlying file(s) that contain the VM and its data. There are mechanisms to protect against this.
There are also distinct disadvantages to "type 1":
- Really, really large VMs are not supported. At this point I mean massive: VMware now supports 1000GB (1TB) of RAM and 32 CPUs in a VM, so you have to be really big to not be able to run on a type 1 hypervisor.
- Workloads that need a particular HW component are a problem. Video cards come to mind, as do fax and imaging boards.
- Strict HW requirements. You can't just build any server or physical system and assume it will run a type 1 hypervisor; you have to check your hypervisor's compatibility list to make sure the HW is supported. Somebody may get unsupported hardware working -- that doesn't mean you should do it. This is also true of the desktop hypervisors: XenClient won't install on just any laptop; it is very particular.
- Cost. Almost all of these hypervisors carry a license or support cost.
- A really bare-bones console interface, which is not so great for demos from a single system.
- The container is only for Intel-architecture OSs. Solaris SPARC is out. AIX is out; people will say that LPARs are a hypervisor, but that is really hardware partitioning. HP-UX is out, same argument. I am omitting all of the high-end Unix systems that do partitioning and focusing on hypervisors. Sorry, this means Intel only.
- You can't run every OS in a hypervisor. See the point above, but even Solaris 8 for Intel doesn't work. Mac OS is a no-go -- yes, I know it can be done, and I know about the new agreement, but Apple won't support you doing this except with special versions. NT 3.5, BeOS, Windows ME, yada, yada. There are just some OSs that won't run, and some where I would simply ask: why?
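One practical consequence of the strict hardware requirements above: most type 1 hypervisors won't even install unless the CPU exposes hardware virtualization extensions. The flag names below are real, but the script itself is just a convenience sketch for a Linux system:

```shell
# Check for the CPU virtualization extensions most type 1 hypervisors
# require: Intel VT-x shows up as the "vmx" flag, AMD-V as "svm".
# /proc/cpuinfo is Linux-specific.
check_virt() {
    if grep -q -E 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
        echo "CPU reports hardware virtualization support (vmx/svm present)"
    else
        echo "No vmx/svm flag found -- check the BIOS or the vendor's HCL"
    fi
}
check_virt
```

Note that capable CPUs can still have these extensions disabled in the BIOS, which is why the second branch points you there before you give up on the hardware.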
The type 2 hypervisor is different. It is an application installed on top of an operating system, not directly on the bare metal. There are a lot of these. For instance:
- VMware Fusion & Player
- VMware Workstation
- VMware Server
- Microsoft Virtual PC
- VirtualBox
- Linux KVM (yes, it appears in both lists; KVM turns the Linux kernel itself into the hypervisor, so which type it is gets debated)
- Citrix Receiver
"Type 2" has its own set of advantages:
- Runs on a greater array of HW because the underlying host OS controls HW access.
- Easy user interface.
- Ease of access; it can be run in a window on the host OS.
- Great for Testing and Lab development.
- Allows multiple OSs inside a single OS. This is particularly useful for demos and for support organizations: I don't have to give you demo-ware, I can give you a fully functioning, self-contained environment with all of the required architecture.
- Creates a different management paradigm for VMs or desktops that can be easier. This is especially useful for enterprises, as it allows consolidation of the desktop into a single image supporting one set of HW -- the VM container.
- BYOH -- Bring Your Own Hardware. The company doesn't have to provide HW to every user; a user can run corporate software on their own hardware.
- Data can be secured on the desktop.
And "type 2" has its disadvantages:
- Decreased security. Some systems allow a user to interact with the VM container, and in some cases copy the container for use elsewhere. This must be well understood and architected to maintain a high security posture.
- Larger overall footprint; the host OS consumes resources before the first VM even starts.
- The type 2 hypervisor must be installed in the host OS. This is usually straightforward -- Citrix even uses a Java version to support more OSs and devices -- but it still must be installed on each host OS.
- Loss of Centralized Management.
- Lower VM density. A type 2 hypervisor cannot support as many VMs as a type 1 hypervisor on the same hardware.
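The density point can be made concrete with some back-of-the-envelope RAM arithmetic. Every number below is an illustrative assumption, and RAM is only one of the constraints (CPU, storage I/O, and licensing matter too), but it shows why a full host OS under a type 2 hypervisor eats into the VM count:

```shell
host_ram_gb=64            # physical RAM in the host (assumed)
type1_overhead_gb=2       # thin type 1 hypervisor footprint (assumed)
type2_overhead_gb=8       # full host OS plus hypervisor app (assumed)
vm_ram_gb=4               # RAM allocated per guest VM (assumed)

# Integer division: how many VMs fit in what's left after overhead.
echo "type 1: $(( (host_ram_gb - type1_overhead_gb) / vm_ram_gb )) VMs"
echo "type 2: $(( (host_ram_gb - type2_overhead_gb) / vm_ram_gb )) VMs"
```

With these assumed numbers the gap is small on a big host; it gets much worse on smaller machines, where an 8GB host OS can be a quarter of the box.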