Hyper-V, not in my datacenter (part 1 – Hardware)

When posting on my blog I try to stay as objective as I can. Although I’m a big VMware fan, I try to look at other products with an open mind and not be biased. Still, I found myself in doubt when creating a presentation in which I was comparing Hyper-V to VMware ESX. I wasn’t questioning whether I should or shouldn’t be objective; the problem was that I had trouble believing that the presentation I had created was an objective view of things.

Browsing through my presentation multiple times, I was convinced that what I had written about Hyper-V and ESX was an objective view of how things are at the moment, but it still looked like the only thing I was doing was Windows-bashing. I decided to dedicate a blog post to it, so everyone can decide for themselves whether my points are valid. The big question of my presentation is: “Which is better for my datacenter, Hyper-V or ESX?”.
I’m looking at both hypervisors to see which features would make them suited for running in the datacenter, discounting nice features that I would rarely use. Here we go…

Deploying the hypervisor

The first question of course is: “On which hardware can my hypervisor run?”. Now, Microsoft will definitely tell you that they are hardware independent: if Windows runs on it, you can run Hyper-V on it, and we all know that Windows runs on almost anything. I don’t think this statement holds for the datacenter, for a number of reasons.
1. I’m surprised that Microsoft plays this card so often. They make it look like a unique, strong selling point that they have no HCL while VMware has only a very limited one. Well, the datacenters I’ve visited are running HP, IBM, Fujitsu-Siemens and Dell. Maybe some other brands too, but it is clear these are the big players, and when looking at the VMware HCL, you can see that they are all on it.
VMware’s HCL is not small at all; have a look for yourself: VMware HCL for ESX 3.5. You’ll find 35 brands of systems that are supported and I guess at least 400 specific systems in total. You’ll have a hard time finding a top-quality server that is not on the HCL.
2. When connecting your hypervisor to the network, you will probably want the connections to be redundant and do a little load balancing. With Hyper-V you run into a small problem when you want to pick just any network card you have lying around, because it will probably not support VLAN trunks and teaming in the driver. In fact, you will have trouble finding many NICs that do support these options, so you will again end up with the high-end NICs you were already using to team the NICs of your physical servers.

In fact, I think you will come up with a smaller number of supported NICs for Hyper-V, because ESX does the VLAN trunking and teaming independently of any drivers. In ESX you can easily create a virtual switch that combines an HP, Intel, Broadcom or whatever NIC and still do VLAN trunking and teaming. Have a look at the VMware I/O HCL and see which NICs are supported, then try to find as many NICs that can do the same for Hyper-V.
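To illustrate how little the teaming and trunking setup in ESX depends on the NIC driver, here is a sketch using the `esxcfg-vswitch` commands available on the ESX 3.5 service console. The switch, port group and uplink names (vSwitch1, vmnic1/vmnic2, “VM Network 100”) are placeholders for this example:

```shell
# Create a virtual switch
esxcfg-vswitch -a vSwitch1

# Link two physical NICs as uplinks; they can be from different
# vendors, because teaming is handled by the vmkernel, not the driver
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# Add a port group and tag it with VLAN 100 (virtual switch tagging),
# so the physical switch ports can simply carry an 802.1Q trunk
esxcfg-vswitch -A "VM Network 100" vSwitch1
esxcfg-vswitch -v 100 -p "VM Network 100" vSwitch1
```

These commands only make sense on an ESX host itself; the point is that nothing in them is vendor-specific, which is exactly what you cannot do with driver-based teaming.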

3. Another point about the difference in hardware support I would like to make is support and stability. Looking at a “normal” server these days, you will not be surprised that a hypervisor has to run 30 VMs or more at the same time. Now, do you really want to put just any hardware underneath those VMs? We all know about the issues drivers have caused in Windows, even when those drivers were in the MS certified program. And do you think that if a driver throws errors at you, Microsoft is going to solve the issue for you? Come on, get real. With VMware, if there are problems with a driver, I call VMware and they have to solve it for me. Of course, there will sometimes be problems with drivers in ESX, because it is software and software will always have bugs. But I do know that these drivers have been tested thoroughly and have been written with one thing in mind: virtualization and performance.

My conclusion on the hypervisor hardware is that the VMware HCL is not limiting me in choosing my hardware at all. In fact, choosing a system that is on the VMware HCL makes me more confident that I have reliable hardware that will perform without issues, and I can’t get this with Hyper-V.

Thank you Alan Renouf for checking this post before publishing.

Series:
Hyper-V, not in my datacenter (part 1: Hardware)
Hyper-V, not in my datacenter (part 2: Guest OS and Memory overcommit)
Hyper-V, not in my datacenter (part 3: Motions and storage)

21 thoughts on “Hyper-V, not in my datacenter (part 1 – Hardware)”

  1. Great post! I think having a limited HCL is a good thing because of the great support. Can’t tell you how many companies want to get rid of whitebox PCs and servers because the system engineers have a full-time job installing drivers or troubleshooting problems that occur on only one PC or server. They all want to buy one similar branded PC or server for easier deployment and support. The same goes for the “small” HCL of VMware: the smaller the list, the fewer problems you have!

  2. You’d be surprised the number of times I’ve heard the argument “ESX/ESXi stinks because it doesn’t run on the Gateway PIII 550MHz desktop PC we use as a server”. If you want to run your business on that, that’s your business. As far as I’m concerned, you and Hyper-V were made for each other. Good luck.

    However, I will submit to the fact that there is a lot of slightly older server class hardware that is certified for ESX, but is missing from the ESXi HCL. Take a look at my post here: http://www.boche.net/blog/?p=451

    No big deal. ESXi is getting there.

  3. With the current feature set MS is clearly going after the SMB market with a focus on S. So a huge HCL will help them.

    Great post by the way,

  4. @Jason: The people who complain about not being able to run on old hardware won’t be able to run Hyper-V on it either, because of the 64-bit and VT-extension requirements.

  5. Agreed, excellent post; not an area many people are focused on right now, so it’s great to call it out. And thank you for addressing the issue with NICs, probably the most “taken for granted” component in a physical host. And with VMware’s DVS capabilities coming out, which will support VLAN trunking across hosts, this could be a huge differentiator for VMware (a good one).

    I completely agree with your comments about VT-ext; MS definitely has a minimum system requirement for Hyper-V, they’re just not going through the effort to certify specific hardware to create an HCL. My guess is that this is more of a business decision rather than a technical one; I’m guessing they don’t want to alienate any hardware platform vendors by only certifying with the big platform names.

    Looking forward to Part 2. :)

    -Alan

  6. This is a very informative, well put comparison of Hyper-V and VMware.
    Hyper-V is a 1.0 product and it lacks a lot of the features that come with maturity, but surely Microsoft has enough resources to have come out of the gates with a product as good as, or better than, the incumbent leader (VMware). It seems they really dropped the ball with Hyper-V.

  7. Hyper-V is now in my datacentre. It is very stable and very simple to maintain for me. And it is free for me. Good luck.

  9. You mentioned a limited number of NICs supported with Hyper-V? I'd like to see where you got that from. Actually, more NICs are supported on Hyper-V than on VMware. NOW before you get upset over this, continue reading: this is because Hyper-V does NOT SUPPORT NIC TEAMING! Hyper-V requires the hardware vendor of the NIC to provide the software to do the teaming, failover or load balancing. VMware doesn't require this but does it through VMware itself. Hence the reason they have a smaller list of compatible NICs, not the other way around as you state it. I think this is a HUGE HUGE HUGE shortcoming of Hyper-V that VMware should really hammer on, but they don't. I've called Microsoft to task on this on their blogs on numerous occasions. I've spoken to members of Microsoft's Hyper-V team and told them they need to fix this, but they don't care…
    Another point: Microsoft does have a hardware compatibility list (HCL) and you can check it over to see what is compatible and what is not. But this is really not a point to choose Microsoft or VMware over the other. Both run on most hardware. (If they didn't, they wouldn't be very good now, would they?)
    That's my two cents' worth; otherwise your article is very interesting.

    Thanks for taking the time to read ALL of my comments ;-)

  11. Hi
    Thanks for posting your comment. Let me first say that this Hyper-V series was written more than a year ago and is talking about Hyper-V version 1.

    When talking about a limited list of NICs for Hyper-V, I'm talking about usage in the datacenter. In a datacenter I want VLAN support and teaming in some form. This teaming doesn't have to be EtherChannel, but can be any form of teaming, as long as failure of a NIC is picked up immediately.

    Now, take your list of NICs that you think Hyper-V supports (and it does) and strike all the NICs for which the vendor has not yet written a driver that supports both VLANs AND teaming. When I discussed this with Microsoft at the time of writing, there was a list of 12 (twelve) NICs that could offer this support.

    That is how I came to my conclusion that although the list of 'supported' NICs seems to be much longer for Hyper-V, it turns out to be much smaller when looking at real-life usage in the datacenter.

    Regards
    Gabrie

Comments are closed.