Matt McSpirit is a Partner Technology Advisor who works with Microsoft Partners in the UK, enabling them to build practices around Microsoft Virtualization and Management technologies. Matt’s role stretches across Server, Desktop and Application Virtualization, along with the associated System Center management technologies.
Gabe: Matt, why has Microsoft reached out to some bloggers in the community lately?
Matt: “I personally believe that Microsoft is reaching out to the community to have interesting and engaging conversations with them, to be able to debate about the different aspects of Microsoft virtualization, learn from different views and opinions, and show just what Microsoft Virtualization can do today. It is not about proving that one virtualization platform is better than the other, but more about educating and helping people understand the choices that are out there.”
Gabe: While listening to your chinwag with Mike Laverick, I noticed that you aren’t at all negative about VMware, which contrasts sharply with what the Microsoft marketing department sometimes puts out.
(Chinwag with Mike Laverick: http://www.rtfm-ed.co.uk/2010/04/16/chinwag-with-mike…-matt-mcspirit-episode-11/ )
Matt: “When working with partners I would lose my credibility if I simply came in and said “Microsoft is the best”. A large number of our Partners have built businesses on VMware technologies, so that just wouldn’t wash with them, and it would show a lack of respect. Instead of doing hard feature comparisons, I help Partners look at the bigger picture, focusing on capabilities their customers would really benefit from. For example, in an environment that contains SQL, Exchange, SharePoint and Windows Servers, there’s a great opportunity to introduce System Center to manage those workloads along with the virtualization platform. If, however, a customer needs distributed networking capabilities, such as those provided by Cisco, then features like the Distributed vSwitch in VMware vSphere Enterprise Plus will be very important to them.”
Gabe: Talking about System Center, it is a key component in a Hyper-V environment.
Matt: “Yes, it certainly is. In fact, I would say it is the key component. It’s also a very useful addition in a VMware vCenter environment. System Center can work as a management wrapper and pull far more data out of your virtualized SQL Servers or Exchange Servers than vCenter can. System Center can be very complementary to vCenter in that way. Partners like Veeam and Bridgeways also understand the value of System Center as a complementary system to vCenter, producing Management Packs that integrate VMware technologies into System Center, making it easier and more centralized to manage a virtual environment running on VMware.”
Gabe: Before configuring a Hyper-V environment, an assessment should be done. How will the Microsoft Assessment and Planning Tool help with this? To what level can it tell me the performance of the physical servers, and can I use it to size the new hardware I need to buy for Hyper-V?
Matt: “The MAP Tool (Microsoft Assessment and Planning tool) is part of a wide range of solution accelerators that help you get going with virtualization. The current version is 4.0, but a beta of 5.0 with new features and capabilities is available from the website. What used to be the Windows Vista Hardware Assessment Tool has now evolved into a tool for virtualization assessment, and much more.
For example, it can inventory the virtual machines already running on VMware ESX 3.5 or Microsoft Virtual Server and provide guidance for moving them to Hyper-V if required. It can help you with application virtualization, through assessment of client machines, servers, and the technology environment for an implementation of Microsoft Application Virtualization. The MAP tool can also assess the Windows 7-ready hardware within your organization and present you with several reports showing which systems are, and which are not, capable of running Windows 7.
The MAP tool is still expanding and growing, and vendors like IBM and HP are customizing and integrating with it. IBM has added specific IBM System x information into the MAP tool, and HP is taking it a step further with the HP Hyper-V R2 Sizer, where you can import the XML output from the MAP tool, map it onto actual HP hardware, and end up with a bill of materials you can use to order your new HP hardware.”
Gabe: Comparing Hyper-V to VMware ESX: is Microsoft still behind VMware, or do you consider Hyper-V to be equal in certain, or maybe all, fields, technically speaking?
Matt: “That is a very interesting question. Look at something like the Project Virtual Reality Check (VRC) study, where the three major hypervisors (XenServer, Hyper-V R2, VMware vSphere 4) were compared against each other on a Terminal Services workload: we see a major improvement in performance for Hyper-V R2 compared to Hyper-V R1. We see a lot of improvement in new functions like Live Migration, but there is also a lot of work under the covers that not everyone will notice, like improved networking performance through Jumbo Frames, TCP Offload and VM Queues (offloading to the NICs), SLAT (Second Level Address Translation) CPU support and support for large memory pages.”
After the call: SLAT is more often referred to as Nested Page Tables (AMD) or Extended Page Tables (Intel).
Matt: “On disk I/O we also see benchmarks with Hyper-V R2 and the built-in iSCSI initiator over 10GbE, running on Intel Xeon 5500 processors, hitting 700,000 IOPS, compared to a million IOPS for a natively installed Windows Server 2008 on the same hardware, both at a 512-byte block size. When you increase the block size to more typical sizes of 4K or 8K, we’re seeing native performance inside a Hyper-V guest, so the hypervisor isn’t a bottleneck here. I’m not saying that vSphere or XenServer can’t do this as well; it’s just good to see that Hyper-V is very performant and scalable. The key thing to remember is that eBay’s total database operations amount to under 200,000 IOPS, so we’re doing OK I think!
On CPU performance, Project VRC clearly shows how Hyper-V is able to benefit from features the Intel Xeon 5500 offers, like Hyper-Threading. In the field of memory performance we also see that we’re on an equal level with others. Of course, there is still the difference that VMware can do memory overcommit and Hyper-V can’t, but from a performance perspective, when both a Hyper-V and a VMware VM use all the RAM assigned to them, Hyper-V offers the same performance as VMware, both using techniques like SLAT. When choosing a hypervisor for the enterprise, it is obvious that VMware is very much like Google, a brand in itself. When people think virtualization, they often think VMware. That is a difficult position for Microsoft to be in, but what Microsoft is offering now with Hyper-V R2 and System Center is a different set of values than just the hypervisor. You can also see, through VMware acquiring other companies that are not directly focused on virtualization, that they too are looking to be more than just a virtualization player, focusing more on the guest operating system as well.”
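To put the I/O figures above in perspective, raw throughput is simply IOPS multiplied by block size. A quick back-of-the-envelope calculation (the IOPS figures are Matt’s; the arithmetic below is purely illustrative):

```python
# Back-of-the-envelope throughput from the IOPS figures quoted above.
# Throughput = IOPS x block size.

def throughput_mb_s(iops: int, block_size_bytes: int) -> float:
    """Throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_size_bytes / 1_000_000

print(throughput_mb_s(700_000, 512))    # Hyper-V R2 guest, 512-byte blocks: ~358 MB/s
print(throughput_mb_s(1_000_000, 512))  # native Windows Server 2008:        ~512 MB/s
print(throughput_mb_s(700_000, 4096))   # the same IOPS at 4K blocks:       ~2867 MB/s,
                                        # well beyond a single 10GbE link (~1250 MB/s),
                                        # so at realistic block sizes the wire, not the
                                        # hypervisor, tends to become the limit
```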
Gabe: With VMware buying SpringSource, Ionix and Zimbra, and Microsoft developing Azure, it looks as if we are moving further away from the operating system and towards applications that can run without one.
Matt: “Windows Azure has a different kernel than the Windows Server kernel, and integrates tightly with an Azure-specific hypervisor. The whole point of Azure is to abstract the OS away from the user or the developer. If I’m a SQL guy and I want to run my relational database in the cloud, I don’t want to worry about patching the operating system or about network load balancing. I just want to put it in the cloud, be sure it will perform, and pay only for the resources I use, which is fundamentally different from a hardware platform where I pay for the hardware whether I use it to the max or not.
With Azure, today, you run your application in the cloud and no longer worry about scalability. Microsoft has recently announced the integration of System Center with Azure, which enables you to scale your application when more users are accessing it without even having to go to the Azure consoles; instead, you trigger growth and scaling from within Operations Manager 2007 R2.”
Gabe: Taking you back to Hyper-V, tell me about the new Hyper-V R2 SP1 feature called Dynamic Memory. What I’ve read about it is that Hyper-V will now be able to dynamically assign more RAM to a VM when needed, and take RAM away, by clearing pages, when it is no longer needed (only for Windows 2008 guests).
Matt: “Fundamentally, Dynamic Memory is a way of optimizing the use of memory within the virtual machines. It will work in tandem with the guest OS, rather than being unaware of it. We’ve announced support for a large number of Microsoft operating systems, both client and server, that will integrate with Dynamic Memory. As the admin, you set a Startup RAM value and a Maximum, and the VM’s RAM will fluctuate between those values in a way that’s optimal for the guest OS. It will also allow you to specify priorities and buffers to ensure your VMs get what they need when they need it. What Dynamic Memory won’t let you do, in contrast to memory overcommit, is use more RAM than physically exists within the host. This way, we avoid swapping to disk, which is very detrimental to performance.”
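As a rough illustration of the constraint Matt describes (a conceptual sketch, not how Hyper-V is implemented): each VM’s allocation floats between its Startup and Maximum values, but the host never hands out more RAM than it physically has, which is the difference from overcommit.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    startup_mb: int   # RAM the VM is guaranteed at boot
    maximum_mb: int   # ceiling the allocation may grow to
    assigned_mb: int  # current allocation

def grow(vm: VM, demand_mb: int, host_free_mb: int) -> int:
    """Grow a VM's allocation toward its demand, never past its maximum and
    never past what the host physically has free (i.e. no overcommit).
    Returns the host's remaining free RAM."""
    target = min(demand_mb, vm.maximum_mb)
    wanted = max(0, target - vm.assigned_mb)
    granted = min(wanted, host_free_mb)   # hard stop at physical RAM
    vm.assigned_mb += granted
    return host_free_mb - granted

# Hypothetical 16 GB host with two VMs booted at their startup values.
vms = [VM("web", 512, 4096, 512), VM("sql", 2048, 8192, 2048)]
host_free = 16_384 - sum(v.assigned_mb for v in vms)

host_free = grow(vms[1], demand_mb=10_000, host_free_mb=host_free)
print(vms[1].assigned_mb, host_free)  # 8192 7680: capped at the VM's maximum
```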
Gabe: What numbers in RAM savings do you see with beta customers and in lab testing?
Matt: “After the announcement of Dynamic Memory, covering what it is and that it is coming, we’ve taken it back inside. Right now there is no public beta yet, so we have no real-world savings numbers yet.”
Gabe: During the chinwag with Mike Laverick, you mentioned how “memory randomization” (a security feature) available in Vista, Win2K8 and Win7 upsets VMware’s Transparent Page Sharing. Basically, memory is moved around for security so rapidly that the benefits of TPS are reduced.
Matt: “Yes, it can do this. The impact of Address Space Layout Randomization (ASLR) on TPS is actually rather low when you compare it to other features, for example SuperFetch. If you’re not familiar with SuperFetch, it is all about pre-loading and pre-caching certain elements of the OS to give the end user a quicker, more responsive experience. When SuperFetch is working, it effectively reduces or practically eliminates the number of zeroed memory pages and therefore reduces the effectiveness of TPS.”
Gabe: Actually, TPS doesn’t work with only empty pages; it is kind of de-duping memory pages. When SuperFetch fills all empty pages, TPS can still de-dupe the memory SuperFetch is using.
Matt: “If you run TPS on an XP-based set of virtual desktops, you will see quite a lot of memory reclaimed by TPS, compared to the amount reclaimed with Windows 7 desktops. So it must be something SuperFetch does with the RAM that stops it being de-duped. I don’t know the exact technical details there, but Jeff’s blog explains how this works with page sharing, empty pages, etc.”
After the call: I’m thinking there might be a mix-up of technical terms here; maybe Matt was thinking of ballooning, because with ballooning, empty pages that are no longer needed by the OS are returned to ESX through the VMware Tools.
E-mail reply by Matt on this: “Gabrie – No confusion here! TPS can be impacted by ASLR and SuperFetch, as both of these capabilities can reduce (slightly!) the ability of TPS to ‘match’ pages, and thus de-dupe them. I thought the Ballooning on VMware was more to do with Memory Overcommit, and its ability to free up that RAM for another VM to use.”
Gabe: Can the ASLR functionality be turned off via the registry for virtual desktops?
Matt: “It is enabled by default, and in Project VRC they used a registry key to disable ASLR and showed a slight performance difference, but you should really question whether turning off a security feature for such a small performance gain is the right thing to do.
The biggest thing that impacts the whole memory side of things is large memory pages, which Hyper-V supports by default. The thing with large pages, when you’re trying to de-dupe memory with TPS, is that TPS looks for pages that are identical. SuperFetch and ASLR will have an impact on that, but put those to one side and think about matching a 4K memory page versus a 2MB memory page: the larger page is much harder to match….”
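For reference, the switch Project VRC used is a system-wide registry value. A minimal sketch of toggling it from Python follows; it assumes the documented MoveImages value under the Memory Management key, needs administrative rights and a reboot, and disabling ASLR is something you would only do for testing:

```python
import winreg  # Windows-only module from the standard library

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

def set_aslr(enabled: bool) -> None:
    """Toggle system-wide ASLR on Vista/Win7-era Windows via MoveImages.
    0 stops image relocation (ASLR off); 0xFFFFFFFF forces relocation.
    Deleting the value restores the default behaviour."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "MoveImages", 0, winreg.REG_DWORD,
                          0xFFFFFFFF if enabled else 0)

# set_aslr(False)  # for benchmarking only; a reboot is required to take effect
```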
After the call: During the call I interrupted Matt to ask whether that was a correct statement, since I thought vSphere was able to break large pages into small pages and still reach a high level of TPS. After the call I did some reading and got the facts straight:
- TPS has nothing to do with empty pages. It looks for identical 4K memory pages in the host’s memory and keeps only one copy of each, using an index to track them (see the sketch after this list). This works for empty pages as well as pages with data in them.
- SuperFetch has no positive or negative influence on this, since it is just memory to ESX.
- ASLR moving memory pages around could put a strain on the TPS mechanism by forcing it to update its index more often, but Project VRC showed that the performance difference is negligible. And since the page itself doesn’t change, only its location, it does not influence the amount of memory saved by TPS.
- vSphere is able to break 2MB pages into 4K pages and then apply TPS to them, but it will not do this by default. With large pages in use, vSphere will not start breaking them into 4K pages, and sharing those, until there is memory pressure on the host.
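As mentioned in the first bullet, TPS is essentially content-based de-duplication of 4K pages. A minimal sketch of the idea (hypothetical code, not VMware’s implementation, which also does a byte-for-byte compare after a hash match and backs shared pages copy-on-write):

```python
import hashlib

PAGE_SIZE = 4096  # TPS matches at 4K granularity; 2MB large pages rarely match

def share_pages(memory: bytes) -> dict[int, int]:
    """Map every page index to the index of a canonical page with identical
    content. Pages saved = total pages - unique canonical pages."""
    seen: dict[bytes, int] = {}   # content hash -> canonical page index
    mapping: dict[int, int] = {}
    for offset in range(0, len(memory), PAGE_SIZE):
        page = memory[offset:offset + PAGE_SIZE]
        digest = hashlib.sha256(page).digest()
        index = offset // PAGE_SIZE
        mapping[index] = seen.setdefault(digest, index)  # share or register
    return mapping

# Example: four zero-filled pages plus four identical non-zero pages.
memory = bytes(PAGE_SIZE) * 4 + bytes(range(256)) * 16 * 4
result = share_pages(memory)
print(len(result), "pages,", len(set(result.values())), "kept")  # 8 pages, 2 kept
```

Note that randomizing where a page lives (ASLR) does not change its content, so it does not change how many pages can be matched this way, which lines up with the Project VRC finding above.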
Gabe: Looking at how long this call has already taken, I just want to ask one last question: what’s next? Hyper-V R3 or Windows 2011?
Matt: “Good question. I’d love to know, and I’d love to be able to tell you, but unfortunately anything beyond SP1 is only known inside the product group. Obviously the product team will already be working on the next version of Server, which I’m sure will be a major release, but other than that I don’t know. If you look at the MMS 2010 keynote, you will see three core demos in there.
One of them is the management integration between System Center Operations Manager and technologies like Opalis Integration Server, which enables, among other things, run-book automation across different technologies. It’s very clever indeed. Another demo is a long-distance Live Migration, through integration with HP Cluster Extensions (CLX) on top of a multi-site cluster. The big demo that I thought was really cool was System Center Virtual Machine Manager vNext.
So like I said before, Hyper-V, like ESX, is a fantastic platform, but the coolest bits are enabled by System Center, and not just one System Center component. When you bring System Center together as a suite, it provides the greatest value, but also a depth and granularity of management that’s hard to match.”
Gabe: Thank you very much for this chat Matt.
Matt’s blog: VirtualBoy and VirtualBoyTV
And of course Matt is on Twitter too: http://twitter.com/mattmcspirit
After the call: I received a number of useful links from Matt regarding MAP, disk I/O, Dynamic Memory, Cluster Shared Volumes (CSV), VHD performance and Azure. You can find these below.
MAP
http://technet.microsoft.com/en-us/solutionaccelerators/dd537566.aspx (MAP Toolkit)
I/O
http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
Dynamic Memory
http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx
CSV
http://blogs.netapp.com/msenviro/2009/10/hyper-v-storage-provisioning-part-one.html
http://blogs.netapp.com/msenviro/2009/10/hyper-v-storage-provisioning-part-two.html
http://blogs.netapp.com/msenviro/2009/10/hyper-v-storage-provisioning-part-three.html
VHD Performance Comparisons
Azure