I bet many of you who are virtualizing Citrix XenApp have read the Project VRC whitepapers and want to implement their best practices. Especially in small environments, however, creating the design can be a headache. My biggest headache was, and still is, how to guarantee no CPU overcommit on the pCPUs that XenApp is running on.
Chapter 6.2 “vCPU overcommit & dedicated hardware” reads:
“Another important best practice, which is valid for all tested hypervisors, is not to overcommit the total amount of vCPU’s in relationship to the available logical processors on the system. For example, on a system with eight logical processors, no more than eight vCPU’s should be assigned in total to all VM’s running on this host.
This is only important when the primary goal is to maximize user density. Various tests in phase 1 of Project VRC have proven that overcommitting vCPU’s negatively affects performance. This is not completely surprising since multiple VM’s now have to share individual logical processors, which will create additional overhead.
As a result, it is also recommended to use dedicated server hardware for Terminal Server and XenApp workloads, so it is easier to control VM configuration and assign vCPU’s in relationship to the available processors.”
Implementing this in a large-scale environment shouldn’t be too difficult: you create a separate cluster, label it “XenApp” and add some hosts. In a smaller environment you might not have these dedicated resources. How do you prevent CPU overcommit when mixing XenApp and normal VMs? I have to take HA spare capacity into the equation, and for the other VMs I probably want DRS enabled.
Resource pools? Hmm, not really. The shares defined in a resource pool only kick in when there is CPU contention, but they won’t prevent multiple VMs from running on the same logical CPU. Nor will the limit you can enforce in a resource pool set aside a core for a VM. If I let DRS move VMs across multiple hosts, I can’t keep the XenApp VMs bound to one host. Excluding the XenApp VMs from DRS won’t prevent other VMs from moving to the same hosts, unless you create anti-affinity rules; but that would mean so many rules that DRS won’t be happy about it, and managing them is a pain too.
Setting CPU affinity? That has never been anybody’s best practice, I hope.
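In the meantime you can at least detect the problem. Here is a minimal sketch using the pyVmomi SDK (the vCenter hostname and credentials are placeholders, and error handling is left out) that sums the powered-on vCPUs per host and flags any host that exceeds its logical processor count:

# Minimal sketch: flag hosts where powered-on vCPUs exceed logical CPUs.
# Hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    logical = host.hardware.cpuInfo.numCpuThreads  # logical CPUs, HT included
    vcpus = sum(vm.config.hardware.numCPU for vm in host.vm
                if vm.runtime.powerState == 'poweredOn')
    status = 'OVERCOMMITTED' if vcpus > logical else 'ok'
    print('%s: %d vCPU / %d logical CPU -> %s' % (host.name, vcpus, logical, status))

view.Destroy()
Disconnect(si)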
So, what are your ideas on this? Let me know in the comment section.
“Resource pools? Hmm, not really. Those resource pools only kick in when there is CPU contention”
Resource Pools don’t only kick in when there is contention; you are talking about shares. Resource Pools are about Reservations and Limits as well (see the sketch at the end of this comment).
Indeed, I wouldn’t use CPU Affinity either, because that removes all flexibility.
I think what VRC is talking about is having 1 vCPU per logical processor in your cluster. So a host with 8 cores and HT enabled (16 logical processors) would run 16 vCPUs.
I would enable DRS though, just in case someone is hogging your CPU resources.
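To illustrate the reservation point: a minimal pyVmomi sketch (reusing si and content from the sketch in the post; the cluster name, pool name and numbers are made up) that creates a resource pool with a hard CPU reservation, which holds whether or not there is contention:

# Minimal sketch: a resource pool with a hard (non-expandable) reservation.
from pyVmomi import vim

def alloc(reservation, limit=-1):
    # Reservation is guaranteed up front; limit -1 means unlimited.
    a = vim.ResourceAllocationInfo()
    a.reservation = reservation
    a.expandableReservation = False  # hard guarantee, no borrowing from the parent
    a.limit = limit
    a.shares = vim.SharesInfo(level='normal', shares=4000)
    return a

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Production')  # made-up cluster name
view.Destroy()

spec = vim.ResourceConfigSpec()
spec.cpuAllocation = alloc(16000)      # MHz, e.g. 8 cores x 2 GHz
spec.memoryAllocation = alloc(32768)   # MB
pool = cluster.resourcePool.CreateResourcePool(name='XenApp', spec=spec)

Note this only carves out capacity; it still doesn’t pin which logical CPUs the VMs land on, which is exactly the original problem.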
In my opinion the key sentence is “This is only important when the primary goal is to maximize user density”. In smaller environments (say 50 to 200 users) I find that this usually is not an issue.
I usually work with environments that are more or less oversized because of HA capacity. In these clusters I know I can give a “soft” resource guarantee to the XenApp/TS servers without actually making it a “hard” guarantee.
In growing infrastructures this puts more emphasis on resource management, but for me that is easier to implement and explain to admins (through alarms, checklists, etc.) than stuff like DRS exclusions, which can be difficult to understand for admins who are relatively new to virtualization and/or are not going to administer the vSphere infrastructure on a daily basis simply due to manpower constraints (small business -> only a couple of admins who handle all support, from service desk to Exchange).
If you find a solution let us know :)
That’s the kind of granularity you can’t find yet in vSphere/vCenter. DRS would be the best place for it, but it lacks support for complex rules…
The Enterprise edition of Citrix Presentation Server or XenApp can help you tame CPU usage on a per-user basis, and as usual it only kicks in when contention shows up (which is not bad at all).
Cheers,
Didier
Sub-clusters in 4.1 might help: you could set soft preferences to keep your other VMs off your XenApp hosts, and then they would only intrude if there were no other HA slots available after losing a host.
“Excluding the XenApp VMs from DRS won’t prevent other VMs from moving to the same hosts, unless you create anti-affinity rules; but that would mean so many rules that DRS won’t be happy about it, and managing them is a pain too.”
With vSphere 4.1 out now you can use VM-Host affinity rules.
Kris
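To make that concrete: a minimal pyVmomi sketch (reusing si and content from the sketch in the post; all cluster, group, rule, host and VM names are made up) that builds a DRS VM group and host group and ties them together with a soft “should run on” VM-Host rule:

# Minimal sketch: a soft ("should run on") VM-Host affinity rule in vSphere 4.1+.
from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Production')  # made-up name
view.Destroy()

# Made-up naming scheme: XenApp VMs start with 'XA-'; VMs taken from the
# cluster's root resource pool.
xa_vms = [vm for vm in cluster.resourcePool.vm if vm.name.startswith('XA-')]
xa_hosts = [h for h in cluster.host
            if h.name in ('esx01.example.com', 'esx02.example.com')]

vm_group = vim.cluster.VmGroup(name='XenApp-VMs', vm=xa_vms)
host_group = vim.cluster.HostGroup(name='XenApp-Hosts', host=xa_hosts)
rule = vim.cluster.VmHostRuleInfo(
    name='XenApp-stays-home', enabled=True,
    mandatory=False,                  # False = "should run", a soft preference
    vmGroupName='XenApp-VMs', affineHostGroupName='XenApp-Hosts')
# To keep the *other* VMs off these hosts instead, put them in their own VM
# group and use antiAffineHostGroupName on the rule.

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[vim.cluster.GroupSpec(operation='add', info=vm_group),
               vim.cluster.GroupSpec(operation='add', info=host_group)],
    rulesSpec=[vim.cluster.RuleSpec(operation='add', info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)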
Hi – I am wondering why you wouldn’t set a reservation on the VMs that equals the equivalent of pCore clock speed × #vCPUs. This would ensure the VM is guaranteed access to resources: other VMs moved to a host running these workloads can’t impact them, which ensures a consistent user experience. DRS still functions, HA will need to be considered, and no CPU affinity is required.
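That works out to core clock × vCPU count, e.g. 4 vCPUs on 2.66 GHz cores would give a 4 × 2660 = 10640 MHz reservation. A minimal pyVmomi sketch (reusing si and content from the sketch in the post; the VM name is made up) that derives and sets it:

# Minimal sketch: reserve pCore clock x #vCPUs for a XenApp VM.
from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'XA-01')  # made-up VM name
view.Destroy()

core_mhz = vm.runtime.host.hardware.cpuInfo.hz // 1000000  # per-core clock, MHz
reservation = vm.config.hardware.numCPU * core_mhz          # pCore clock x #vCPUs

spec = vim.vm.ConfigSpec(
    cpuAllocation=vim.ResourceAllocationInfo(reservation=reservation))
vm.ReconfigVM_Task(spec=spec)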