[Gestalt] Vblock, great product, just not for you

On Friday the Tech Field Day delegates of the Gestalt IT event paid a visit to Cisco, where they were treated to a very good session on the VCE Vblock. The session was brought to us by “the other” Scott Lowe and Ed Saipetch. Apart from giving a very good presentation, they also showed they could fight like lions against the comments of the delegates, resulting in the best session of this Tech Field Day – Boston 2010. Although the Vblock is a great piece of machinery, the feeling amongst the delegates was almost unanimous: the Vblock is very hard to sell and offers little extra value over a self-built configuration using the same components. Writing this blog post took me quite some time, reading several guides to better understand the Vblock, and during this investigation I changed my mind several times on whether the Vblock is a good or bad idea. I hope the following helps you make up your mind.

 

Vblock, what is it?

Let me explain a bit more about what the Vblock really is; actually, it’s fairly simple. The Vblock is a complete virtual infrastructure package built on the EMC Clariion CX4 series or the EMC Symmetrix V-Max for the storage layer, connected through the Cisco Nexus 1000V and Cisco Multilayer Director Switches (MDS) to a Cisco Unified Computing System (UCS) blade system running VMware vSphere 4. By using a fixed combination of components, VCE (a consortium of VMware, Cisco and EMC) is able to guarantee performance, capacity and availability SLAs for a known number of virtual machines.

(The Cisco Nexus 7000 in the diagram is not a Vblock component. EMC Ionix is optional and available at additional cost.)
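For quick reference, here is that stack summarized in a simple structure. This is my own summary of the description above, not an official bill of materials:

```python
# My own summary of the Vblock stack described above, not an official spec.
vblock_stack = {
    "compute":    "Cisco UCS blade chassis running VMware vSphere 4",
    "networking": ["Cisco Nexus 1000V", "Cisco MDS multilayer director switches"],
    "storage":    ["EMC Clariion CX4 series", "EMC Symmetrix V-Max"],
    "management": "EMC Ionix (optional, at additional cost)",
}
```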

The unique selling points of a Vblock according to VCE are:

  • Pretested
  • Fully Integrated
  • Ready to Go
  • Ready to Grow

 

Vblock type 1 and Vblock type 2

A Vblock comes in two flavors, type 1 and type 2. Scott Lowe did mention that a type 0 is in the works at this moment, but specs have not been made available yet. When asking my good friend Google, he (or she) told me to expect the smaller type 0 in summer 2010, but that is all unconfirmed info.

A type 1 Vblock will be able to host up to 1,000 VMs and a type 2 Vblock will host up to 2,000 VMs. If you hit the limits of a Vblock, you just extend your Vblock with another Vblock, which can be of any type (1 or 2). All these Vblocks together can be managed as one. When deciding what size of Vblock you need, it is important NOT to think about the RAM, CPU cycles or IOPS needed, but only about the number of VMs you want to run (more on this later), and buy Vblocks accordingly.
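Purely as an illustration of that sizing model, here is a minimal sketch. The capacities come from the figures above; the function and example numbers are mine:

```python
import math

# Illustration only: sizing purely by VM count, as the Vblock model prescribes.
# Capacities taken from the figures above: up to 1000 VMs (type 1), 2000 (type 2).
VBLOCK_CAPACITY = {1: 1000, 2: 2000}

def vblocks_needed(target_vms, vblock_type):
    # Round up: a partially used Vblock is still a whole Vblock to buy.
    return math.ceil(target_vms / VBLOCK_CAPACITY[vblock_type])

print(vblocks_needed(1500, vblock_type=1))  # 2 type 1 Vblocks
print(vblocks_needed(3000, vblock_type=2))  # 2 type 2 Vblocks
```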

 

No upgrades

Now I can see you frowning: “Buy another Vblock if I hit the limits? Can’t I just upgrade a Vblock with more memory, for example?” Well, technically you can; you could add more blades or add more memory to your blades, but then it isn’t a Vblock anymore and you lose your single point of support. You don’t lose all support of course, since each component is still fully supported by EMC, Cisco or VMware, but the VCE consortium just won’t be able to support you anymore. There are only very limited changes you’re allowed to make to the system to stay within the supported configuration.

This wouldn’t be that much of an issue if resources were used to their max, but looking at the Vblock 1 config, the UCS blades are nowhere near their hardware limits. Have a look at the exact specs according to the “Vblock Infrastructure Packages Reference Architecture” guide:

Vblock 1 (minimum UCS configuration: 2 chassis / 16 blades; maximum: 4 chassis / 32 blades; each chassis holds 6 blades with 48 GB RAM plus 2 blades with 96 GB RAM):

  • # of VMs at a 1:4 core-to-VM ratio (1920 MB memory per VM): 512 (minimum) / 1024 (maximum)
  • # of VMs at a 1:16 core-to-VM ratio (480 MB memory per VM): 2048 (minimum) / 4096 (maximum)
  • Total RAM: 480 GB (minimum) / 1920 GB (maximum)

Vblock 2 (minimum UCS configuration: 4 chassis / 32 blades; maximum: 8 chassis / 64 blades; each chassis holds 8 blades with 96 GB RAM):

  • # of VMs at a 1:4 core-to-VM ratio (1920 MB memory per VM): 1024 (minimum) / 2048 (maximum)
  • # of VMs at a 1:16 core-to-VM ratio (480 MB memory per VM): 4096 (minimum) / 8192 (maximum)
  • Total RAM: 3072 GB (minimum) / 7144 GB (maximum)

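As a side note, the VM counts in that table line up exactly with a pure cores-times-ratio calculation, provided you assume 8 cores per B-200 M1 blade (my assumption, the guide doesn't spell it out):

```python
# How the VM counts in the table appear to be derived (my reading): blades x
# cores per blade x VMs per core. Assumes 8 cores per B-200 M1 blade.
CORES_PER_BLADE = 8

def table_vm_count(blades, vms_per_core):
    return blades * CORES_PER_BLADE * vms_per_core

print(table_vm_count(16, 4))   # Vblock 1 minimum at 1:4  -> 512
print(table_vm_count(32, 16))  # Vblock 1 maximum at 1:16 -> 4096
print(table_vm_count(64, 4))   # Vblock 2 maximum at 1:4  -> 2048
```

Notice that RAM never enters this calculation, which is part of what puzzles me below.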
 

According to the specs a UCS B-200 M1 blade can hold 96 GB of RAM, but in a type 1 Vblock each chassis always holds 6 blades filled to only half that maximum (48 GB) plus 2 blades maxed out at 96 GB RAM. Why not max out those first 6 blades as well? If I start with the minimum config of 2 chassis with 8 blades each (6+2), a 1:4 core-to-VM ratio maxes out at 512 VMs (32 VMs per blade). When I go over those 512 VMs, the Vblock principle says I need to add another chassis, which gives me 256 VMs extra. However, with 96 GB blades instead of the 48 GB blades, I could run up to 768 VMs on the first two chassis, but that is no longer a supported configuration.
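Here is the back-of-the-envelope RAM arithmetic behind those numbers. This is my own sketch, not VCE's sizing tool, and it rounds the 1920 MB-per-VM profile up to roughly 2 GB per VM:

```python
# My own rough arithmetic, not VCE's sizing tool: how many VMs fit by RAM
# alone, rounding the 1920 MB-per-VM profile up to roughly 2 GB per VM.
GB_PER_VM = 2

def ram_gb(chassis, blades_48, blades_96):
    # Total RAM in GB for the given number of chassis and blade mix.
    return chassis * (blades_48 * 48 + blades_96 * 96)

supported = ram_gb(2, 6, 2)    # supported minimum: 2 x (6x48 + 2x96) = 960 GB
all_96gb  = ram_gb(2, 0, 8)    # same 2 chassis, all blades at 96 GB  = 1536 GB

print(supported // GB_PER_VM)  # 480 VMs by RAM alone
print(all_96gb // GB_PER_VM)   # 768 VMs: the unsupported figure mentioned above
```

In other words, the 48 GB blades leave the cores slightly starved for RAM, while an all-96 GB configuration would comfortably feed them, yet only the former is supported.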

 

The balance

This is where the balanced design of the Vblock comes into play. According to VCE, the supported configurations guarantee there is always a good balance between CPU, RAM and IOPS. An increase in RAM will enable you to run more VMs, but those VMs will also ask for more CPU cycles and demand more IOPS from your storage system. With a Vblock, each type or combination of types will always keep this balance intact. Sounds good. The Vblock’s fixed configuration will protect you from creating a bottleneck when changing the configuration.

 

The bottleneck

What I don’t understand, though, is where the bottleneck is in a Vblock type 1 that makes it use only 48 GB blades. When starting with 2 chassis, there is plenty of memory that could be added before adding a 3rd chassis. CPU shouldn’t be the problem, since the Vblock type 2 blades are the same B-200 blades, all running 96 GB RAM, and they are able to host more VMs per blade than the Vblock type 1. Would storage be the bottleneck? Actually, I doubt that, since adding a 3rd or 4th chassis would put more VMs on the storage and demand more IOPS from it, which the Vblock can deliver according to the specs. Then why would the balance be gone when adding more memory? I have no answer to that; I can only say that where 4 chassis, each with 6x 48 GB + 2x 96 GB blades, give me 1920 GB RAM, a non-supported config of 3 chassis with 8x 96 GB blades would give me 2304 GB RAM and thus save me buying that 4th chassis.
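Spelled out, that comparison is simple arithmetic (nothing official, just the chassis totals from above):

```python
# The RAM comparison from the paragraph above, spelled out.
def total_ram_gb(chassis, blades_48, blades_96):
    return chassis * (blades_48 * 48 + blades_96 * 96)

supported   = total_ram_gb(4, 6, 2)  # 4 chassis of 6x48 GB + 2x96 GB blades
unsupported = total_ram_gb(3, 0, 8)  # 3 chassis of 8x96 GB blades

print(supported)    # 1920 GB
print(unsupported)  # 2304 GB: more RAM with one chassis fewer
```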

From a VMware point of view, there is the question of how the vCenter cluster design will be laid out: will a two-chassis configuration span one cluster, or will each chassis be a cluster of its own? Both scenarios have their potential design problems. To learn more on this, read Duncan Epping’s “HA Deep Dive” section on HA admission control: http://www.yellow-bricks.com/vmware-high-availability-deepdiv/#HA-admission.
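To illustrate why the cluster layout matters, here is a rough sketch of the trade-off HA admission control introduces. The numbers are my own illustration, assuming the “host failures cluster tolerates” policy set to 1, and are not taken from the Vblock documentation:

```python
# Rough illustration (my own assumption, not from the Vblock docs): with HA
# admission control tolerating one host failure per cluster, part of each
# cluster's capacity is reserved and cannot be used for running VMs.
def usable_fraction(hosts_per_cluster, clusters, host_failures_tolerated=1):
    usable = clusters * (hosts_per_cluster - host_failures_tolerated)
    return usable / (clusters * hosts_per_cluster)

print(usable_fraction(16, 1))  # one cluster spanning two chassis: 0.9375
print(usable_fraction(8, 2))   # one cluster per chassis:          0.875
```

More, smaller clusters reserve proportionally more capacity; one big cluster reserves less but puts all eggs in one HA basket.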

 

When buying a Rolls, you don’t ask for the price

But maybe I’m too focused on the details. A Vblock can hold a lot of VMs, and when buying capacity for that many VMs you don’t care about these details; it’s like buying a Rolls-Royce: if you have to ask for the price, you can’t afford one. I’m convinced that the Vblock, in both type 1 and type 2, is a carefully selected configuration that is able to deliver really great performance, but it is just not for us mortals to buy. The Vblock will be bought by CEOs of big companies during dinner with the sales people from VCE, where all they discuss is how many VMs they want to run. The salesman from VCE then says: “Sure, 15,000 VMs is no problem for us, just sign on the dotted line to order 2 Vblocks type 2. Now, what’s for dessert?”

Disclaimer: My trip, hotel and food during this event were paid for by the sponsors of the event. However, I’m not obliged to blog about it or to write only positive posts.

Links to other Tech Field Day posts on the Vblock: