StorMagic SvSAN with High Availability mirroring

Recently I had the opportunity to review StorMagic’s SvSAN software. The current release available from their website is only compatible with VMware ESX 3.5, but since my home lab is already running VMware vSphere, I received the latest beta (4.1.913), which does work with vSphere. In this review I will not go into absolute performance, since my home lab will probably hit its limits sooner than the reviewed software does.

What StorMagic does

With StorMagic it is possible to use the local storage (your internal hard disks) of your VMware vSphere hosts as shared iSCSI storage. Other hosts can connect to the targets as if they were on a normal iSCSI NAS or SAN. With the free StorMagic version, you can have up to 2 TB of storage managed by StorMagic; licenses for 4 TB, 8 TB and unlimited storage are also available. The advantage of sharing your local storage is that your vSphere environment can now use extra features like VMware High Availability, VMware DRS and VMware VMotion, and you don’t have to spend money on a separate iSCSI NAS or SAN.

A disadvantage of having your shared storage on one vSphere host is that this host now becomes the most important part of your Virtual Infrastructure: downtime of the host is downtime of your storage, and therefore downtime for all the VMs on this storage. Features like VMotion and HA are meant to reduce downtime in your environment, and it wouldn’t be wise to increase your risk by putting all your VMs on shared storage that is kept up by just one host.

To reduce this risk, StorMagic offers an extra license called StorMagic SvSAN High Availability. With this license the shared storage can be mirrored to a second host. Should one of the hosts fail, the other host will immediately take over and the VMs running on the shared storage will keep on running. Well, most of them: the VMs that were running in the memory of the failing host will of course crash, but VMware HA will be able to get them up and running again from the shared storage.

How a virtual infrastructure can be built with StorMagic

The following figures show how local storage is used to create shared iSCSI storage. In the first figure you see a VMware vSphere host with four local disks attached. The first local disk is used for the installation of vSphere, which by default (new in vSphere) creates a local VMFS volume on which the COS (Service Console) is installed. This local VMFS is also used to install the StorMagic SvSAN appliance, which needs a 256 MB virtual disk and a larger 20 GB virtual disk. The remaining three disks will be used to create a RAID 5 volume that serves as shared storage in StorMagic. There are two ways to build this RAID 5 volume: if the host has a RAID 5 capable controller, the disks can be joined into a RAID 5 volume at hardware level and presented as one disk to StorMagic, or StorMagic can group the three local disks into a RAID 5 volume itself and do the RAID calculations in software.

StorMagic SvSAN local disks
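
Whether the RAID 5 volume is built by the controller or by StorMagic, the capacity math is the same: one disk’s worth of space goes to parity. Here is a quick sketch comparing that with a plain JBOD grouping, which is the other option SvSAN offers further on; the three 500 GB disk sizes are made up, not taken from my lab.

```python
# Rough capacity comparison for the three data disks from the figure:
# RAID 5 (hardware or software) versus a plain JBOD pool.
# The disk sizes below are hypothetical; adjust to your own hardware.

def raid5_usable(disk_sizes_gb):
    """RAID 5 usable capacity: (n - 1) * smallest disk, with n >= 3 disks."""
    if len(disk_sizes_gb) < 3:
        raise ValueError("RAID 5 needs at least three disks")
    return (len(disk_sizes_gb) - 1) * min(disk_sizes_gb)

def jbod_usable(disk_sizes_gb):
    """JBOD usable capacity: simply the sum of all disks (no redundancy)."""
    return sum(disk_sizes_gb)

disks = [500, 500, 500]  # three 500 GB local disks (assumption)
print(f"RAID 5 : {raid5_usable(disks)} GB usable, survives one disk failure")
print(f"JBOD   : {jbod_usable(disks)} GB usable, no redundancy")
```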

The second figure shows how the other VMware vSphere hosts in the cluster connect to the shared storage. Not shown in this figure is that the vSphere host holding the shared storage can, and in most cases will, connect to the shared storage in the same way the other hosts do.

StorMagic SvSAN iSCSI target

To make this shared storage redundant, a second host runs StorMagic SvSAN and mirrors the shared storage. All vSphere hosts using the shared storage get a second iSCSI target connection, which points to the mirror. Remember, an extra license is needed for StorMagic HA (mirroring).

StorMagic SvSAN iSCSI mirrored target

In the virtual infrastructure client, you can clearly see how a vSphere host connects to the iSCSI target over two paths: one to the shared storage on the first host, one to the second host. In case of a failure, the other path will be chosen. In my lab environment the SvSAN iSCSI targets have IP addresses 192.168.1.242 and 192.168.1.243.

StorMagic SvSAN dual iSCSI target paths
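
As a simple illustration of the dual paths, here is a small sketch that checks which of the two portals from my lab (192.168.1.242 and 192.168.1.243) still answers on the standard iSCSI port and picks the first one that does. This is only a rough model of what path failover comes down to, not StorMagic or VMware code.

```python
import socket

# The two SvSAN iSCSI portals from my lab; TCP 3260 is the standard iSCSI port.
PORTALS = ["192.168.1.242", "192.168.1.243"]
ISCSI_PORT = 3260

def portal_alive(ip, timeout=2.0):
    """Return True if the iSCSI portal accepts a TCP connection."""
    try:
        with socket.create_connection((ip, ISCSI_PORT), timeout=timeout):
            return True
    except OSError:
        return False

def pick_active_path(portals):
    """Pick the first reachable portal, mimicking a failover to the mirror."""
    for ip in portals:
        if portal_alive(ip):
            return ip
    return None

active = pick_active_path(PORTALS)
print(f"Active iSCSI path: {active or 'no portal reachable'}")
```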

Installation of StorMagic

To keep it short: it’s a breeze. On the host that will run StorMagic, an OVF is imported, you are asked for basic settings like IP address and appliance name, and the appliance is up and running. To create shared storage, an iSCSI target has to be created. Each iSCSI target is built from a storage pool and can be the size of that pool or smaller; you could have a 100 GB pool and create a 25 GB target from it, or even multiple targets out of that pool. A storage pool is made out of devices, which are essentially the local disks. These can be virtual disks (VMDKs) or local physical disks.
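
The device → pool → target hierarchy is easy to picture with a small sketch; the pool and target names and sizes below are made up for illustration, they are not SvSAN defaults.

```python
# Toy model of the SvSAN storage hierarchy: devices (local disks or VMDKs)
# are grouped into a pool, and one or more iSCSI targets are carved from it.
# Names and sizes are illustrative only.

class Pool:
    def __init__(self, name, device_sizes_gb):
        self.name = name
        self.capacity_gb = sum(device_sizes_gb)   # JBOD-style pool for simplicity
        self.targets = {}                          # target name -> size in GB

    def free_gb(self):
        return self.capacity_gb - sum(self.targets.values())

    def create_target(self, name, size_gb):
        if size_gb > self.free_gb():
            raise ValueError(f"only {self.free_gb()} GB free in pool {self.name}")
        self.targets[name] = size_gb

pool = Pool("pool0", device_sizes_gb=[100])        # one 100 GB device
pool.create_target("target-vm01", 25)              # a 25 GB target...
pool.create_target("target-vm02", 50)              # ...and another from the same pool
print(f"{pool.name}: {pool.free_gb()} GB still unallocated")
```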

When using virtual disks (VMDKs), do keep in mind that this adds another virtualization layer and will lead to some performance loss. It is therefore advised not to use disks that are already in use by vSphere as VMFS datastores, but instead to use empty disks that will only be used as RDMs for the SvSAN.

Essentially, creating an iSCSI target is done in three steps. First, add the available disks into a device; this can be a RAID 5 config or a simple JBOD (Just a Bunch Of Disks) config. Then create a pool from these devices, and as the last step create a target out of a pool. When creating a mirrored volume, a special wizard is available to combine a pool from the first host with a pool from the second host into a mirrored iSCSI target. This wizard creates the master iSCSI target on one host and the mirror on the second host, and its best feature is that all the iSCSI configuration settings for each vSphere host are done by the wizard. It creates the connections to the master and the mirrored volume on all hosts in the cluster, not only the hosts that run the SvSAN appliance.
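
Here is a sketch of what the end result of those three steps and the mirror wizard looks like. The host names and portal addresses are the ones from my lab; the IQN is invented for illustration, and this is only a conceptual model, not the wizard’s actual output.

```python
# Sketch of the mirror wizard's end result: the wizard pairs a pool on each
# SvSAN host into one mirrored target and configures the iSCSI initiator on
# every host in the cluster. The IQN below is invented for illustration.

def build_mirrored_target(name, primary_portal, mirror_portal, cluster_hosts):
    target_iqn = f"iqn.2009-10.com.example.svsan:{name}"   # hypothetical IQN
    # Every vSphere host gets two static iSCSI paths: master and mirror.
    return {
        host: [(target_iqn, primary_portal), (target_iqn, mirror_portal)]
        for host in cluster_hosts
    }

# Step 1: disks -> device, step 2: devices -> pool (see the earlier sketch),
# step 3: pools on both SvSAN hosts -> one mirrored target.
config = build_mirrored_target(
    name="mirror01",
    primary_portal="192.168.1.242",       # SvSAN on the first host
    mirror_portal="192.168.1.243",        # SvSAN on the second host
    cluster_hosts=["esx01", "esx02", "esx03"],
)

for host, paths in config.items():
    print(host, "->", paths)
```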

Integration with vCenter

An important part of StorMagic SvSAN is the integration with vCenter. First there is the StorMagic plug-in, which integrates very nicely into your virtual infrastructure client by adding an extra tab called ‘StorMagic’ at host level, right next to ‘Storage Views’. Clicking on it gives you all the tools to manage your SvSAN, and a nice extra is that it is actually just a web page inside the virtual infrastructure client, which of course can also be viewed with your favorite browser. I have to admit that although the integration is very nice, I often used the web browser, since this is much quicker than the virtual infrastructure client when you have to switch hosts often. Maybe this will change once Windows 7 is officially supported for the virtual infrastructure client and I don’t have to use any hacks to get it running. That is definitely not a StorMagic issue.

Secondly, there is the ‘Neutral host’ service that is installed on the vCenter Server. This service is used to determine which SvSAN is still alive and to prevent a split-brain scenario in case of network failure. Running it on the vCenter Server also creates a little gotcha: when vCenter is running as a VM using the shared storage offered by SvSAN, it can’t play the role of neutral host. StorMagic’s advice is not to run vCenter in a VM, but to me that is a big no-no. In my opinion vCenter should run in a VM on highly available storage unless there are really big issues, and a simple service that has to run on the vCenter Server is not a good enough reason for me to deviate from that. I discussed this with a StorMagic support engineer and learned that they feel the same and are already looking at a different way of doing this, though no guarantees can be given for the final release.
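
StorMagic doesn’t document the exact protocol, but conceptually the neutral host acts as a tie-breaker, something like the sketch below: when the mirror partners lose sight of each other, only the side the witness can still reach keeps serving the target, so both sides can never be master at once.

```python
# Conceptual sketch of split-brain prevention with a neutral host (witness).
# This is NOT StorMagic's actual protocol, just the general idea.

def elect_master(node_a_sees_b, witness_sees_a, witness_sees_b):
    if node_a_sees_b:
        return "A"                      # normal operation: A stays master
    # Mirror link is down: ask the witness which side is still alive.
    if witness_sees_a and not witness_sees_b:
        return "A"
    if witness_sees_b and not witness_sees_a:
        return "B"
    if witness_sees_a and witness_sees_b:
        return "A"                      # only the mirror link failed: keep A
    return None                         # nobody reachable: stop serving, no split brain

# Host running the master SvSAN dies: the witness only sees B, so B takes over.
print(elect_master(node_a_sees_b=False, witness_sees_a=False, witness_sees_b=True))
```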

Failing over to the mirror

When working with techniques like virtual SANs, an administrator really has to know his stuff and understand which features kick in at what time. For example, in my test lab I have three vSphere hosts called esx01, esx02 and esx03. On hosts esx02 and esx03 the SvSAN appliance is running in a mirrored configuration. Each host runs one VM using the shared storage offered by esx02 (mirrored to esx03). What will happen when I pull the power from host esx02?

With host esx02 gone, there is no SvSAN running on it anymore, so the master of the mirror fails, but the SvSAN on esx03 takes over and keeps the shared storage available. Since host esx02 is down, the VM running on esx02 will also be down, but VMware’s HA feature will restart that VM on host esx01 or esx03, using the shared storage on esx03. The VMs on esx01 and esx03 will keep on running, provided the failover to the mirror is handled fast enough by the StorMagic SvSAN. To show how this failover works, I posted a video on YouTube in which I show the configuration and then demo how the VM on esx01 keeps on running, although there is a ‘freeze’ of almost a minute. It feels a bit long, but the Windows event logs contain no mention of Windows losing its disks, and the application I had running inside the VM didn’t report any errors either.
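
In code, the scenario looks something like this; a toy model of my lab, not how HA or SvSAN actually decide placement.

```python
# Walk through the test scenario: esx02 holds the master mirror, esx03 the
# copy, and each host runs one VM on the shared storage. Pulling the power
# on esx02 kills its VM (HA restarts it elsewhere), while the VMs on esx01
# and esx03 keep running against the surviving mirror on esx03.

hosts = {"esx01": ["vm1"], "esx02": ["vm2"], "esx03": ["vm3"]}
mirror = {"master": "esx02", "mirror": "esx03"}

def fail_host(failed, hosts, mirror):
    survivors = {h: list(vms) for h, vms in hosts.items() if h != failed}
    # Storage side: the surviving mirror node becomes the active target.
    active_storage = mirror["mirror"] if failed == mirror["master"] else mirror["master"]
    # VM side: HA restarts the failed host's VMs on a surviving host.
    for vm in hosts[failed]:
        restart_host = next(iter(survivors))
        survivors[restart_host].append(vm + " (restarted by HA)")
    return active_storage, survivors

storage, placement = fail_host("esx02", hosts, mirror)
print("active storage node:", storage)
print("VM placement after failover:", placement)
```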

http://www.youtube.com/watch?v=01XLt08hGVw

Features not mentioned

In this review I couldn’t cover all features, not only because it would make the review way too long but also because I don’t have all the equipment. It is, for example, possible to connect the SvSAN to the physical SANs StorMagic sells (the SM Series) and create mirrors between the two products, or manage storage on the physical SAN from within the SvSAN. For a more extensive list of all the features, visit their website at www.StorMagic.com and get the Product Brief.

Cost and licensing

StorMagic SvSAN comes in 4 editions:

Version          Max TB managed   Price                   Support
SvSAN Starter    2 TB             free (with promo key)   $179 / year
SvSAN            4 TB             $1,495                  $249 / year
SvSAN            8 TB             $2,995                  $449 / year
SvSAN            unlimited        $4,795                  $699 / year

To make your 2 TB SvSAN highly available, the HA add-on costs $995 per host. HA is included in the 4 TB, 8 TB and unlimited licenses.

Conclusion

Working with StorMagic’s SvSAN is very simple and straightforward. When I first laid my hands on it, I had some trouble grasping the concept, but with excellent help from a StorMagic support engineer I quickly knew what I wanted and how to get it done. The software is robust and didn’t fail on me, even after really stressing it by pulling the plug many times, letting mirrors get out of sync and re-sync, and breaking them again. It all kept running without issues. The integration of the GUI into vCenter makes it very easy to use and leaves very few things to wish for. Everything you would want to do is available from the interface, and I saw new features arrive with every beta release. I like StorMagic’s SvSAN very much and will certainly recommend it.

Is StorMagic suited for your organization, and is it cost effective to use StorMagic SvSAN in your environment? That is something I can’t answer for you; you will have to find out yourself. I suggest you download the free 2 TB license and give it a try. Take Excel and make some calculations to find out whether using local disks plus StorMagic is cheaper than buying a small NAS or SAN that offers the same features, of which the mirroring is a very strong selling point.
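
If Excel isn’t your thing, a few lines of Python will do as a starting point. The SvSAN HA price below comes from the table above; the disk and SAN prices are placeholders you should replace with real quotes.

```python
# A starting point for the "local disks + SvSAN versus small iSCSI SAN"
# calculation. The SvSAN HA price comes from the pricing table in this
# article; the disk and entry-level SAN prices are placeholders.

svsan_ha_per_host = 995          # HA add-on for the free 2 TB edition, per host
svsan_hosts = 2                  # two hosts hold the mirrored storage
local_disk_price = 150           # per extra local disk (placeholder)
local_disks_needed = 6           # e.g. three per host for RAID 5 (placeholder)
entry_san_price = 8000           # redundant entry-level iSCSI SAN (placeholder)

svsan_total = svsan_ha_per_host * svsan_hosts + local_disk_price * local_disks_needed
print(f"Local disks + SvSAN HA: ${svsan_total}")
print(f"Dedicated iSCSI SAN   : ${entry_san_price}")
print("Cheaper option:", "SvSAN" if svsan_total < entry_san_price else "dedicated SAN")
```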

12 thoughts on “StorMagic SvSAN with High Availability mirroring”

  1. Hi Gabe,

    I'm searching for a software iSCSI solution and I've read this interesting review. As I understand it, SvSAN supports HA, vMotion and DRS. You mentioned VMs running on a host that crashes will be restarted.
    Although this is true for HA with FT, as I understand it with vMotion this shouldn't happen?

    With regards,

    Jaap
    Netherlands

    PS.
    I'm running a test lab on an Intel i7 host. Is it correct that FT/HA can't be tested?

  2. @ivobeerens They've released a version that supports vSphere 4.x

    @Jaap – if you read the release notes, you'll see that FT isn't supported.

    I'm implementing this for a client. In production now. Aside from a macro issue with supporting iSCSI targets on top of a VMFS datastore (which all software-based storage solutions have), for which VMware is lagging on getting a fix out, we haven't seen any huge issues yet (crossing fingers).

  3. I have been evaluating SvSAN, and once you get to know the terminology, such as pools, devices and plexes, it really is easy to master. I found that mapping RDMs as the guide suggests does not work with vSphere, at least not with an HP E200 or P400 controller. The RDM is there for performance reasons, and to be honest I don't think performance will be a huge consideration for an SMB. The disks and controllers of a typical entry-level system won't really achieve the performance benefits of an RDM, and as long as you don't put a heavy-duty transactional SQL database on it, the limits won't be reached.

    I have tested my E200 with 128 MB battery-backed cache and a 15K SAS drive (non-RAID) and can achieve about 21 MB/s with sequential writes, as opposed to a standard entry-level system that comes with a 64 MB cache without any write cache enabled, which achieves only 4-5 MB/s. I also tried the P400 with the standard 256 MB cache, without the 512 MB battery-backed cache, and could still only achieve 6-7 MB/s. Most people don't think to add the cache upgrade, but it has huge performance implications and should always be on the PO with the reseller.

    The good news is that because SvSAN allows VMDKs, the drives benefit not only from the portability aspect, but you can also have just a single RAID array in your system and make the most efficient use of all your storage. The SvSAN would normally sit on a RAID 1+0 array with the service console on typical 72 GB drives, using just 5 or 10% of the space, but with VMDKs you can use the other 90% for the SvSAN array.

    I also like the feature whereby the blocks have to be written to both sides of the mirror before the OS gets the acknowledgement that the file has been written to disk; this makes it a truly synchronous storage system. I would definitely use the SvSAN in production and would probably use some kind of archiving/backup strategy such as Veeam Backup or vRanger in the background to complete the solution.
