The first application we tackled after deploying the NetApp system was Exchange 2007, which we had recently deployed in CCR clusters running on VMware ESX. Since each node of a CCR cluster has its own copy of the database, we wanted to put one node from each cluster onto the NetApp, leaving the other nodes on the Clariion. This environment is entirely Fibre Channel with no iSCSI deployed, so the Exchange servers use VMware raw device mappings for the database and log disks. This posed a problem that we didn't discover until later, which I will discuss in a future post about replicating Exchange with NetApp.
Re-Architecting the environment to fit the storage
The first thing we discovered was that neither IBM/NetApp nor EMC would support the same host HBAs being zoned to multiple brands of storage, so we had to split the ESX cluster into two clusters, one on each storage platform. Luckily the Exchange environment was isolated on its own six-node cluster, so it was easy to split everything in half.
Next we learned that, due to NetApp's updated active/active mode with proxy paths in ONTAP 7.3, VMware ESX 3.x selects paths at random when rescanning HBAs and will often pick non-optimized (proxy) paths to the LUNs. This still works but is not ideal: it increases I/O latency, and the Filer periodically sends AutoSupport emails warning of the problem. Installing the NetApp Host Utilities for ESX on the ESX hosts themselves provides a script that assigns persistent paths evenly across the HBAs. The script works as advertised, but as far as I can tell you have to re-run it each time you add a new LUN to the ESX server. It would be much better if it were more automated.
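As a rough illustration of what that persistent-path assignment amounts to (the LUN and path names below are made up, and the real script inspects the actual fabric rather than taking lists as input), it is essentially round-robining LUNs across the optimized paths so no single HBA or controller port carries all the I/O:

```python
# Sketch of round-robin persistent path assignment, the idea behind the
# NetApp host utilities script. All LUN/path names are hypothetical.
from itertools import cycle

def assign_persistent_paths(luns, primary_paths):
    """Spread LUNs evenly across the primary (non-proxy) paths."""
    rr = cycle(primary_paths)
    return {lun: next(rr) for lun in luns}

luns = ["exch_db_lun0", "exch_log_lun0", "exch_db_lun1", "exch_log_lun1"]
primary = ["vmhba1:0:0", "vmhba2:0:0"]  # optimized paths only
assignment = assign_persistent_paths(luns, primary)
```

Because the assignment is persistent rather than recomputed on every rescan, a newly added LUN gets no entry until the script runs again, which is why it has to be re-run after each LUN add.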
Actually, if you are running ESX 4.0 the scenario changes: NetApp ONTAP 7.3+, Clariion FLARE 26+, and ESX 4 all support ALUA, making this problem all but disappear and improving fabric resiliency. Unfortunately for us, ESX 4 is still a bit new and hasn't been rolled out into production yet. NetApp has also released tools for vCenter 4.0 that let you do the path assignment and other tasks from within vCenter rather than at the command line. EMC now has PowerPath available for ESX 4.0, which will not only manage paths but load-balance across all of them for increased performance and lower latency.
VirtualStorageGuy has already blogged about the NetApp/EMC/vSphere plug-ins, and there is even a PowerPoint available.
Finally, during the sales process NetApp pushed their deduplication feature (A-SIS) quite a bit and stressed how much disk space we could save in a VMware environment. During deployment we were informed that if your VMs' partitions (inside the VMDKs, and the VMFS itself) are not properly aligned, deduplication won't work well, or at all. Since this environment has several hundred VMs built over several years by many people, and aligning the system (C:) drive of a Windows VM is difficult, the benefit would be minimal for us. Luckily NetApp provides tools that can scan and align VMDKs without having to repartition the disks; we have not tested them yet. Partition alignment is a best practice for ANY SAN storage system, so we can't fault NetApp for this problem; it's just a fact of life.
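The alignment test itself is simple arithmetic: a guest partition lines up with the filer's 4 KB blocks when its starting byte offset is a multiple of 4096. The offsets below are illustrative, not from our environment; the classic Windows 2003-era default of starting at sector 63 is the usual offender.

```python
# Minimal partition alignment check against 4 KB storage blocks.
def is_aligned(starting_offset_bytes, block_size=4096):
    return starting_offset_bytes % block_size == 0

legacy_offset = 63 * 512     # 32256 bytes: old Windows default, misaligned
aligned_offset = 65536       # 128 sectors: aligned

print(is_aligned(legacy_offset))   # False
print(is_aligned(aligned_offset))  # True
```

A misaligned guest partition makes every guest 4 KB write straddle two blocks on the array, which is why both performance and block-level deduplication suffer.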
But is it REALLY Redundant?
Even with two storage systems and independent VMware clusters, each hosting half of the Exchange cluster environment, a problem with either array could still take down an entire Exchange cluster. This is due to the File Share Witness (FSW) component used in a Majority Node Set (MNS) cluster like Exchange CCR. The idea behind the FSW in an MNS cluster is to prevent a condition known as split-brain. Since an MNS cluster does not have a quorum disk, it relies entirely on network communication between the nodes to determine cluster status and decide which nodes should become active. If the two nodes lose communication with each other, each node checks for the FSW; if the FSW is still available, the node assumes the other cluster node is down and proceeds to bring cluster resources online (if they weren't already). Without the FSW, both nodes could potentially go active, risking inconsistent data. This is the split-brain condition.
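The per-node decision rule can be stated in a few lines: in a two-node MNS cluster with an FSW there are three votes, and a node may only hold resources online while it can see a majority of them. This is a toy model of that rule, not Microsoft's actual cluster service logic:

```python
# Toy model of a 2-node MNS cluster node's quorum decision.
# Three votes exist: this node, its peer, and the File Share Witness.
def can_stay_active(sees_peer: bool, sees_fsw: bool) -> bool:
    votes = 1 + int(sees_peer) + int(sees_fsw)  # own vote + visible votes
    return votes >= 2  # needs a majority of 3

# Network partition between the nodes: whichever node still sees the
# FSW wins; a node that sees neither peer nor FSW must stay offline.
print(can_stay_active(sees_peer=False, sees_fsw=True))   # True
print(can_stay_active(sees_peer=False, sees_fsw=False))  # False
```

The second case is exactly the split-brain guard: an isolated node with no witness cannot tell "my peer died" apart from "the network died," so it refuses to go active.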
Typically, each cluster has a single FSW on a separate server (the CAS servers in our case). With the redundant storage model we moved to, the FSW became a single point of failure. If we put the FSW on EMC storage with NodeA, and NodeB on the IBM/NetApp storage, a problem with the EMC array could take down both the cluster node AND the FSW at the same time. The surviving cluster node on the IBM/NetApp array would go down, or stay down, to prevent split-brain, since the FSW was not available. Moving the FSW to the IBM/NetApp array presents the same problem on the opposite side of the cluster. Incidentally, we proved this in lab testing to be sure. The solution is to move the FSW off of BOTH arrays, to either a dedicated physical server with internal disk or a third storage array if you have one. There was a second EMC array in production, so we moved the FSW there. In the new configuration, a complete outage of any single storage array will not take down the Exchange environment.
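What we proved in the lab can be enumerated on paper too. This sketch walks every single-array failure under the two placements; the array labels mirror the post, and a node survives only if its own array is up and it can still see a majority (peer or FSW):

```python
# Enumerate single-array failures for two FSW placements (illustrative).
def surviving_nodes(placement, failed_array):
    """placement maps NodeA/NodeB/FSW to the array hosting each."""
    up = {name: arr != failed_array for name, arr in placement.items()}
    survivors = []
    for node in ("NodeA", "NodeB"):
        if not up[node]:
            continue  # node's own array failed
        peer = "NodeB" if node == "NodeA" else "NodeA"
        votes = 1 + int(up[peer]) + int(up["FSW"])
        if votes >= 2:  # majority of the 3 votes
            survivors.append(node)
    return survivors

# FSW on the same array as NodeA: losing that array kills the cluster.
bad = {"NodeA": "EMC", "NodeB": "NetApp", "FSW": "EMC"}
print(surviving_nodes(bad, "EMC"))    # [] -- total outage

# FSW on a third array: any single failure leaves a node running.
good = {"NodeA": "EMC", "NodeB": "NetApp", "FSW": "EMC2"}
print(all(surviving_nodes(good, a) for a in ("EMC", "NetApp", "EMC2")))  # True
```

The asymmetry is the whole point: with the FSW co-located on either node's array, one failure removes two of the three votes at once, and the survivor is locked out of quorum.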
So far this new three-way split environment is working well, and performance on both the EMC and NetApp arrays is adequate for Exchange. The same number of disks on the NetApp array yields about twice as much usable space as on the EMC due to RAID-DP versus RAID-10, while overall performance is similar. In theory that means we could allow for more growth of the Exchange databases, but in reality that is not always the case. My next update will be about Exchange replication using SnapManager and SnapMirror and how that has effectively negated the remaining free space in the NetApp aggregate.
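The "about twice" figure falls out of the parity math. With a made-up 16 x 300 GB shelf (our actual disk counts are not shown here, and this ignores spares, right-sizing, and WAFL overhead), RAID-10 mirrors away half the spindles while a single RAID-DP group spends only two on parity:

```python
# Back-of-the-envelope usable capacity, same spindle count.
# 16 disks of 300 GB each is a hypothetical configuration.
disks, size_gb = 16, 300

raid10_usable = (disks // 2) * size_gb   # mirroring: half the disks
raid_dp_usable = (disks - 2) * size_gb   # one group, 2 parity disks

print(raid10_usable)   # 2400
print(raid_dp_usable)  # 4200
```

The ratio here is 1.75x rather than exactly 2x, and it climbs toward 2x as the RAID-DP group grows, since the two-parity-disk cost is fixed per group.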