Tag Archives: navisphere

Find your busiest LUNs Fast with Unisphere Analyzer Search

Posted on by

One of the features that has been added to Analyzer (Navisphere and Unisphere) in recent versions is the ability to search for specific LUNs based on criteria.  The feature is quite powerful because the criteria are flexible: you can search for all LUNs attached to a specific host, or with a specific set of characters in the LUN name, and you can also search against performance metrics like Throughput, Response Time, or LUN Utilization.  This is where it gets interesting, because you can find poorly performing LUNs very quickly.  In the following example, I am going to build a search that looks for LUNs with EX in the name (since all of my Exchange server LUNs have EX in the name) that ALSO have high LUN utilization for several polling intervals.

Once you’ve launched Analyzer and opened an Archive, click on the binocular icon in the tool bar to bring up the search dialog. 

You can choose a predefined search (a search you previously created and saved) or a new Object Based Query.  In this example we are going to build a new query, so select "Object Based Query" and choose All LUNs in the drop down box.  If you wanted, you could narrow the search down to just Pool Based LUNs, just MetaLUNs, Component LUNs, etc.

Next we'll define the LUN criteria by selecting the Name property, choosing Contains, and entering the "EX" value.  This will filter the search to only those LUNs that have EX in the name.  Finally we'll set a threshold.  In this example, I'm looking for LUNs that have a LUN Utilization value greater than 90% for at least 10 polling samples.  I could add more LUN criteria and/or more thresholds to further narrow down the results with AND or OR combinations.

Optionally, you can save the query so that it will be listed in the “Predefined Query” list in the future.  Click Search and set or edit the name of the search.

After clicking OK, Analyzer will create a new tab and populate the results of the search.  Once the search is complete you can graph metrics for the LUNs like normal.  Here I’ve selected Utilization to show why this LUN matched the search criteria — note the high utilization between 2am and 7am.

You can get much more granular with your searches if you are looking for something specific, or use metrics like Response Time to look for poorly performing LUNs attached to a specific server.  It’s pretty flexible.  I started using the search feature recently and thought others might be interested in it.  Try it out and let me know what you think.

Performance Analysis for Clariion and VNX – Part 5 (FASTCache)

Posted on by

<< Back to Part 4 — Part 5 — Go to Part 6 >>

Sorry for the delay on this next post.  Between EMC World and my 9-month-old, it's been a battle for time…

Okay, so you have an EMC Unified storage system (Clariion, Celerra, or VNX) with FASTCache and you’re wondering how FASTCache is helping you.  Today I’m going to walk you through how to tease FASTCache performance data out of Analyzer.

I'm assuming you have already launched Analyzer and opened a NAR archive.  One thing to understand about Analyzer stats as they relate to FASTCache is that stats are gathered at the LUN level for traditional RAID Group LUNs, but for Pool based LUNs the stats are gathered at the pool level.  As a result, graphing FASTCache data differs for the two scenarios.

First we’ll take a look at the overall array performance.  Here we’ll see how much of the write workload is being handled by FASTCache.  In the SP Tab of Analyzer, select both SPs (be sure no LUNs or other objects are selected).  Select Write Throughput (IO/s), and then click the clipboard icon (with I’s and O’s).

Launch Microsoft Excel, paste the data into a sheet, and then perform the text-to-columns conversion discussed in the previous post if necessary.

Next, create a formula in the D column adding the values for both SPs into a single total.  We're not going to graph it quite yet though.
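
If you'd rather work outside of Excel, here's a minimal Python/pandas sketch of the same steps: split the pasted Analyzer data into columns and total the write throughput of both SPs.  The column names ("SP A", "SP B") and the file name are assumptions; match them to whatever your export actually contains.

    # A rough pandas equivalent of the Excel steps above.
    import pandas as pd

    # Data copied from Analyzer is comma-separated text; read it from the
    # clipboard or from a saved CSV export.
    sp = pd.read_clipboard(sep=",")          # or: pd.read_csv("sp_write_iops.csv")

    # First column is the poll time; the rest are per-SP Write Throughput (IO/s).
    # "SP A" / "SP B" are placeholder column names -- adjust to your export.
    sp["Total Write IOPS"] = sp["SP A"] + sp["SP B"]
    print(sp.head())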

Back in Analyzer, deselect the two SPs, switch to the Storage Pool Tab, right-click on the array and choose Select All -> LUNs, then Select All -> Pools.

Click on a RAID Group LUN or Pool in the tree (it doesn't matter which one), deselect Write Throughput (IO/s), and select FAST Cache Write Hits/s.  In a moment, you'll end up with a graph like this.

Click the clipboard icon again to copy this data and paste it into a new sheet of the same workbook in Excel.  Insert a blank column between columns A and B, then create a formula to add the values from columns C through ZZ (i.e., =SUM(C2:ZZ2)).
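
If you'd rather skip the spreadsheet gymnastics, a small pandas sketch of the same calculation is below.  It assumes the first column of the pasted data is the poll timestamp and every other column is a LUN or Pool; the file name is a placeholder.

    # Equivalent of the =SUM(C2:ZZ2) formula: total the FAST Cache Write Hits/s
    # across every LUN/Pool column for each polling interval.
    import pandas as pd

    fc = pd.read_clipboard(sep=",")          # or: pd.read_csv("fastcache_write_hits.csv")
    fc_total = fc.iloc[:, 1:].sum(axis=1)    # skip the timestamp column, sum the rest
    fc_total.name = "Total FAST Cache Write Hits/s"
    print(fc_total.head())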

Then copy that formula and paste it into every row of column B.  This column will be our Total FAST Cache Write Hits for the whole array.  Finally, click the header for column B to select it, then copy (CTRL-C).  Switch back to the first sheet and use Paste Values (the 123 icon) in Column E.

Now that we have the Total Write IOPS and Total FAST Cache Write Hits in adjacent columns of the same worksheet, we can graph them together.  Select both columns (D and E in my example), click Insert, and choose 2D Area Chart.  You’ll get a nice little graph that looks something like the following.
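
If you'd prefer to plot this outside of Excel, here's a hedged matplotlib sketch of the same layered (non-stacked) area chart.  The two lists stand in for the Total Write IOPS and FAST Cache Write Hits columns built above; the numbers are dummy samples just so the snippet runs on its own.

    # Layered (non-stacked) area chart: FAST Cache write hits drawn on top of
    # total write IOPS, mimicking the Excel 2D Area Chart described above.
    import matplotlib.pyplot as plt

    total_write_iops = [4200, 4800, 5100, 4700, 4400]   # placeholder samples
    fc_write_hits    = [1800, 2300, 2600, 2100, 1900]   # placeholder samples

    intervals = range(len(total_write_iops))
    plt.fill_between(intervals, total_write_iops, label="Total Write IOPS")
    plt.fill_between(intervals, fc_write_hits, label="FAST Cache Write Hits/s")
    plt.xlabel("Polling interval")
    plt.ylabel("IO/s")
    plt.legend()
    plt.show()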

Since it's a 2D Area Chart, and not a stacked chart, the FASTCache Write IOPS are layered over the Total Write IOPS, so the graph visually shows the portion of total IOPS handled by FASTCache.  Follow this same process again for Read Throughput and FASTCache Read Hits.  Further manipulation in Excel will let you look at total IOPS (read and write) or drill down to individual Pools or RAID Group LUNs.

Another thing to note when looking at FASTCache stats: FAST Cache Misses are IOPS that were not handled by FASTCache, but they may still have been handled by SP Cache.  So to get a feel for how many read IOs are actually hitting the disks, you'd want to subtract SP Read Cache Hits and Total FASTCache Read Hits (calculated similar to the above example) from SP Read Throughput.  The same applies to Write Cache Misses.
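
As a quick sanity check, the arithmetic looks like this (the numbers below are invented for illustration, not taken from the archive in this post):

    def estimated_disk_reads(sp_read_iops, sp_read_cache_hits, fastcache_read_hits):
        """Read IO/s served by neither SP read cache nor FAST Cache."""
        return sp_read_iops - sp_read_cache_hits - fastcache_read_hits

    # e.g. 5000 read IO/s total, 1500 SP cache hits, 2000 FAST Cache hits
    print(estimated_disk_reads(5000, 1500, 2000))   # -> 1500 IO/s actually hitting disk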

I hope this helps you better understand your FASTCache workload.  I’ll be working on FASTVP next, which is quite a bit more involved.

<< Back to Part 4 — Part 5 — Go to Part 6 >>

Performance Analysis for Clariion and VNX – Part 4

Posted on by

<< Back to Part 3 — Part 4 — Go to Part 5 >>

Making Lemonade from Lemons.

In the last post, we looked at the storage processor statistics to check for cache health, excessive queuing, and response time issues and found that SPA has some performance degradation which seems to be related to write IO.  Now we need to drill down on the individual LUNs to see where that IO is being directed.  This is done in the LUN tab of Analyzer.  First, right click on the storage array itself in the left pane and choose Deselect All -> Items.  Then click the LUN tab, right click on the top level of the tree "LUNs", and choose Select All -> LUNs.  Click on one of the LUNs to highlight it, then choose Write Throughput (IO/s) from the bottom pane.  It may take a second for Analyzer to render the graph, but you'll end up with something like this…

You'll quickly realize that this view doesn't really help you figure out what's going on.  With many LUNs, there is simply too much data to display this way.  So click the clipboard button that has the I's and O's in it (next to the red arrow) to copy the graph data (in CSV format) to your desktop clipboard.  Now launch Microsoft Excel, select cell A1, and type Ctrl-V to paste the data.  It will look like the following image at first, with all LUN statistics pasted into Column A.

Now we need to break the various metrics out into their own columns to make the data meaningful, so go to the Data menu and click Text to Columns (see red arrow above).  Select Delimited, click Next, select ONLY comma as the delimiter, then Next, Next, Finish.  Excel will separate the data into many columns (one column per LUN).  Next we'll create a graph that can actually tell us something.  First, click the triangle button at the upper left corner of the sheet to select all of the data in the sheet at once.  Then click the area chart icon, select Area, then the Stacked Area icon (see red arrows below).  Click OK.
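
For reference, here's a rough pandas/matplotlib version of the same text-to-columns plus stacked-area workflow.  The file name is a placeholder and I'm assuming the first column of the export is the poll timestamp.

    # Rough equivalent of the Excel text-to-columns + stacked area chart steps.
    import pandas as pd
    import matplotlib.pyplot as plt

    luns = pd.read_clipboard(sep=",")        # or: pd.read_csv("lun_write_iops.csv")
    luns = luns.set_index(luns.columns[0])   # use the timestamp column as the index

    # Stacked area chart: each colored band is one LUN's Write Throughput (IO/s).
    luns.plot.area(figsize=(12, 5), legend=False)
    plt.ylabel("Write Throughput (IO/s)")
    plt.show()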

You'll get a nice little graph like the one below that is completely useless, because the default chart has the X and Y axes reversed from what we need for Analyzer data.

To fix this, right click on the graph, choose "Select Data", click the Switch Row/Column button, and click OK.

Now you have a useful graph like the one below.  What we are seeing here is each band of color representing the Write IOPS for a particular LUN.  You'll note that about 6 LUNs have very thick bands, while the rest of the 100+ LUNs have very thin bands.  In this case, 6 LUNs are driving more than 50% of the total write IOPS on the array.  Since the column headers in the Excel sheet contain the LUN names, you can hover over a color band to see which LUN it represents.
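
To put a number on that without eyeballing the chart, you can rank the LUNs by their average write IOPS.  This sketch re-reads the same per-LUN export used above (the file name is a placeholder).

    # Rank LUNs by average Write Throughput to see which few dominate the workload.
    import pandas as pd

    luns = pd.read_csv("lun_write_iops.csv", index_col=0)   # one column per LUN
    share = luns.mean().sort_values(ascending=False) / luns.mean().sum()

    print(share.head(6))                                    # the busiest handful of LUNs
    print("Top 6 LUN share of write IOPS: {:.1f}%".format(share.head(6).sum() * 100))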

Now that you know where to look, you can go back to Analyzer, deselect all LUNs, and drill down to the individual LUNs you need to look at.  You may also want to look at the hosts that are using the busy LUNs to see what they are doing.  In Analyzer, check the Write IO Size for the LUNs you are interested in and see if the size is in line with your expectations for the application involved.  Very large IO sizes coupled with high IOPS (i.e., high bandwidth) may cause write cache contention.  In the case of this particular array, these 6 LUNs are VMFS datastores, and based on the Thin LUN space utilization and write IO loads, I would recommend that the customer convert them from Thin LUNs to Thick LUNs in the same Virtual Pool.  Thick LUNs have better write performance and lower processor overhead than Thin LUNs, and the amount of free space in these Thin LUNs is fairly small.  This conversion can be done online with no host impact using LUN Migration.

You can use this copy/paste technique with Excel to graph all sorts of complex datasets from Analyzer that simply aren't viewable with the default Analyzer graph.  This process lets you select specific data or groups of metrics from a complete Analyzer archive and graph just the data you want, in the way you want to see it.  There is also a way to do this as a bulk export/import, which can even be scheduled, and I'll discuss that in the next post.

<< Back to Part 3 — Part 4 — Go to Part 5 >>

Performance Analysis for Clariion and VNX – Part 3

Posted on by

<< Back to Part 2 — Part 3 — Go to Part 4 >>

Disclaimer: Performance Analysis is an art, not a science.  Every array is different, every application is different, and every environment has a different mix of both.  These posts are an attempt to get you started in looking at what the array is doing and pointing you in a direction for addressing a problem.  Keep in mind, a healthy array for one customer could be a poorly performing array for a different customer.  It all comes down to application requirements and workload.  Large block IO tends to have higher response times than small block IO, for example.  Sequential IO also benefits less from (and can sometimes be hindered by) cache.  High IOPS and/or bandwidth is not a problem in itself; in fact, it is proof that your array is doing work for you.  But understanding where the high IOPS are coming from, and whether a particular portion of the IO is a problem, is important.  You will not be able to read this series of posts and immediately dive in and resolve a performance problem on your array.  But after reading it, I hope you will be more comfortable looking at how the system is performing, and when users complain about a performance problem, you will know where to start looking.  If you have a major performance issue and need help, open a support case.

Starting from the top…

First let’s check the health of the front end processors and cache.  The data for this is in the SP Tab which shows both of the SPs.  The first thing I like to look at is the “SP Cache Dirty Pages (%)” but to make this data more meaningful we need to know what the write cache watermarks are set to.  You can find this by right-clicking on the array object in the upper-left pane and choosing properties.  The watermarks are shown in the SP Cache tab.

Once you note the watermarks, close the properties window and check the boxes for SPA and SPB.  In the lower pane, deselect Utilization and choose SP Cache Dirty Pages (%).

Dirty pages are pages in write cache that have received new data from hosts, but have not been flushed to disk.  Generally speaking you want to have a high percentage of dirty pages because it increases the chance of a read coming from cache or additional writes to the same block of data being absorbed by the cache.  Any time an IO is served from cache, the performance is better than if the data had to be retrieved from disk.  This is why the default watermarks are usually around 60/80% or 70/90%.


What you don’t want is for dirty pages to reach 100%.  If the write cache is healthy, you will see the dirty pages value fluctuating between the high and low watermarks (as SPB is doing in the graph).  Periodic spikes or drops outside the watermarks are fine, but repeatedly hitting 100% indicates that the write cache is being stressed (SPA is having this issue on this system).  The storage system compensates for a full cache by briefly delaying host IO and going into a forced flushing state.  Forced Flushes are high priority operations to get data moved out of cache and onto the back end disks to free up write cache for more writes.  This WILL cause performance degradation.  Sustained Large Block Write IO is a common culprit here.

While we're here, deselect SP Cache Dirty Pages (%), select Utilization (%), and look for two things:

1.) Is either SP running at higher than 70% utilization?  This will increase application response time.  Check whether the SP load seems to fluctuate with the business day.  For non-disruptive upgrades, both SPs need to be under 50% utilization.

2.) Are the two SPs balanced?  If one is much busier than the other that may be something to investigate.

Now look at Response Time (ms) and make sure that, again, both SPs are relatively even and that response time is within reasonable levels.  If you see that one SP has high utilization and response time but the other SP does not, there may be a LUN or set of LUNs owned by the busy SP that are consuming more array resources.  Looking at Total Throughput and Total Bandwidth can help confirm this, and graphing Read vs. Write Throughput and Bandwidth will show what the IO operations actually are.  If both SPs have relatively similar throughput but one SP has much higher bandwidth, then there is likely some large block IO occurring that you may want to track down.
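
One quick way to confirm a large block suspicion is to work out the implied average IO size from the bandwidth and throughput figures.  The values below are invented for illustration.

    def avg_io_size_kb(bandwidth_mb_s, iops):
        """Average IO size (KB) implied by bandwidth and throughput."""
        return (bandwidth_mb_s * 1024.0) / iops

    print(avg_io_size_kb(200, 3200))   # ~64 KB per IO -> large block workload
    print(avg_io_size_kb(25, 3200))    # ~8 KB per IO  -> small block workload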

As an example, I’ve now seen two different customers where a Microsoft Sharepoint server running in a virtual machine (on a VMFS datastore) had a stuck process that caused SQL to drive nearly 200MB/sec of disk bandwidth to the backend array.  Not enough to cause huge issues, but enough to overdrive the disks in that LUN’s RAID Group, increasing queue length on the disks and SP, which in turn increased SP utilization and response time on the array.  This increased response time affected other applications unrelated to Sharepoint.

Next, let's check the Port Queue Full Count.  This is the number of times that a front end port issued a QFULL response back to the hosts.  If you are seeing QFULLs, there are two possible causes.  One is that the Queue Depth on the HBA is too large for the LUNs being accessed.  Each LUN on the array has a maximum queue depth that is calculated with a formula based on the number of data disks in the RAID Group.  For example, a RAID5 4+1 LUN will have a queue depth of 88.  Assuming your HBA queue depth is 64, you won't have a problem.  However, if the LUN is used in a cluster file system (Oracle ASM, VMWare VMFS, etc.) where multiple hosts access the LUN simultaneously, you could run into problems here.  Reducing the HBA Queue Depth on the hosts will alleviate this issue.
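
The rule of thumb commonly quoted for that per-LUN queue depth is 14 times the number of data drives, plus 32, which matches the 88 figure above for a RAID5 4+1 group.  Treat the sketch below as a guideline rather than an official formula for every model.

    def lun_queue_depth(data_drives):
        """Commonly quoted CLARiiON per-LUN queue depth rule of thumb."""
        return 14 * data_drives + 32

    print(lun_queue_depth(4))    # RAID5 4+1 (4 data drives)  -> 88
    print(lun_queue_depth(8))    # RAID5 8+1 (8 data drives)  -> 144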

The second cause is when there are many hosts accessing the same front end ports and the HBA Execution Throttle is too large on those hosts.  A Clariion/VNX front end port has a queue depth of 1600, which is the maximum number of simultaneous IOs that port can process.  If there are 1600 IOs in queue and another IO is issued, the port responds with QFULL.  The host HBA responds by lowering its own Queue Depth (per LUN) to 1 and then gradually increasing the queue depth over time back to normal.  An example situation might be 10 hosts, all driving lots of IO, with HBA Execution Throttle set to 255.  Those ten hosts could send a total of 2550 IOs simultaneously.  If they are all driving that IO to the same front end port, that will flood the port queue.  Reducing the HBA Execution Throttle on the hosts will alleviate this issue.
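
Here's the same arithmetic as a tiny sketch, using the numbers from the example above:

    PORT_QUEUE_LIMIT = 1600                  # max outstanding IOs per front end port

    hosts = 10
    execution_throttle = 255
    worst_case = hosts * execution_throttle  # 2550 potential outstanding IOs

    print(worst_case, "outstanding IOs vs a", PORT_QUEUE_LIMIT, "entry port queue")
    print("QFULL risk:", worst_case > PORT_QUEUE_LIMIT)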

Looking at the Port Throughput, you can see here that 2 ports are driving the majority of the workload.  This isn’t necessarily a problem by itself, but PowerPath could help spread the load across the ports which could potentially improve performance.

In VMWare environments specifically, it is very common to see many hosts all accessing many LUNs over only 1 or 2 paths even though there may be 4 or 8 paths available.  This is due to the default path selection (lowest port) on boot.  This could increase the chances of a QFULL problem as mentioned above or possibly exceeding the available bandwidth of the ports.  You can manually change the paths on each LUN on each host in a VMWare cluster to balance the load, or use Round-Robin load balancing.  PowerPath/VE automatically load balances the IO across all active paths with zero management overhead.

Another thing to look for is an imbalance of IO or Bandwidth on the processors.  Look specifically at Write Throughput and Write Bandwidth first as writes have the most impact on the storage system and more specifically the write cache.  As you can see in this graph, SPA is processing a fair bit more write IOPS compared to SPB.  This correlates with the high Dirty Pages and Response Time on SPA in the previous graphs.

So we’ve identified that there is performance degradation on SPA and that it is probably related to Write IO.  The next step is to dig down and find out if there are specific LUNs causing the high write load and see if those could be causing the high response times.

<< Back to Part 2 — Part 3 — Go to Part 4 >>

Performance Analysis for Clariion and VNX – Part 2

Posted on by

<< Back to Part 1 — Part 2 — Go to Part 3 >>

Okay, so you've got the Analyzer enabler on your array and enabled logging, and you've installed Unisphere Server, Unisphere Client, and Microsoft Excel on your workstation.  The next step is to download a NAR file from the array.  In Navisphere, right click on the array, go to the Analyzer menu, and retrieve an archive.  You can get the archive from either SP of the array; both have the same data.  You will eventually see multiple NAR files, each covering some period of time.  Retrieve the one for the period of time you want to look at.  You can also merge multiple files together to get larger time periods into a single Analyzer session.  In Unisphere, the process is essentially the same: select the array, then go to Monitoring -> Analyzer.

You've got your workstation set up and you have a NAR file downloaded to your workstation.  Let's get to it.  Launch Unisphere Client from the Start Menu and connect to "localhost" when prompted.  Log in to Unisphere.  You'll see something like this…

In the drop down menu, change to "Unisphere Server – 127.0.0.1", which will most likely change the main screen to Event Notification.  Click on Monitoring, then Analyzer.

Let’s set some defaults before we open a NAR file.

  1. In the left pane, click Customize Charts
    1. In the General Tab, check the Advanced box so we can see more detailed metrics in Analyzer
    2. In the Archive Tab, under Analyzer, select Performance Detail and make sure Initially Check All Tree Objects is unchecked.
  2. Click OK to save.

In the right pane, click Open Archive, browse to the NAR file you want to view, and open it.

Because the NAR file can contain many hours (sometimes multiple days) of performance data, you will be prompted to set a time range.  The default times will show all data available in the archive.  If you want to narrow down to a smaller time range, change the Graph Start and End times; otherwise just click OK.

The Performance Detail window will launch with the LUN tab selected.  No items will be selected yet, so no data will be graphed.

My personal methodology is to take a top-down approach when it comes to performance analysis and troubleshooting.

  • Check the SPs, Cache, and SP Ports for obvious issues.  If a user is complaining of poor performance, the Cache is usually the first place I look.
  • Drill down to RAID Groups, Pools, and LUNs to find the culprits
  • Drill down to the physical disk level if necessary
  • Export data to Excel for better graphs that make it easier to see what's happening

<< Back to Part 1 — Part 2 — Go to Part 3 >>

Performance Analysis for Clariion and VNX – Part 1

Posted on by
Part 1 — Go to Part 2 >>
  • Do you have an application owner complaining about performance?
  • Do you want to get a general idea of how your array is performing?
  • Do you want to turn this.. into this..?

I've been doing a lot of performance analysis with EMC Clariion CX3, CX4, and VNX storage recently and have a sort of informal methodology I follow.  I've had a couple of customers ask me to show them how to get useful data and graphs from their arrays, and more recently, after posting about FASTCache and FASTVP results, I've had even more queries on the topic.  So I've decided to put together a sort of how-to guide.  It will take several posts to go through the whole process, so this first post will focus on making sure you have the right tools.

The Tools: First, you MUST have the Navisphere/Unisphere Analyzer enabler on the storage array.  If you don't have it, all you can really do is send an encrypted archive to EMC for help when you have a performance problem.  Analyzer is an indispensable performance analysis tool for CX/VNX systems and is really quite powerful.  Unfortunately, many customers don't see the value during the purchase process but end up needing it someday in the future.  Make sure Analyzer is included in EVERY array purchase.

If you haven’t already, you also need to enable Statistics on the array AND in more recent versions of FLARE you need to enable Archive Logging.  Statistics logging is enabled in the array properties dialog, shown here…

Archive Logging is enabled in the Monitoring -> Analyzer -> Data Logging dialog, shown here…

In practice, 5 minutes is a good interval for archives.  Also make sure that periodic archiving is enabled, which will generate a new NAR file periodically (how often depends on the interval).

Next, you need an Analyzer workstation.  You can run Analyzer directly off an array through Navisphere Manager or Unisphere, but I prefer installing the software directly on my PC.  It lets me work on the analysis from home or anywhere else, and since I look at data from many different customers' arrays, it's easier.  You can download the latest version of Unisphere Server and Unisphere Client directly from PowerLink (Home > Support > Software Downloads and Licensing > Downloads T-Z > Unisphere Server Software).  Once you install both, you can launch the client and log in to your local Unisphere server.  You can then open Analyzer archive files (NAR files) from any array for analysis.

Third, you need a graphing tool.  I currently use Microsoft Excel 2010 on the same workstation as my Unisphere installation, which happens to be my corporate laptop.  While Analyzer does graph the data you select, there is only one type of graph available, and sometimes when many objects are graphed together it's almost impossible to actually compare them to each other.

Another reason to use Excel is that while Analyzer has a wealth of different statistics available for all sorts of array objects, there are some exceptions right now.  For example, if you are using newer features such as FASTCache or FASTVP on your array and want to see statistics for those technologies, there is not much in Analyzer to see.  I’ll go through some methods for teasing that data out as well.

Part 1 — Go to Part 2 >>

Simplify Storage Management Today, Risk Free, and Free of Charge

Posted on by

While my peers have been blogging about the new CLARiiON and Celerra releases, both of which provide significant enhancements to the EMC CX4-based Unified platforms you already own, I thought I’d shift gears just a tad…

What if you are a Clariion CX/CX3 customer, or a CX4 customer who isn't ready to upgrade to the newly released FLARE30 code, but you want to simplify management of your storage environment and get better reporting, dashboards, wizards, etc.?  Well, you are in luck.

Just as with previous versions of Navisphere and FLARE, EMC offers off-array versions of Clariion management agents, servers, and GUIs.  As of yesterday, that includes off-array versions of Unisphere.  If you are a current customer of Clariion, you can login to PowerLink and download the Unisphere off-array software and build a management station.  After installation, you can manage your existing Clariion CX/CX3/CX4 hardware without upgrading the FLARE code.  As you upgrade your CX4 systems to FLARE30, new features will be enabled in Unisphere, and as you upgrade your Celerra NS systems to DART6, they can be added to the Clariion management domain and managed from the very same Unisphere instance.  How’s that for easy and convenient?

But what do you get by using Unisphere to manage your non-FLARE30 systems?  Unfortunately, you won't be able to take advantage of FASTCache, FAST, Compression, and other features that are only available in FLARE 30, but there are still some advantages.

First and foremost, Unisphere completely dumps the Navisphere tree-based management view and replaces it with end-result based tasks.  So instead of creating several objects to provision RAID groups and LUNs and then present them to a host, you just run the "Allocate" wizard, select the array, disks/RAID group/pool, LUN size, hosts, etc., and commit.

Second, upon launching Unisphere and logging in, you are immediately presented with dashboard views showing the amount of used/available storage, and active alerts, all customizable, so you can see the state of your entire CLARiiON storage environment “at-a-glance”.

To install Unisphere today, login to Powerlink, browse to “Support > Software Downloads and Licensing > Downloads T-Z > Unisphere Server Software” and download “EMC Unisphere Server” and “EMC Unisphere Client”.  Install them both to your Windows system and fire it up.  If you have Navisphere off-array software already installed, Unisphere will upgrade the existing installation for you.  You will also want to download and install Unisphere Service Manager (USM), also from Powerlink at “Support > Software Downloads and Licensing > Downloads T-Z > Unisphere Service Manager (USM).”  USM will provide various support and service related tools including active technical advisories for your storage arrays.

Begin using Unisphere today and you get some immediate benefits, plus you will be ready to take advantage of new features enabled with FLARE30 (FAST, FASTCache, Compression, etc) as well as managing NAS across all of your Celerra systems once they are upgraded to DART 6.  As a bonus, you’ll have a chance to get familiar with Unisphere before a future FLARE upgrade or new EMC Unified purchase forces you to learn it.

And did I mention you don’t have to buy anything or introduce risk with a firmware upgrade?

EMC CLARiiON and Celerra Updates – Defining Unified Storage

Posted on by

This past week, during EMC World 2010 in Boston, EMC made several announcements of updates to the Celerra and CLARiiON midrange platforms.  Some of the most impressive were new capabilities coming to CLARiiON FLARE in just a couple short months.  Major updates to Celerra DART will coincide with the FLARE updates and if you are already running CLARiiON CX4 hardware, or are evaluating CX4 (or Celerra), you will want to check these new features out.  They will be available to existing CX4(120,240,480,960)/NS(120,480,960) systems as part of a software update.

Here’s a list of key changes in FLARE 30:

  • Unified management for midrange storage platforms including CLARiiON and Celerra today, plus RecoverPoint, Replication Manager and more in the future.  This is a true single pane of glass for monitoring AND managing SAN, NAS, and data protection and it’s built in to the platform.  “EMC Unisphere” replaces Navisphere Manager and Celerra Manager and supports multiple storage systems simultaneously in a single window. (Video Demo)
  • Extremely large cache (ie: FASTCache) – Up to 2TB of additional read/write cache in CLARiiON using SSDs (Video Demo)
  • Block level Fully Automated Storage Tiering (ie: sub-LUN FAST) – Fully automated assignment of data across multiple disk types
  • Block Level Compression – Compress LUNs in the CLARiiON to reduce disk space requirements
  • VAAI Support – Integrate with vSphere ESX for improved performance

These features are in addition to existing features like:

  • Seamless and non-disruptive mobility of LUNs within a storage array – (via Virtual LUNs)
  • Non-Disruptive Data Migration – (via PowerPath Migration Enabler)
  • VMWare Aware Storage Management – (Navisphere, Unisphere, and vSphere Plugins giving complete visibility and self-service provisioning for VMWare admins (Video Demo) AND Storage Admins)
  • CIFS and NFS Compression – Compress production data on Celerra to reduce disk space requirements including VMs
  • Dynamic SAN path load balancing – (via PowerPath)
  • At-Rest-Encryption – (via PowerPath w/RSA)
  • SSD, FC, and SATA drives in the same system – Balance performance and capacity as needed for your application
  • Local and Remote replication with array level consistency – (SnapView, MirrorView, etc)
  • Hot-swap, Hot-Add, Hot-Upgrade IO Modules – Upgrade connectivity for FC, FCoE, and iSCSI with no downtime
  • Scale to 1.8PB of storage in a single system
  • Simultaneously provide FC, iSCSI, MPFS, NFS, and CIFS access

Altogether, this is an impressive list of features for a single platform.  In fact, while many of EMC's competitors have similar features, none of them offers all of these in the same platform, or leverages them all simultaneously to gain efficiency.  When CLARiiON CX4 and Celerra NS are integrated and managed as a single Unified storage system with EMC Unisphere there is tremendous value, as I'll point out below…

Improve Performance easily…

  • Install a couple of SSD drives into a CLARiiON and enable FASTCache to increase the array's read/write cache from the industry-competitive 4GB-32GB up to 2TB of array-based non-volatile Read AND Write cache available to ALL applications, including NAS data hosted by the array.
  • Install PowerPath on Windows, Linux, Solaris, AND VMWare ESX hosts to automatically balance IO across all available paths to storage.  PowerPath detects latency and queuing occurring on each path and adjusts automatically, improving performance at the storage array AND for your hosts.  This is a huge benefit in VMWare environments especially.
  • When VMWare releases the updated version of vSphere ESX that supports VAAI, ESX will be able to leverage VAAI support in the CLARiiON to reduce the amount of IO required to do many tasks, improving performance across the environment again.
  • Upgrade from 1GbE iSCSI to 10GbE iSCSI, or from 4Gb Fibre Channel to 8Gb Fibre Channel, without a screwdriver or downtime.
  • Provide NAS shared file access with block-level performance for any application using EMC’s MPFS protocol.

Improve Efficiency and cost easily…

  • Create a single pool of storage containing some SSD, some FC, and some SATA drives, that automatically monitors and moves portions of data to the appropriate disk type to both improve performance AND decrease cost simultaneously.
  • Non-disruptively compress volumes and/or files with a single click to save 50% of your disk space in many cases.
  • Convert traditional LUNs to more efficient Thin-LUNs non-disruptively using PowerPath Migration Enabler, saving more disk space.

Increase and Manage Capacity easily…

  • Add additional storage non-disruptively with SSD, FC, and SATA drives in any mix up to 1.8PB of raw storage in a single CLARiiON CX4.
  • Using FASTCache, iSCSI, FC, and FCoE connectivity simultaneously does not reduce total capacity of the system.
  • Expanding LUNs, RAID Groups, and Storage Pools is non-disruptive.
  • Migrating LUNs between RAID groups and/or Storage Pools is non-disruptive using built-in CLARiiON LUN Migration, as is migrating data to a different storage array (using PowerPath Migration Enabler)!
  • Balancing workload between storage processors is non-disruptive and at individual LUN granularity.

Protect your data easily…

  • Snapshot, Clone, and Replicate any of the data to anywhere with built in array tools that can maintain complete data consistency across a single, or multiple applications without installing software.
  • Maintain application consistency for Exchange, SQL, Oracle, SAP, and much more, even within VMWare VMs, while replicating to anywhere with a single pane-of-glass.
  • Encrypt sensitive data seamlessly using PowerPath Encryption w/RSA.

Maintain Flexibility…

  • While you can do all of these things quickly and simply, you still have the flexibility to create traditional RAID sets using RAID 0, 1, 5, 6, and 10 where you need highly predictable performance, or tune read and write cache at the array and LUN level for specific workloads.  Do you want read/write snapshots?  How about full copy clones on completely separate disks for workload isolation and failure protection?  What about the ability to roll data back to different points in time using snapshots without deleting any other snapshots?  EMC storage arrays have been able to do this for a long time and that hasn't changed.

There are few manufacturers aside from EMC that can provide all of these capabilities, let alone provide them within a single platform.  That’s the definition of simple, efficient, Unified Storage in my opinion.

NetApp and EMC: Startup and First Impressions

Posted on by

In the last post, I talked about a project I am involved in right now to deploy NetApp storage alongside EMC for SAN and NAS.  Today, I’m going to talk about my first impressions of the NetApp during deployment and initial configuration.

First Impressions

I’m going to be pretty blunt — I have been working with EMC hardware and software for a while now, and I’m generally happy with the usability of their GUIs.  Over that time, I’ve used several major revisions of Navisphere Manager and Celerra Manager, and even more minor revisions, and I’ve never actually found a UI bug.  To be clear, EMC, IBM, NetApp, HDS, and every other vendor have bugs in their software, and they all do what they can to find and fix them quickly, but I just haven’t personally seen one in the EMC UIs despite using every feature offered by those systems. (I have come across bugs in the firmware)

Contrast that with the first day using the new NetApp, running the latest 7.3.1.1L1 code, where we discovered a UI problem in the first 10 minutes.  When attempting to add disks to an aggregate in FilerView, we could not select FC disk to add.  We could, however, add SATA disk to the FC aggregate.  The only way to get around the issue was to use the CLI via SSH.  As I mentioned in my previous post, our NetApp is actually an IBM nSeries, and IBM claims they perform additional QC before their customers get new NetApp code.

Shortly after that, we found a second UI issue in FilerView.  When creating a new Initiator group, FilerView populates the initiator list with the WWNs that have logged in to it.  Auto-populating is nice but the problem is that FilerView was incorrectly parsing the WWN of the server HBAs and populating the list with NodeWWNs rather than PortWWNs.  We spent several hours trying to figure out why the ESX servers didn’t see any LUNs before we realized that the WWNs in the Initiator group were incorrect.  Editing the 2nd digit on each one fixed the problem.

I find it interesting that these issues, which seemed easy to discover, made it through the QC process of two organizations.  ONTap 7.3.2RC1 is available now, but I don’t know if these issues were addressed.

Manageability

As far as FilerView goes, it is generally easy to use once you know how NetApp systems are provisioned.  The biggest drawback in an HA-Filer setup is the fact that you have to open FilerView separately for each Filer and configure each one as a separate storage system.  Two HA-Filer pairs?  Four FilerView windows.  If you include the initial launch page that comes up before you get to the actual FilerView window, you double the number of browser windows open to manage your systems.  NetApp likes to mention that they have unified management for NAS and SAN where EMC has two separate platforms, each with their own management tools.  EMC treats the two storage processors (SPs) in a Clariion in a much more unified manner, and provisioning is done against the entire Clariion, not per SP.  Further, Navisphere can manage many Clariions in the same UI.  Celerra Manager acts similarly for EMC NAS.  Six of one, half a dozen of the other, some say, except that I find that I generally provision NAS storage and SAN storage at different times, and I'd rather have all of the controllers/filers in the same window than NAS and SAN in the same window.  Just my preference.

I should mention, NetApp recently released System Manager 1.0 as a free download.  This new admin tool does present all of the controllers in one view and may end up being a much better tool than FilerView.  For now, it’s missing too many features to be used 100% of the time and it’s Windows only since it’s based on MMC.  Which brings me to my other problem with managing the NetApp.  Neither FilerView nor System Manager can actually do everything you might need to do, and that means you end up in the CLI, FREQUENTLY.  I’m comfortable with CLIs and they are extremely powerful for troubleshooting problems, and especially for scripting batch changes, but I don’t like to be forced into the CLI for general administration.  GUI based management helps prevent possibly crippling typos and can make visualizing your environment easier.  During deployment, we kept going back and forth between FilerView and CLI to configure different things.  Further, since we were using MultiStore (vFilers) for CIFS shares and disaster recovery, we were stuck in the CLI almost entirely because System Manager can’t even see vFilers, and FilerView can only create them and attach volumes.

Had I not been managing Celerra and Clariion for so long, I probably wouldn't have noticed the above problems.  After several years of configuring CIFS, NFS, iSCSI, Virtual DataMovers, IP Interfaces, Snapshots, Replication, DR Failover, etc. on Celerra, as well as literally thousands of LUNs for hundreds of servers on Clariion, I don't recall EVER being forced to use the CLI.  CelerraCLI and NaviCLI are very powerful, I have written many scripts leveraging them, and I'll use the CLI when troubleshooting an issue.  But every single feature I've ever used on the Celerra or Clariion, I was able to configure completely from start to finish using the GUI.  Installing a Celerra from scratch even uses a GUI based installation wizard.  Comparing Clariion Storage Groups with NetApp Initiator groups and LUN maps isn't even fair.  For MS Exchange, I mapped about 50 LUNs to the ESX cluster, which took about 30 minutes in FilerView.  On the Clariion, the same operation is done by just editing the Storage Group and checking each LUN, taking only a couple minutes for the entire process.

Now, all of the above commentary has to do with the management tools, UIs, and to some degree personal preferences, and does not have any bearing on the equipment or underlying functionality.  There are, of course, optional management tools like Operations Manager, Provisioning Manager, and Protection Manager available from NetApp, just as there is Control Center from EMC (which incidentally can monitor the NetApp) or Command Central from Symantec.  Depending on your overall needs, you may want to look at optional management tools; or, FilerView may be perfectly fine.

In the next post,  I’ll get into more specifics about how the Exchange 2007 CCR cluster turned out in this new environment, along with some notes on making CCR truly redundant.  I’ve also been working on the NAS side of the project, so I’ll also post about that some time soon.