
NetApp and EMC: Startup and First Impressions


In the last post, I talked about a project I am involved in right now to deploy NetApp storage alongside EMC for SAN and NAS.  Today, I’m going to talk about my first impressions of the NetApp during deployment and initial configuration.

First Impressions

I’m going to be pretty blunt — I have been working with EMC hardware and software for a while now, and I’m generally happy with the usability of their GUIs.  Over that time, I’ve used several major revisions of Navisphere Manager and Celerra Manager, and even more minor revisions, and I’ve never actually found a UI bug.  To be clear, EMC, IBM, NetApp, HDS, and every other vendor have bugs in their software, and they all do what they can to find and fix them quickly, but I just haven’t personally seen one in the EMC UIs despite using every feature offered by those systems.  (I have come across bugs in the firmware.)

Contrast that with the first day using the new NetApp, running the latest 7.3.1.1L1 code, where we discovered a UI problem in the first 10 minutes.  When attempting to add disks to an aggregate in FilerView, we could not select FC disk to add.  We could, however, add SATA disk to the FC aggregate.  The only way to get around the issue was to use the CLI via SSH.  As I mentioned in my previous post, our NetApp is actually an IBM nSeries, and IBM claims they perform additional QC before their customers get new NetApp code.
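For reference, the workaround looked roughly like the following.  This is a minimal sketch, assuming Data ONTap 7-mode "aggr add" syntax and invented names (filer hostname, aggregate, disk count and size); check the exact syntax against your own ONTap release before trying anything similar.

```python
# Minimal sketch of the SSH workaround for adding FC disks to an aggregate
# when FilerView wouldn't offer them.  Assumes 7-mode "aggr add" syntax and
# invented names; verify against your own ONTap version first.
import subprocess

FILER = "filer1.example.com"        # hypothetical filer hostname
COMMAND = "aggr add aggr1 4@300g"   # hypothetical: add four 300GB spindles to aggr1

# Run the command over SSH instead of FilerView; assumes key-based auth is
# already set up for the admin account.
result = subprocess.run(
    ["ssh", f"root@{FILER}", COMMAND],
    capture_output=True, text=True, check=False,
)
print(result.stdout or result.stderr)
```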

Shortly after that, we found a second UI issue in FilerView.  When creating a new Initiator group, FilerView populates the initiator list with the WWNs that have logged in to the filer.  Auto-populating is nice, but the problem is that FilerView was incorrectly parsing the WWNs of the server HBAs and populating the list with Node WWNs rather than Port WWNs.  We spent several hours trying to figure out why the ESX servers didn’t see any LUNs before we realized that the WWNs in the Initiator group were incorrect.  Editing the 2nd digit on each one fixed the problem.
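To make the fix concrete, here is an illustrative sketch.  On many HBAs the Port WWN differs from the Node WWN only in the second hex digit (a common convention, e.g. 21:00:... for the port versus 20:00:... for the node), which is why flipping that single digit in each initiator entry sorted out the LUN masking.  The WWNs below are made up.

```python
# Illustrative only: convert the Node WWNs FilerView populated into the Port
# WWNs the Initiator group actually needs.  Assumes the common HBA convention
# where node and port WWNs differ only in the second hex digit (20:... node
# vs 21:... port); the WWNs here are invented, not from the real environment.
def node_to_port_wwn(node_wwn: str) -> str:
    digits = list(node_wwn)
    digits[1] = "1"          # second digit: node ("0") -> port ("1")
    return "".join(digits)

node_wwns = [
    "20:00:00:e0:8b:12:34:56",   # hypothetical ESX host, HBA 0
    "20:00:00:e0:8b:65:43:21",   # hypothetical ESX host, HBA 1
]

for wwn in node_wwns:
    print(f"{wwn}  ->  {node_to_port_wwn(wwn)}")
```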

I find it interesting that these issues, which seemed easy to discover, made it through the QC process of two organizations.  ONTap 7.3.2RC1 is available now, but I don’t know if these issues were addressed.

Manageability

As far as FilerView goes, it is generally easy to use once you know how NetApp systems are provisioned.  The biggest drawback in an HA-Filer setup is the fact that you have to open FilerView separately for each Filer and configure each one as a separate storage system.  Two HA-Filer pairs? Four FilerView windows.  If you include the initial launch page that comes up before you get to the actual FilerView window, you double the number of browser windows open to manage your systems.  NetApp likes to mention that they have unified management for NAS and SAN, where EMC has two separate platforms, each with its own management tools.  But EMC treats the two storage processors (SPs) in a Clariion in a much more unified manner: provisioning is done against the entire Clariion, not per SP.  Further, Navisphere can manage many Clariions in the same UI, and Celerra Manager acts similarly for EMC NAS.  Six of one, half a dozen of the other, some say, except that I find I generally provision NAS storage and SAN storage at different times, and I’d rather have all of the controllers/filers in the same window than NAS and SAN in the same window.  Just my preference.

I should mention that NetApp recently released System Manager 1.0 as a free download.  This new admin tool does present all of the controllers in one view and may end up being a much better tool than FilerView.  For now, it’s missing too many features to be used 100% of the time, and it’s Windows-only since it’s based on MMC.  Which brings me to my other problem with managing the NetApp: neither FilerView nor System Manager can actually do everything you might need to do, and that means you end up in the CLI, FREQUENTLY.  I’m comfortable with CLIs, and they are extremely powerful for troubleshooting problems and especially for scripting batch changes, but I don’t like being forced into the CLI for general administration.  GUI-based management helps prevent potentially crippling typos and can make visualizing your environment easier.  During deployment, we kept going back and forth between FilerView and the CLI to configure different things.  Further, since we were using MultiStore (vFilers) for CIFS shares and disaster recovery, we were stuck in the CLI almost entirely, because System Manager can’t even see vFilers and FilerView can only create them and attach volumes.

Had I not been managing Celerra and Clariion for so long, I probably wouldn’t have noticed the above problems.  After several years of configuring CIFS, NFS, iSCSI, Virtual DataMovers, IP Interfaces, Snapshots, Replication, DR Failover, etc. on Celerra, as well as literally thousands of LUNs for hundreds of servers on Clariion, I don’t recall EVER being forced to use the CLI.  CelerraCLI and NaviCLI are very powerful, I have written many scripts leveraging them, and I’ll use the CLI when troubleshooting an issue.  But every single feature I’ve ever used on the Celerra or Clariion I was able to configure completely, from start to finish, in the GUI.  Installing a Celerra from scratch even uses a GUI-based installation wizard.  Comparing Clariion Storage Groups with NetApp Initiator groups and LUN maps isn’t even fair.  For MS Exchange, I mapped about 50 LUNs to the ESX cluster, which took about 30 minutes in FilerView.  On the Clariion, the same operation is done by simply editing the Storage Group and checking each LUN, taking only a couple of minutes for the entire process.

Now, all of the above commentary has to do with the management tools, UIs, and to some degree personal preferences, and does not have any bearing on the equipment or underlying functionality.  There are, of course, optional management tools like Operations Manager, Provisioning Manager, and Protection Manager available from NetApp, just as there is Control Center from EMC (which incidentally can monitor the NetApp) or Command Central from Symantec.  Depending on your overall needs, you may want to look at optional management tools; or, FilerView may be perfectly fine.

In the next post,  I’ll get into more specifics about how the Exchange 2007 CCR cluster turned out in this new environment, along with some notes on making CCR truly redundant.  I’ve also been working on the NAS side of the project, so I’ll also post about that some time soon.

NetApp and EMC: Real world comparisons


I’ve been tasked recently on a project to increase availability of applications through the use of multiple/disparate storage systems.  This environment has heavily invested in EMC Clariion and Celerra storage systems over the past few years and needed a non-EMC platform from which to build the second half of a redundant storage environment.  For various reasons I won’t go into here, we chose IBM nSeries as that second platform. (Since the IBM system is rebranded NetApp FAS, I will refer to this as a NetApp filer.)  I’ve been working on implementing the new equipment as well as integrating it into the Business Continuity strategy.

The overall strategy is to continue to use the EMC Clariion/Celerra systems for production and disaster recovery replication and split applications between and across the two storage platforms for local redundancy.  The NetApp will also perform disaster recovery replication for some of the applications.  Here’s a really simple diagram that might help if the description is confusing:

[Diagram: EMC and NetApp Redundancy]

Now this may sound easy, but it is, in fact, NOT straightforward.  This strategy requires close coordination with application owners and careful planning.  As we move forward on this project, I’ll talk about various idiosyncrasies, caveats, and problems we’ve faced, how we got around them, and I’ll also talk a lot about the differences between the Clariion/Celerra and NetApp platforms’ features and functionality, application support, and manageability.  These comparisons will include using both systems with FiberChannel connections as well as CIFS/NFS NAS, all in conjunction with DR replication and failover.

To start off, I figure we should compare some of the terminology between EMC and NetApp systems.  Some terms don’t directly translate, but I matched them up as closely as I could and noted where there is no equivalent.  Below are two tables: one for block storage and the other for NAS storage.

[Table: EMC-NetApp Block Storage Terminology]

[Table: EMC-NetApp NAS Storage Terminology]

In the next update, I’ll start talking about the deployment itself.  The point of these articles is to discuss the differences, advantages, and disadvantages of each platform so that you can understand how each one might work in your environment.  I do not intend to disparage either platform or vendor.  I will try to be as vendor-agnostic as possible, and I do feel I have a somewhat unique position of comparing new and recent hardware and firmware from both vendors, in the same production capacities, simultaneously, in the same environment.  I am NOT comparing old ONTap code to new FLARE/DART code or vice versa, nor am I comparing old Clariion CX hardware to new NetApp/IBM hardware, etc.

Stay tuned!

Backup Everything – DR and Offsite copies


Okay, now that I’ve talked about backing up the datacenter with NetBackup and DataDomain, and backing up remote sites with NetBackup and PureDisk, it’s time to discuss how to get all that data offsite to protect against a catastrophic event at the datacenter.

As mentioned before, we have a primary datacenter with the majority of our systems, including the backup environment, and a secondary “disaster recovery” datacenter to which we replicate tier 1 applications for business continuity purposes.  Since we really wanted to get away from using tapes and instead store the backups on disk in our datacenter, we have a second backup environment in the DR datacenter and we replicate the backup data there.

There are several ways to replicate backup data between two sites, but most of them have drawbacks:

1.) Duplicate the backup data from disk to tape and ship the tapes to the remote site to be ready for restore.  This is the easiest and probably cheapest way, but there’s that pesky tape yet again with its media handling and shipping.  And restore could take a while, since you have to deal with restoring the catalog from tape, then importing the media, etc.

2.) Duplicate the backup data directly from the local disk to the disk in the second location across the WAN.  This is not very feasible with any significant amount of data, because every byte of data that is backed up in the datacenter has to be copied across the much slower WAN.  It could take many days to duplicate a single night’s backup.  You’d also need a special catalog backup job that wrote to a storage device across the WAN.  The upside here is that the backup application knows there is a second copy of the data and knows how to find it.

3.) Replicate the data with the backup storage device’s native replication.  Whether it’s PureDisk, Avamar, or DataDomain, pretty much every source-based or target-based deduplication solution has replication built in that leverages the deduplication to reduce the amount of data that traverses the WAN.  The advantage here is that you can have a copy of all of your backup data in a second location in a much shorter time than with a traditional copy process.  If your deduplication device stores the data with 10:1 compression, then your WAN usage is reduced by 90%, and the savings in practice are actually better than that.  The drawback is that the backup application (hence the catalog) has no knowledge that there is a second copy of the backup data, and after recovering the catalog you would need to import all of the disk-based media, which could take a long time.

4.) Leverage NetBackup Lifecycle Policies with Symantec OpenStorage (OST) and an OST-capable backup storage system like DataDomain or PureDisk (with PDDO).  Basically, this has all the advantages of option #2, where the duplication is catalog-aware, combined with the WAN bandwidth savings of option #3.  Time to copy the data offsite is much shorter due to deduplication, and time to restore is very fast since the data is already in the catalog and available on disk.

OpenStorage (OST) is a network protocol that Symantec developed to interface with disk-based backup storage systems, and DataDomain was an early adopter of it.  OST allows NetBackup to control replication between OST-capable storage systems and keep track of the replicated copies of backups in the catalog just as if NetBackup had made both copies itself.  OST is also used as the protocol to send the backup data to the storage device, as opposed to CIFS/NFS or VTL.  DataDomain appliances support OST, as does PureDisk when used in conjunction with the PDDO option discussed earlier.  In NetBackup, replication controlled by OST is called “optimized duplication” and is managed primarily through Lifecycle Policies.

Traditionally, when creating NetBackup job policies, the administrator specifies a Storage Unit (either a disk storage unit or a tape library or drive) that the job policy will send backups to.  Lifecycle Policies are treated like Storage Units as far as the Job Policy is concerned, but the Lifecycle Policy includes a list of storage units, each with its own data retention, that the backup data must be written to in order for NetBackup to consider the data fully protected.  Typically there is a “Backup” target, which is where the actual data coming from the client is stored, followed by one or more “Duplication” targets.  After the backup job completes, NetBackup will copy the backup data from the “backup” location to all of the “duplication” locations.  This works with pretty much any type of storage, and you can mix and match tape and disk in the same policy.  Since these are duplication operations, NetBackup will read ALL of the data from the backup location and write ALL of the data to each duplication location.  This can take a long time even on the local network, and trying to offsite a lot of data over the WAN is not very feasible.

With OST, the lifecycle policy operates exactly the same except that it uses “optimized duplication”, instructing the storage device to copy the file rather than performing the copy through a media server.  So in the case of DataDomain, OST issues the command to the DDR, and the DDR then copies the file to the second DDR in the remote site, getting all the benefits of deduplication and compression between the two.  The media server doesn’t actually do any work.  Once the duplication is complete, the DDR notifies NetBackup and the catalog is updated with a record of the second copy of the backup.  Lifecycle Policies are fully automated; you can’t even restart a failed duplication, so in the event of a transient failure like a WAN hiccup, NetBackup will retry the duplication job forever until it succeeds in order to satisfy the lifecycle policy.

As you can probably surmise, this is REALLY nice for a tape-less backup environment.  Our DD690 offsites over 9TB of data every night DURING the backup window.  When the last backup job completes, the offsite copies are complete within 30 minutes.  And there is absolutely no management of the offsite process or duplication jobs beyond configuring the lifecycle policies up front.  The drawback to regular NetBackup lifecycle policies is that all duplications are taken from the initial backup copy, which limits what you can do with the copies.

Enter NetBackup 6.5.4…  Despite the small 6.5.3 -> 6.5.4 version number change, the 6.5.4 release had quite a few new features added.  The biggest one was a revamping of the Lifecycle Policy engine to allow for nested duplications.  Now you can create a copy of a backup, then create multiple copies from the copy, then create copies from the other copies.  Why is this useful?

Remember when I discussed using NetBackup with PDDO to back up remote sites?  Well, the data backed up from the remote site is all stored in the primary datacenter, and we need to get the second copy to the DR datacenter.  Plus, we wanted to have a small cache of recently backed up data sitting on the remote media server for fast restore.  Nested lifecycles are the key, as the sketch below illustrates.  The lifecycle writes the initial backup copy onto the media server’s local disk, which is configured as a capacity-managed staging area (ie: it stores as much as it can and expires data when it needs more space for new backups).  The lifecycle then creates a duplicate of the backup onto the PureDisk storage unit in the primary datacenter.  Since bandwidth to the remote site is very limited, we don’t want to copy the data from the remote site twice, so the lifecycle has a second duplication nested under the first to copy it to the DR datacenter.  The source of the second copy is the primary datacenter copy, NOT the remote media server copy.
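Here is a toy model of that hierarchy.  To be clear, this is not NetBackup configuration syntax, just an illustration of how the nested copies relate to each other; the storage unit names and retentions are invented.

```python
# Toy model of a nested NetBackup 6.5.4-style lifecycle policy -- an
# illustration of the copy hierarchy described above, not actual NetBackup
# configuration syntax.  Names and retentions are invented.
from dataclasses import dataclass, field

@dataclass
class Target:
    name: str                       # storage unit name
    operation: str                  # "backup" or "duplication"
    retention: str                  # e.g. "4 weeks", "capacity-managed"
    children: list = field(default_factory=list)  # copies made FROM this copy

# Remote-office lifecycle: initial backup to local staging disk, one copy
# across the WAN to the primary datacenter, DR copy made from that copy.
remote_office_lifecycle = Target(
    name="remote-media-server-staging",
    operation="backup",
    retention="capacity-managed",
    children=[
        Target(
            name="puredisk-primary-dc",
            operation="duplication",
            retention="4 weeks",
            children=[
                # Nested: sourced from the primary-DC copy, so the data
                # only crosses the slow remote-office WAN link once.
                Target("puredisk-dr-dc", "duplication", "4 weeks"),
            ],
        ),
    ],
)

def print_tree(t: Target, indent: int = 0) -> None:
    print("  " * indent + f"{t.operation}: {t.name} (retention: {t.retention})")
    for child in t.children:
        print_tree(child, indent + 1)

print_tree(remote_office_lifecycle)
```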

Where else can we use this?  Let’s consider our tape-less datacenter backups.  We back up the clients to the DataDomain in our primary datacenter, then, using a lifecycle policy and OST, create a copy on the DataDomain in the DR datacenter.  If we also wanted to have a tape copy for long-term archive or vaulting, we could create a nested duplication to make a copy to a tape library in the DR datacenter from the disk copy that is also in the DR datacenter.  Without nested lifecycles, the only workable solution would be to create the tape in the primary datacenter.  Every copy of the backup made via the lifecycle policy, whether it uses OST or not, is maintained in the catalog and easily used for restore.  Furthermore, using OST as the protocol between NetBackup and DataDomain actually increases throughput to the DataDomain DDR systems by approximately 2X versus VTL/CIFS/NFS.

Now to the caveats.  Optimized duplication via OST is only available when you are using OST as the protocol between the media server and the storage unit.  This means it doesn’t work with VTL, even when the DataDomain IS the VTL.  OST only works over an Ethernet network, which is why we skipped VTL completely and used 10Gbps networks for the DDR connections.  We even skipped VTL/tape for the NAS systems, connected them directly to the 10Gbps network, and use 3-way NDMP to back them up over the network, through the media servers, to the DataDomain.  We get the benefit of lifecycle policies, optimized duplication, and, as I may have mentioned before, no pesky tape even with NDMP/NAS backups.  And the interesting thing is that with the 10Gbps connection, the NDMP dumps are faster than direct fiber to tape.

There were other enhancements to NetBackup 6.5.4 centered around OST functionality but the lifecycle policy improvements were huge in my opinion.

To cover the catalog replication, we run NetBackup hot catalog backups to a CIFS share that is hosted by the DataDomain.  The DDR replicates that share using DataDomain native replication to the DDR in the DR datacenter, where the same data is available via a similar CIFS share.  Our standby NetBackup master server is already connected to the CIFS share for catalog restore and connected to the DDR via OST.  A single operation restores the catalog from the replicated copy.  In a real disaster, we can begin restoring user data from the DR datacenter within 30 minutes.

Capacity vs. Performance: De-duplication


This is the 3rd part of a multi-part discussion on capacity vs performance in SAN environments.  My previous post discussed the use of thin provisioning to increase storage utilization.  Today we are going to focus on a newer technology called Data De-Duplication.

Data De-Duplication can be likened to an advanced form of compression.  It is a way to store large amounts of data with the least amount of physical disk possible.

De-duplication technology was originally targeted at lowering the cost of disk-based backup.  DataDomain (recently acquired by EMC Corp) was a pioneer in this space.  Each vendor has their own implementation of de-duplication technology but they are generally similar in that they take raw data, look for similarities in relatively small chunks of that data and remove the duplicates.  The diagram below is the simplest one I could find on the web.  You can see that where there were multiple C, D, B, etc blocks in the original data, the final “de-duplicated” data has only one of each.  The system then stores metadata (essentially pointers) to track what the original data looked like for later reconstruction.
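As a rough sketch of the principle, the following shows fixed-size chunking with hash fingerprints.  Real products use variable-length chunking, stronger collision handling, and far more efficient indexes, so treat this purely as an illustration of the idea, not any vendor’s implementation.

```python
# Very simplified illustration of block-level de-duplication: split the data
# into fixed-size chunks, keep one copy of each unique chunk, and store a
# list of fingerprints (the "metadata") to reconstruct the original later.
# Real products use variable-length chunking and much smarter indexes; this
# only shows the principle.
import hashlib

CHUNK_SIZE = 4096   # 4KB chunks, similar in spirit to NetApp's 4KB blocks

def dedupe(data: bytes):
    store = {}       # fingerprint -> unique chunk
    recipe = []      # ordered fingerprints describing the original data
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha1(chunk).hexdigest()
        store.setdefault(fp, chunk)   # only the first copy of a chunk is kept
        recipe.append(fp)
    return store, recipe

def rehydrate(store, recipe) -> bytes:
    return b"".join(store[fp] for fp in recipe)

# Highly repetitive sample data de-dupes very well.
data = (b"A" * CHUNK_SIZE + b"B" * CHUNK_SIZE) * 100
store, recipe = dedupe(data)
assert rehydrate(store, recipe) == data
print(f"logical: {len(data)} bytes, physical: {sum(len(c) for c in store.values())} bytes")
```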

[Diagram: image from eWeek Article]

The first and most widely used implementations of de-dupe technology were in the backup space, where much of the same data is backed up during each backup cycle and many days of history (retention) must be maintained.  Compression ratios using de-duplication alone can easily exceed 10:1 in backup systems.  The neat thing here is that when the de-duplication technology works at the block level (rather than the file level), duplicate data is found across completely disparate data sets.  There is commonality between Exchange email, Microsoft Word, and SQL data, for example.  In a 24 hour period, backing up 5.7TB of data to disk, the de-dupe ratio in my own backup environment is 19.2X, plus an additional 1.7X of standard compression on the post-de-dupe data, consuming only 173.9GB of physical disk space.  The entire set of backed up data, totaling 106TB currently stored on disk, consumes only 7.5TB of physical disk.  The benefits are pretty obvious: we can store large amounts of data in much less physical disk space.
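A quick sanity check of those numbers (treating 1TB as 1024GB):

```python
# Back-of-the-envelope check of the ratios quoted above, using the figures
# from the post.  Not a sizing tool, just arithmetic.
nightly_tb = 5.7
dedupe_ratio = 19.2
compression_ratio = 1.7

overall = dedupe_ratio * compression_ratio      # ~32.6x combined reduction
physical_gb = nightly_tb * 1024 / overall       # ~179GB, in line with the ~174GB observed
retained_ratio = 106 / 7.5                      # ~14x across everything retained on disk
print(f"{overall:.1f}x overall, ~{physical_gb:.0f}GB physical per night, {retained_ratio:.1f}x on retained data")
```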

There are numerous de-duplication systems available for backup applications — DataDomain, Quantum DXi, EMC DL3D, NetApp VTL, IBM Diligent, and several others.  Most of these are “target-based” de-duplication systems because they do all the work at the storage layer with the primary benefit being better use of disk space.  They also integrate easily into most traditional backup environments.  There are also “source-based” de-duplication systems — EMC Avamar and Symantec PureDisk are two primary examples.  These systems actually replace your existing backup application entirely and perform their work on the client machine that is being backed up.  They save disk space just like the other systems but also reduce bandwidth usage during the backup which is extremely useful when trying to get backups of data across a slow network connection like a WAN.

So now you know why de-duplication is good and how it helps in a backup environment.  But what about using it for primary storage, like NAS/SAN environments?  Well, it turns out several vendors are playing in that space as well.  NetApp was the first major vendor to add de-duplication to primary storage with their A-SIS (Advanced Single Instance Storage) product.  EMC followed with their own implementation of de-duplication on Celerra NAS.  The two are entirely different in their implementation but attempt to address the same problem of ballooning storage requirements.

EMC Celerra de-dupe performs file-level single-instancing to eliminate duplicate files in a filesystem, and then uses a proprietary compression engine to reduce the size of the files themselves.  Celerra does not deal with portions of files.  In practice, this feature can significantly reduce the storage requirements for a NAS volume.  In a test I performed recently for storing large amounts of Syslog data, Celerra de-dupe easily saved 90% of the disk space consumed by the logs and it hadn’t actually touched all of the files yet.

NetApp’s A-SIS works at a 4KB block size and compares all data within a filesystem regardless of the data type.  Besides NAS shares, A-SIS also works on block volumes (ie: FiberChannel and iSCSI LUNs) where EMC’s implementation does not.  Celerra has an advantage when working with files which contain high amounts of duplication in very small block sizes (like 50 bytes) since NetApp looks at 4KB chunks.  Celerra’s use of a more traditional compression engine saves more space in the syslog scenario but NetApp’s block level approach could save more space than Celerra when dealing with lots of large files.

The ability to work on traditional LUNs presents some interesting opportunities, especially in a VMWare/Hyper-V environment.  As I mentioned in my previous post, virtual environments have lots of redundant data, since there are many systems potentially running the same operating system sharing the disk subsystem.  If you put 10 Windows virtual machines on the same LUN, de-duplication will likely save you tons of disk space on that LUN.  There are limitations that prevent the full benefits from being realized, though.  VMWare best practices require you to limit the number of virtual machine disks sharing the same SAN LUN for performance reasons (VMFS locking issues), and A-SIS can only de-dupe data within a LUN, not across multiple LUNs.  So in a large environment your savings are limited.  NetApp’s recommendation is to use NFS NAS volumes for VMWare instead of FC or iSCSI LUNs, because you can eliminate the VMFS locking issue and place many VMs on a single very large NFS volume which can then be de-duplicated.  Unfortunately, there are limits on certain VMWare features when using NFS, so this may not be an option for some applications or environments.  Specifically, VMWare Site Recovery Manager, which coordinates site-to-site replication and failover of entire VMWare environments, does not support NFS as of this writing.

When it comes to de-duplication’s impact on performance it’s kind of all over the map.  In backup applications, most target based systems either perform the work in memory while the data is coming in or as a post-process job that runs when the backups for that day have completed.  In either case, the hardware is designed for high throughput and performance is not really a problem.  For primary data, both EMC and NetApp’s implementations are post-process and do not generally impact write performance.  However, EMC has limitations on the size of files that can be de-duplicated before a modification to a compressed file causes a significant delay.  Since they also limit de-duplication to files that have not been accessed or modified for some period of time, the problem is minimal in most environments.  NetApp claims to have little performance impact to either read or writes when using A-SIS.  This has much to do with the architecture of the NetApp WAFL filesystem and how A-SIS interacts with it but it would take an entirely new post to describe how that all works.  Suffice it to say that NetApp A-SIS is useful in more situations than EMC’s Celerra De-duplication.

Where I do see potential problems with performance, regardless of the vendor, is in the same situation as thin provisioning.  If your application requires 1000 IOPS but you’ve only got 2 disks in the array because of the disk space savings from thin provisioning and/or de-duplication, the application performance will suffer.  You still need to service the IOPS, and each disk can handle only a finite number of them (generally 100-200 for FC/SCSI).  Flash/SSD changes the situation dramatically, however.
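The back-of-the-envelope version of that constraint, using the ballpark figures above:

```python
# Rough spindle math for the example above: capacity savings don't reduce the
# IOPS the application needs, and each spindle only delivers so many.
# Ballpark numbers from the post, not a sizing tool.
import math

required_iops = 1000
iops_per_disk = 150     # middle of the 100-200 range quoted for FC/SCSI disks

disks_for_performance = math.ceil(required_iops / iops_per_disk)
print(f"Need at least {disks_for_performance} spindles to service {required_iops} IOPS,")
print("even if de-dupe and thin provisioning mean the data fits on 2 disks.")
```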

Right now I believe that de-duplication is extremely useful for backups, but not quite ready for prime-time when it comes to primary storage.  There are just too many caveats to make any general recommendations.  If you happen to purchase an EMC Celerra or NetApp FAS/IBM nSeries that supports de-duplication, make sure to read all the best-practices documentation from the vendor and make a decision on whether your environment can use de-duplication effectively, then experiment with it in a lab or dev/test environment.  It could save you tons of disk space and money or it could be more trouble than it’s worth.  The good thing is that it’s pretty much a free option from EMC and NetApp depending on the hardware you own/purchase and your maintenance agreements.

Capacity vs. Performance: Thin Provisioning


In my previous post, where I discussed the problem of unusable (or slack) disk space on a SAN, I promised a follow-up with techniques on how to increase storage utilization.  I realized that I should discuss some related technologies first and then follow that up with how to put it all together.  So today I start by talking about Thin Provisioning.  I will then follow up with an explanation of De-Duplication and finally talk about how to use multiple technologies together to get the most use out of your storage.

So what is Thin Provisioning?  It is a technology that allows you to create LUNs or Volumes on a storage device such that the LUN/Volume(s) appear to the host or client to be larger than they actually are.   In general, NAS clients and SAN attached hosts see “Thin Provisioned” LUNs just as they see any other LUN but the actual amount of disk space used on the storage device can be significantly smaller than the provisioned size.  How does this help increase storage utilization?  Well, with thin provisioning you provide applications with exactly the storage they want and/or need but you don’t have to purchase all of the disk capacity up front.

Let’s start with a comparison of using standard LUNs vs thin LUNs with a theoretical application set:

Say we have 3 servers, each running Windows Server.  The operating system partition is on local disk and application data drives are on SAN.  Each server runs an application that collects and stores data over time and the application owner expects that over the next year or so the data will grow to 1TB on each server.  In this particular case we also know that the application’s performance requirements are relatively low.

With traditional provisioning we might create 3 LUNs that are 1TB each and present them to the servers.  This provides the application with room for the expected growth.  Using 300GB FC disks we can carve out three 4+1 RAID5 sets, create one LUN in each and it would work fine.  Alternatively we could use wide striping (ie: a MetaLUN on EMC Clariion) and put all three LUNs on the same 15 disks.  Either way we’ve just burned 15 disks on the storage array based on uncertain future requirements.  If we were stingier with storage we could create smaller LUNs (500GB for example) and use LUN expansion technology to increase the size when the application data fills the disk to that capacity.

In the Thin Provisioning world we still create three 1TB LUNs, but they start out consuming no space.  The pool of disk that the LUNs get provisioned from doesn’t even need to have 3TB of capacity.  As the application data grows over the next 12 months or longer, the pool only needs to grow to accommodate the actual amount of data stored.  Depending on the storage array, we can add disks to the pool one at a time.  So on day one we start with 3 disks in the pool and then add disks one by one throughout the year.  We can then create additional LUNs for other applications without adding disks.  As we add disks to the pool, we expand the capacity available for all of the LUNs to grow (up to each LUN’s maximum size) and we increase performance for ALL of the LUNs in the pool, since we are adding spindles.  The real-world benefits come as we consolidate numerous LUNs into a single disk pool.
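To make the mechanics concrete, here is a toy model of that scenario: three LUNs that each advertise 1TB to their hosts, backed by a small shared pool that only hands out capacity as data is actually written, and that can be grown a spindle at a time.  This is purely illustrative, ignores RAID overhead, and is not modeled on any particular array’s implementation.

```python
# Toy illustration of thin provisioning: each LUN advertises its full
# provisioned size to the host, but capacity is only taken from the shared
# pool as data is actually written.  Illustrative only; ignores RAID overhead
# and is not modeled on any particular vendor's array.
class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.allocated_gb = 0

    def add_spindle(self, gb: int) -> None:
        self.physical_gb += gb           # growing the pool benefits every LUN in it

    def allocate(self, gb: int) -> None:
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("pool out of space -- time to add spindles")
        self.allocated_gb += gb

class ThinLUN:
    def __init__(self, pool: ThinPool, provisioned_gb: int):
        self.pool = pool
        self.provisioned_gb = provisioned_gb   # what the host sees
        self.written_gb = 0                    # what actually consumes pool capacity

    def write(self, gb: int) -> None:
        self.pool.allocate(gb)
        self.written_gb += gb

pool = ThinPool(physical_gb=900)               # e.g. three 300GB spindles on day one
luns = [ThinLUN(pool, provisioned_gb=1024) for _ in range(3)]   # three "1TB" LUNs
for lun in luns:
    lun.write(100)                             # each application has written 100GB so far

pool.add_spindle(300)                          # growth is handled a disk at a time
print(f"hosts see {sum(l.provisioned_gb for l in luns)}GB provisioned,")
print(f"but only {pool.allocated_gb}GB of the {pool.physical_gb}GB pool is actually in use")
```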

The nice thing about this approach is that we stop managing the size of individual LUNs and just manage the underlying disk pool as a whole.   And the cost-per-GB for SAN disk constantly goes down so we can spend only what we have to today, and when we add more later it will likely be a little cheaper.  Disk capacity utilization will be much higher in a thin model compared with the traditional/thick model.

The story gets even better in a virtual server environment such as with MS Hyper-V or VMWare ESX.  First, the virtual server OS drives are on the SAN in addition to the application data, and there can be multiple virtual disks on the same LUN.  Whether physical or virtual, we need to maintain some free space in the disks to keep applications running, plus with virtual systems we need some free space on the LUN for features of the virtualization technology like snapshots.  The net effect is that in a virtualized environment, disk utilization never gets much above 50% when slack space at both the virtual layer and inside the virtual servers is considered.  With thin provisioning we could potentially store twice the number of virtual servers on the same physical disks.

There are caveats, of course.  Maintaining performance is the primary concern.  Whether it is used in a thick LUN or a thin LUN, each disk delivers a fixed amount of performance.  Thin provisioning has no effect on the IOPS or bandwidth the application requires, nor on the IOPS the physical disks can handle.  So even if thin provisioning saves 50% of the disk space in your environment, you may not be able to use all of that reclaimed space before running into performance bottlenecks.  If the storage array has QOS features (ie: EMC Clariion NQM), it is possible to prioritize the more important applications in your disk pool to maintain performance where it matters.

Other problems that you may encounter have to do with interoperability.  For starters, some applications are not “thin-friendly”; ie: they write data in such a way as to negate any benefit that thin provisioning provides.  Also, while many storage arrays support thin provisioning, each has different rules about the use of thin LUNs.  For example, in some scenarios you can’t replicate thin LUNs using native array tools.  It pays to do your homework before choosing a new storage array or implementing thin provisioning.

I didn’t cover thin provisioning in NAS environments directly but the feature works in the same manner.  Thin volumes are provisioned from pools of storage and users/clients see a large amount of available disk space even if the disk pool itself is very small.  Since NAS is traditionally used for user home directories and departmental shares, absolute performance is usually not as much of a concern so thin provisioning is much easier to implement and in many cases is the default behavior or simply a check box on NAS appliances like EMC Celerra or NetApp FAS.

Thin provisioning is a powerful technology when used where it makes sense.  In my next post I’ll explain de-duplication technology and then talk about how these technologies can be used together plus some workarounds for the caveats that I’ve mentioned.