
Changes…


I’ve been absent from posting lately because there have been a lot of changes in my life. Most notably, I made a change in my career and have joined EMC Corporation as a Sr. Technical Consultant.

In this new role, I’ll be helping customers overcome challenges related to storing and managing information.

I’m not sure what my future topics will be other than to say they will still be storage related.  I expect that topics will come from the challenges my customers are facing and how they can be solved with today’s technology.

I promise to be as objective as possible despite my new employer being EMC; the corporate blogging policy is quite reasonable.

As before, the opinions expressed here are my own and not those of my employer or any other person or company.  All company and product names mentioned in this blog are copyrighted by their respective companies.

Capacity vs. Performance: Thin Provisioning – Reclaiming Free Space


A comment about HDS’s Zero Page Reclaim on one of my previous posts got me thinking about the effectiveness of thin provisioning in general.  In that previous post, I talked about the trade-offs between increased storage utilization through the use of thin-provisioning and the potential performance problems associated with it.

There are intrinsic benefits that come with the use of thin provisioning.  First, new storage can be provisioned for applications without nearly as much planning.  Next, application owners get what they want, while storage admins can show they are utilizing the storage systems effectively.  Also, rather than managing the growth of data in individual applications, storage admins are able to manage the growth of data across the enterprise as a whole.

Thin provisioning can also provide performance benefits… For example, consider a set of virtual Windows servers running across several LUNs contained in the same RAID group. Each Windows VM stores its OS files in the first few GB of its respective VMDK file. The VMDK files are laid out one after another in each LUN, with some free space at the end. In essence, we have a whole bunch of OS sections separated by gaps of empty space. If all of the VMs boot at approximately the same time, the disk heads have to move continuously across the entire disk, increasing disk latency.

Now take the same disks, configured as a thin pool, and create the same LUNs (as thin LUNs) and the same VMs.  Because thin-provisioning in general only writes data to the physical disks as it’s being written by the application, starting from the beginning of the disk, all of those Windows VMs’ OS files will be placed at the beginning of the disks.  This increased data locality will reduce IO latency across all of the VMs.  The effect is probably minor, but reduced disk latency translates to possibly higher IOPS from the same set of physical disks.  And the only change is the use of thin-provisioning.

So back to HDS Zero Page Reclaim. The biggest problem with thin provisioning is that it doesn’t stay thin for long. Windows NTFS, for example, is particularly NOT thin-friendly since it favors previously untouched disk space for new writes rather than overwriting deleted files. This behavior eventually causes a thin LUN to grow to its maximum size, even though the actual amount of data stored in the LUN may not change. And Windows isn’t the only one with this problem. This means that thin provisioning may make provisioning easier, and possibly improve IO latency, but it might not actually save you any money on disk. This is where HDS’s Zero Page Reclaim can help. Hitachi’s Dynamic Provisioning (with ZPR) can scan a LUN for sections where all the bytes are zero and reclaim that space for other thin LUNs. This is particularly useful for converting thick LUNs into thin LUNs. But it can only see blocks of zeros, so it won’t necessarily see space freed up by deleting files. Hitachi’s own documentation points out that many file systems are not thin-friendly, and ZPR won’t help with long-term growth of thin LUNs caused by actively writing and then deleting data.

Although there are ways to script the writing of zeros to free space on a server so that ZPR can reclaim that space, you would need to run that script on all of your servers, requiring a unique tool for each operating system in your environment. The script would also have to run periodically, since the file system will grow again afterward.
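As a rough illustration, a zero-fill script might look something like the sketch below (Python; the mount point and headroom values are made-up assumptions, and a real version would need per-OS variants and more careful safety margins):

```python
import os
import shutil

# Hypothetical zero-fill sketch: write zeros into free space so a
# zero-page-reclaim feature (e.g. HDS ZPR) can reclaim it, then delete
# the file. Leave headroom so the file system never actually fills up.
MOUNT_POINT = "/data"            # assumption: mount point of the thin LUN
HEADROOM_BYTES = 5 * 1024**3     # always keep 5 GB free
CHUNK = 64 * 1024 * 1024         # write zeros in 64 MB chunks

zero_file = os.path.join(MOUNT_POINT, ".zerofill.tmp")
chunk = b"\0" * CHUNK

try:
    with open(zero_file, "wb") as f:
        while shutil.disk_usage(MOUNT_POINT).free > HEADROOM_BYTES + CHUNK:
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force the zeros down to the array
finally:
    if os.path.exists(zero_file):
        os.remove(zero_file)      # give the space back once the zeros are on disk
```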

NetApp’s SnapDrive tool for Windows can scan an NTFS file system, detect deleted files, then report the associated blocks back to the Filer so they can be returned to the aggregate for use by other volumes/LUNs. The Space Reclamation scan can be run as needed, and I believe it can be scheduled, but it appears to be Windows-only. Again, this will have to be done periodically.

But what if you could solve the problem across most or all of your systems, regardless of operating system, regardless of application, with real-time reclamation?  And what if you could simultaneously solve other problems?  Enter Symantec’s Storage Foundation with Thin-Reclamation API.  Storage Foundation consists of VxFS, VxVM, DMP, and some other tools that together provide dynamic grow/shrink, snapshots, replication, thin-friendly volume usage, and dynamic SAN multipathing across multiple operating systems.  Storage Foundation’s Thin-Reclamation API is to thin-provisioning what OST is to Backup Deduplication.  Storage vendors can now add near-real-time zero page reclaim for customers that are willing to deploy VxFS/VxVM on their servers.  For EMC customers, DMP can replace PowerPath, thereby offsetting the cost.

As far as I know, 3PAR is the first and only storage vendor to write to Symantec’s thin-API, which means they now have the most dynamic, non-disruptive, zero-page-reclaim feature set on the market.  As a storage engineer myself, I have often wondered if VxVM/VxFS could make management of application data storage in our diverse environment easier and more dynamic.  Adding Thin-Reclamation to the mix makes it even more attractive.  I’d like to see more storage vendors follow 3PAR’s lead and write to Symantec’s API.  I’d also like to see Symantec open up both OST and the Thin-Reclamation API for others to use, but I doubt that will happen.

Backup Everything – DR and Offsite copies


Okay, now that I’ve talked about backing up the datacenter with NetBackup and DataDomain, and backing up remote sites with NetBackup and PureDisk, it’s time to discuss how to get all that data offsite to protect against a catastrophic event at the datacenter.

As mentioned before, we have a primary datacenter with the majority of our systems, including the backup environment, and a secondary “disaster recovery” datacenter to which we replicate tier 1 applications for business continuity purposes. Since we really wanted to get away from using tapes and instead store the backups on disk in our datacenter, we have a second backup environment in the DR datacenter and we replicate the backup data there.

There are several ways to replicate backup data between two sites, but most of them have drawbacks:

1.) Duplicate the backup data from disk to tape and ship the tapes to the remote site to be ready for restore. This is the easiest and probably cheapest way. But there’s that pesky tape yet again, with its media handling and shipping. And restore could take a while since you have to deal with restoring the catalog from tape, then importing the media, etc.

2.) Duplicate the backup data directly from the local disk to the disk in the second location across the WAN. This is not very feasible with any significant amount of data because every byte of data that is backed up in the datacenter has to be copied across the much slower WAN. It could take many days to duplicate a single night’s backup. You’d also need a special Catalog backup job that wrote to a storage device across the WAN. The good here is that the backup application knows there is a second copy of the data and knows how to find it.

3.) Replicate the data with the backup storage device’s native replication. Whether it’s PureDisk, Avamar, or DataDomain, pretty much every source-based or target-based deduplication solution has replication built in that leverages the deduplication to reduce the amount of data that traverses the WAN. The advantage here is that you can have a copy of all of your backup data in a second location in a much shorter time than with a traditional copy process. If your deduplication device stores the data with 10:1 compression, then your WAN usage is reduced by 90%; in practice the savings are actually better than that (a rough back-of-the-envelope sketch of the bandwidth savings follows this list). The drawback is that the backup application (hence the catalog) has no knowledge that there is a second copy of the backup data, and after recovering the catalog you would need to import all of the disk-based media, which could take a long time.

4.) Leverage NetBackup Lifecycle Policies with Symantec OpenStorage (OST) and an OST-capable backup storage system like DataDomain or PureDisk (with PDDO). Basically this has all the advantages of option #2, where the duplication is catalog-aware, combined with the WAN bandwidth savings of option #3. Time to copy the data offsite is much shorter due to deduplication, and time to restore is very fast since the data is already in the catalog and available on disk.
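To put rough numbers on the bandwidth savings in options 3 and 4, here is a back-of-the-envelope sketch. The nightly volume matches the ~9TB figure mentioned later in this post, but the WAN speed and the 10:1 reduction ratio are illustrative assumptions:

```python
# Back-of-the-envelope offsite transfer estimate. The nightly volume matches
# the ~9TB figure mentioned later in this post; the WAN speed and the 10:1
# deduplication ratio are illustrative assumptions.
NIGHTLY_BACKUP_TB = 9.0
WAN_GBPS = 1.0            # usable WAN bandwidth in gigabits per second
DEDUP_RATIO = 10.0        # data reduction on replicated (already-deduplicated) data

def transfer_hours(terabytes, gbps):
    bits = terabytes * 1e12 * 8
    return bits / (gbps * 1e9) / 3600

full_copy = transfer_hours(NIGHTLY_BACKUP_TB, WAN_GBPS)                 # ~20 hours
optimized = transfer_hours(NIGHTLY_BACKUP_TB / DEDUP_RATIO, WAN_GBPS)   # ~2 hours

print(f"Traditional duplication over the WAN: {full_copy:.1f} hours")
print(f"Deduplicated replication:             {optimized:.1f} hours")
```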

OpenStorage (OST) is a network protocol that Symantec developed to interface with disk-based backup storage systems, and DataDomain was an early adopter of it. OST allows NetBackup to control replication between OST-capable storage systems and keep track of the replicated copies of backups in the Catalog just as if NetBackup had made both copies itself. OST is also used as the protocol to send the backup data to the storage device, as opposed to CIFS/NFS or VTL. DataDomain appliances support OST, as does PureDisk when used in conjunction with the PDDO option discussed earlier. In NetBackup, replication controlled by OST is called “optimized duplication” and is controlled primarily through Lifecycle Policies.

Traditionally, when creating NetBackup job policies, the administrator specifies a Storage Unit (either a disk storage unit or a tape library or drive) that the job policy will send backups to. Lifecycle Policies are treated like Storage Units as far as the Job Policy is concerned, but a Lifecycle Policy includes a list of storage units, each with its own data retention, on which the backup data must be stored before NetBackup considers the data fully protected. Typically there is a “Backup” target, which is where the actual data coming from the client is stored, followed by one or more “Duplication” targets. After the backup job completes, NetBackup copies the backup data from the “backup” location to all of the “duplication” locations. This works with pretty much any type of storage, and you can mix and match tape and disk in the same policy. Since these are duplication operations, NetBackup will read ALL of the data from the backup location and write ALL of the data to each duplication location. This can take a long time even on the local network, and trying to offsite a lot of data over the WAN is not very feasible.

With OST, the lifecycle policy operates exactly the same except that it uses “optimized duplication”, instructing the storage device to copy the file itself rather than performing the copy through a media server. So in the case of DataDomain, OST issues the command to the DDR, the DDR then copies the file to the second DDR in the remote site, and the transfer gets all the benefits of deduplication and compression between the two. The media server doesn’t actually do any work. Once the duplication is complete, the DDR notifies NetBackup and the catalog is updated with a record of the second copy of the backup. Lifecycle Policies are fully automated (you can’t even restart a failed duplication manually), so in the event of a transient failure like a WAN hiccup, NetBackup will retry the duplication job indefinitely until it succeeds in order to satisfy the lifecycle policy.
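Conceptually, a lifecycle policy and the optimized-duplication decision look something like the sketch below. This is not NetBackup configuration syntax, just an illustration, and the storage unit names are made up:

```python
# Conceptual sketch only -- NOT NetBackup syntax. A lifecycle policy lists one
# "backup" destination plus one or more "duplication" destinations; an image
# isn't considered fully protected until a copy exists at every destination.
lifecycle_policy = {
    "name": "datacenter-offsite",
    "destinations": [
        {"type": "backup",      "storage_unit": "ddr-primary", "retention_days": 30},
        {"type": "duplication", "storage_unit": "ddr-dr",      "retention_days": 90},
    ],
}

def duplicate(image, source, target, ost_capable):
    """Illustrates the difference OST makes for the duplication step."""
    if ost_capable:
        # Optimized duplication: the source array replicates deduplicated,
        # compressed data directly to the target array; the media server only
        # issues the request and records the second copy in the catalog.
        return f"{source} replicates {image} to {target} (array-to-array)"
    # Traditional duplication: a media server reads every byte from the source
    # and writes every byte to the target.
    return f"media server copies {image} from {source} to {target} (full copy)"
```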

As you can probably surmise, this is REALLY nice for a tape-less backup environment. Our DD690 offsites over 9TB of data every night DURING the backup window. When the last backup job completes, the offsite copies are complete within 30 minutes. And there is absolutely no management of the offsite process or duplication jobs besides configuring the lifecycle policies up front. The drawback to regular NetBackup lifecycle policies is that all duplications are taken from the initial backup copy, which limits what you can do with the copies.

Enter NetBackup 6.5.4…  Despite the small 6.5.3 -> 6.5.4 version number change, the 6.5.4 release had quite a few new features added.  The biggest one was a revamping of the Lifecycle Policy engine to allow for nested duplications.  Now you can create a copy of a backup, then create multiple copies from the copy, then create copies from the other copies.  Why is this useful?

Remember when I discussed using NetBackup with PDDO to back up remote sites? Well, the data backed up from the remote site is all stored in the primary datacenter, and we need to get the second copy to the DR datacenter. Plus, we wanted to have a small cache of recently backed up data sitting on the remote media server for fast restore. Nested lifecycles are the key. The lifecycle writes the initial backup copy onto the media server’s local disk, which is configured as a capacity-managed staging area (i.e., it stores as much as it can and expires data when it needs more space for new backups). The lifecycle then creates a duplicate of the backup onto the PureDisk storage unit in the primary datacenter. Since bandwidth to the remote site is very limited and we don’t want to copy the data from the remote site twice, the lifecycle has a second duplication nested under the first to copy it to the DR datacenter. The source of the second copy is the primary datacenter copy, NOT the remote media server copy.
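Sketched out (again, not NetBackup syntax, and the storage unit names are invented), that nested hierarchy looks like this:

```python
# Conceptual sketch of the nested lifecycle described above -- not NetBackup
# syntax. Each duplication names the copy it is sourced from, so the DR copy
# is made from the primary-datacenter copy rather than re-reading the remote
# media server's staging disk across the slow WAN link a second time.
nested_lifecycle = {
    "backup": {
        "storage_unit": "remote-mediaserver-staging",    # capacity-managed local disk
        "duplications": [
            {
                "storage_unit": "puredisk-primary-dc",   # first copy, crosses the WAN once
                "duplications": [
                    {"storage_unit": "puredisk-dr-dc"},  # sourced from the primary-DC copy
                ],
            },
        ],
    },
}
```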

Where else can we use this? Let’s consider our tape-less datacenter backups. We back up the clients to the DataDomain in our primary datacenter, then, using a lifecycle policy and OST, create a copy on the DataDomain in the DR datacenter. If we also wanted a tape copy for long-term archive or vaulting, we could create a nested duplication to make a copy to a tape library in the DR datacenter from the disk copy that is also in the DR datacenter. Without nested lifecycles, the only workable solution would be to create the tape in the primary datacenter. Every copy of the backup made via the lifecycle policy, whether it uses OST or not, is maintained in the catalog and easily used for restore. Furthermore, using OST as the protocol between NetBackup and DataDomain actually increases throughput to the DataDomain DDR systems by approximately 2X vs. VTL/CIFS/NFS.

Now to the caveats… Optimized duplication via OST is only available when you are using OST as the protocol between the media server and the storage unit. This means it doesn’t work with VTL, even when the DataDomain IS the VTL. OST only works over an Ethernet network, which is why we skipped VTL completely and used 10Gbps networks for the DDR connections. We even skipped VTL/tape for the NAS systems, connected them directly to the 10Gbps network, and use 3-way NDMP to back them up over the network, through the media servers, to the DataDomain. We get the benefit of lifecycle policies, optimized duplication, and, as I may have mentioned before, no pesky tape even with NDMP/NAS backups. And the interesting thing is that with the 10Gbps connection, the NDMP dumps are faster than direct fiber to tape.

There were other enhancements in NetBackup 6.5.4 centered around OST functionality, but the lifecycle policy improvements were huge in my opinion.

To cover the catalog replication, we run NetBackup hot catalog backups to a CIFS share that is hosted by the DataDomain. The DDR replicates that share using DataDomain native replication to the DDR in the DR datacenter, where the same data is available via a similar CIFS share. Our standby NetBackup master server is already connected to the CIFS share for catalog restore and connected to the DDR via OST. A single operation restores the catalog from the replicated copy. In a real disaster we can begin restoring user data within 30 minutes from the DR datacenter.

Backup Everything – NetBackup does WAN


In a previous post I discussed the new backup environment I’ve been deploying, what solutions we picked, and how they apply to the datacenter. But I also mentioned that we had remote sites with systems we need to back up, without explaining how we addressed them. Frankly, the previous post was getting long, and backing up remote offices is tricky, so it deserved its own discussion.

Now that we had Symantec NetBackup running in the datacenter, backing up the bulk of our systems to disk by way of DataDomain, we needed to look at remote sites. For this we deployed Symantec NetBackup PureDisk. Despite the fact that it has NetBackup in the name, PureDisk is an entirely different product with its own servers, clients, and management interfaces. There are some integration points that are not obvious at first but become important later. Essentially PureDisk is two solutions in a single product: (1) a “source-dedupe” backup solution that can be deployed independently of any other solution, and (2) a “target-dedupe” backup storage appliance specifically integrated with the core NetBackup product via an option called PDDO.

As previously discussed, backing up a remote site across a WAN is best accomplished with a source-dedupe solution like PureDisk or Avamar. This is exactly what we intended to do. Most of our remote site clients are some flavor of UNIX or Windows, and installing the PureDisk clients was easily accomplished. Backup policies were created in PureDisk, and a little over a day later we had the first full backup complete. All subsequent nightly backups transfer very small amounts of data across the WAN because they are incremental backups AND because the PureDisk client deduplicates the data before sending it to the PureDisk server. The downside is that the PureDisk jobs have to be scheduled, managed, and monitored from the PureDisk interface, completely separate from the NetBackup administration console. Backups are sent to the primary datacenter and stored on the local PureDisk server, then the backed up data is replicated to the PureDisk server in the DR datacenter using PureDisk native replication. Restores can be run from either of the PureDisk servers but must un-deduplicate the data before sending it across the WAN, making restores much slower than backups. This was a known limitation and still meets our SLAs for these systems.

Our biggest hurdle with PureDisk was client OS support. Since we have a very diverse environment, we ran into a couple of clients running operating systems that PureDisk does not support: neither NetWare nor x86 Solaris is currently supported, and both were running in our remote sites.

We had a few options:

1.) Use the standard NetBackup client at the remote site and push all of the data across the WAN

2.) Deploy a NetBackup media server in the remote site with a tape library and send the tapes offsite

3.) Deploy a NetBackup media server in the remote site with a small DataDomain appliance and replicate

4.) Deploy a NetBackup media server and ALSO use PureDisk via the PDDO option (PureDisk Deduplication Option)

Option 1 is not feasible for any serious amount of data, Option 2 requires a costly tape library and some level of media handling every day, and Option 3 just plain costs too much money for a small remote site.

Option 4, using PDDO, leverages PureDisk’s “target-dedupe” persona and ends up being a very elegant solution with several benefits.

PDDO is a plug-in that installs on a NetBackup media server. The PDDO plug-in deduplicates data that is being backed up by that media server and sends it across the network to a PureDisk server for storage. The beauty of this option is that we were able to put a NetBackup media server in our remote site without any tape or dedicated backup storage. The data is copied from the client to the media server over the LAN, de-duplicated by PDDO, then sent over the WAN to the datacenter’s PureDisk server. We get the bandwidth and storage efficiencies of PureDisk while using standard NetBackup clients. A byproduct of this is that you get these PureDisk benefits without having to manage the backups in PureDisk’s separate management console. To reduce the effects of the WAN on the performance of the backup jobs themselves, and to make the majority of restores faster, we put some internal disk on the media server that the backup jobs write to first. After the backup job completes to the local disk, NetBackup duplicates the backup data to the PureDisk storage server, then duplicates another copy to the DR datacenter. This is all handled by NetBackup lifecycle policies, which became about 1000X more powerful with the 6.5.4 release. I’ll discuss the power of lifecycle policies, specifically with the 6.5.4 release, when I talk about OST later.

So the result of using PureDisk/PDDO/NetBackup together is a seamless solution, completely managed from within NetBackup, with all the client OS support the core NetBackup product has, the WAN efficiencies of source-dedupe, the storage efficiencies of target-dedupe, and the restore performance of local storage, but with very little storage in the remote site.

Remote Site Backup…  Done!!

For the near future, I’m considering putting NetBackup media servers with PDDO on VMware in all of the remote sites so I can manage all of the backups in NetBackup without buying any new hardware at all. This is not technically supported by Symantec, but there is no tape/SCSI involved so it should work fine. Did I mention we wanted to avoid tape as much as possible?

Incidentally, despite my love for Avamar, I don’t believe there is anything like PDDO available in the NetWorker/Avamar integration, and Avamar’s client OS support, while better than PureDisk’s, is still not quite as good as NetBackup’s or NetWorker’s.

Okay, so how does OST play into NetBackup, PureDisk, PDDO, and DataDomain?  What do the lifecycle policies have to do with it? And what is so damned special about lifecycle policies in NetBackup 6.5.4?  All that is next…

Capacity vs. Performance: Thin Provisioning


In my previous post, where I discussed the problem of unusable (or slack) disk space on a SAN, I promised a follow-up with techniques on how to increase storage utilization.  I realized that I should discuss some related technologies first and then follow that up with how to put it all together.  So today I start by talking about Thin Provisioning.  I will then follow up with an explanation of De-Duplication and finally talk about how to use multiple technologies together to get the most use out of your storage.

So what is Thin Provisioning?  It is a technology that allows you to create LUNs or Volumes on a storage device such that the LUN/Volume(s) appear to the host or client to be larger than they actually are.   In general, NAS clients and SAN attached hosts see “Thin Provisioned” LUNs just as they see any other LUN but the actual amount of disk space used on the storage device can be significantly smaller than the provisioned size.  How does this help increase storage utilization?  Well, with thin provisioning you provide applications with exactly the storage they want and/or need but you don’t have to purchase all of the disk capacity up front.

Let’s start with a comparison of using standard LUNs vs thin LUNs with a theoretical application set:

Say we have 3 servers, each running Windows Server.  The operating system partition is on local disk and application data drives are on SAN.  Each server runs an application that collects and stores data over time and the application owner expects that over the next year or so the data will grow to 1TB on each server.  In this particular case we also know that the application’s performance requirements are relatively low.

With traditional provisioning we might create 3 LUNs that are 1TB each and present them to the servers. This provides the application with room for the expected growth. Using 300GB FC disks, we can carve out three 4+1 RAID5 sets, create one LUN in each, and it would work fine. Alternatively we could use wide striping (i.e., a MetaLUN on an EMC Clariion) and put all three LUNs on the same 15 disks. Either way we’ve just burned 15 disks on the storage array based on uncertain future requirements. If we were stingier with storage, we could create smaller LUNs (500GB for example) and use LUN expansion technology to increase the size when the application data fills the disks to that capacity.

In the Thin Provisioning world we still create three 1TB LUNs, but they start out taking no space. The pool of disk that the LUNs are provisioned from doesn’t even need to have 3TB of capacity. As the application data grows over the next 12 months or longer, the pool size only needs to grow to accommodate the actual amount of data stored. Depending on the storage array, we can add disks to the pool one at a time. So on day one we start with 3 disks in the pool, and then add additional disks one by one throughout the year. We can then create additional LUNs for other applications without adding disks. As we add disks to the pool, we expand the capacity available for all of the LUNs to grow (up to each LUN’s maximum size) and we increase performance for ALL of the LUNs in the pool since we are adding spindles. The real-world benefits come as we consolidate numerous LUNs into a single disk pool.
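To put rough numbers on the example above, here is a minimal sketch. The 80% usable figure assumes the same 4+1 RAID5 layout, and the monthly growth figures are made up for illustration:

```python
# Rough thick-vs-thin comparison for the example above. Illustrative numbers:
# three 1TB LUNs, 300GB FC drives, 4+1 RAID5 (so ~80% of each disk is usable),
# and made-up growth figures for how much data is actually written per LUN.
DISK_GB = 300
USABLE_PER_DISK_GB = DISK_GB * 0.8      # 4 data disks out of every 5
LUNS = 3

# Thick provisioning: all the capacity is purchased and carved up on day one.
thick_disks = LUNS * 5                  # one 4+1 RAID5 set per LUN = 15 disks

# Thin provisioning: the pool only needs enough disks to hold the written data
# (plus whatever minimum the array requires -- 3 disks assumed here).
def thin_disks_needed(written_gb_per_lun):
    written_total_gb = written_gb_per_lun * LUNS
    spindles = -(-written_total_gb // USABLE_PER_DISK_GB)   # ceiling division
    return int(max(3, spindles))

for month, gb_written in [(1, 50), (6, 400), (12, 900)]:
    print(f"Month {month:2d}: thick = {thick_disks} disks, "
          f"thin ~= {thin_disks_needed(gb_written)} disks")
```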

The nice thing about this approach is that we stop managing the size of individual LUNs and just manage the underlying disk pool as a whole.   And the cost-per-GB for SAN disk constantly goes down so we can spend only what we have to today, and when we add more later it will likely be a little cheaper.  Disk capacity utilization will be much higher in a thin model compared with the traditional/thick model.

The story gets even better in a virtual server environment such as MS Hyper-V or VMware ESX. First, the virtual server OS drives are on the SAN in addition to the application data, and there can be multiple virtual disks on the same LUN. Whether physical or virtual, we need to maintain some free space in the disks to keep applications running, and with virtual systems we also need some free space on the LUN for features of the virtualization technology like snapshots. The net effect is that in a virtualized environment, disk utilization never gets much above 50% when slack space at both the virtual layer and inside the virtual servers is considered. With thin provisioning we could potentially store twice the number of virtual servers on the same physical disks.

There are caveats, of course. Maintaining performance is the primary concern. Whether it is used in a thick LUN or a thin LUN, each disk delivers a fixed amount of performance. Thin provisioning has no effect on the amount of IOPS or bandwidth the application requires, nor on the amount of IOPS the physical disks can handle. So even if thin provisioning saves 50% disk space in your environment, you may not be able to use all of that reclaimed space before running into performance bottlenecks. If the storage array has QOS features (e.g., EMC Clariion NQM), it is possible to prioritize the more important applications in your disk pool to maintain performance where it matters.

Other problems that you may encounter have to do with interoperability. For starters, some applications are not “thin-friendly”; i.e., they write data in such a way as to negate any benefit that thin provisioning provides. Also, while many storage arrays support thin provisioning, each has different rules about the use of thin LUNs. For example, in some scenarios you can’t replicate thin LUNs using native array tools. It pays to do your homework before choosing a new storage array or implementing thin provisioning.

I didn’t cover thin provisioning in NAS environments directly, but the feature works in the same manner. Thin volumes are provisioned from pools of storage, and users/clients see a large amount of available disk space even if the disk pool itself is very small. Since NAS is traditionally used for user home directories and departmental shares, absolute performance is usually not as much of a concern, so thin provisioning is much easier to implement; in many cases it is the default behavior, or simply a check box, on NAS appliances like EMC Celerra or NetApp FAS.

Thin provisioning is a powerful technology when used where it makes sense.  In my next post I’ll explain de-duplication technology and then talk about how these technologies can be used together plus some workarounds for the caveats that I’ve mentioned.

Capacity vs. Performance: Why do I have so much free space on my SAN and why can’t I use it?


In the past, in the days of 2GB, 4GB, 9GB, 18GB, and even 36GB drives, when you were tasked with purchasing and configuring hard drives for an application, you were given the amount of storage space required for the application and that was pretty much good enough. If you or your company were more organized, you’d do an analysis of the performance requirements for that application (i.e., IOPS, read/write ratios, bandwidth, etc.) to make sure you had enough spindles to accommodate the application. More often than not, the capacity requirement necessitated more disks than the performance requirement did, so you’d build your RAID group and fill it up all the way.

Fast forward a few years: 72GB drives are no longer available, 146GB drives are getting close to end-of-sale, and there are 300GB, 400GB, and 600GB drives, plus terabyte SATA drives, available for almost any storage system or server. The problem is that as these hard drives get bigger, they aren’t getting any faster. In fact, SATA drives are relatively new in the Enterprise space and are slower than traditional 10,000 and 15,000 RPM SCSI drives. But they hold terabytes of data. Today, performance is the primary requirement and capacity is second, because in general you need more spindles to meet your application’s performance requirement than to meet its capacity requirement.

As an example, let’s take a 100GB SQL database that requires 800 IOPS at 50% Read/50% Write.

Back in the day with 18GB drives you’d need 12 disks to provide ~100GB of space in RAID10. Using SCSI-3 10K drives, you can expect about 140 IOPS per disk giving you 1680 IOPS available. Accounting for RAID10 write penalties, you’d have an effective 1100 IOPS, more than enough for your workload of 800 IOPS.

Today, a single 146GB 10K disk can provide all the capacity required for this database; but you still need at least 10 disks to achieve your 800 IOPS workload with RAID10, or 15 disks with RAID5. The capacity of a RAID10 group with ten 146GB drives is approximately 680GB, leaving you with 580GB of free (or slack) space in the RAID group. The trouble is that you can’t use that space for any of your other applications because the SQL database requires all of the performance available in that RAID group. Change it to RAID5, or use new larger disks, and it’s even worse. Switching to 15K RPM drives can help, but it’s only a 30% increase in performance.
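The arithmetic behind those disk counts, as a rough sketch (the per-disk IOPS figure and the RAID write penalties are the usual rules of thumb, not measurements):

```python
# Rough spindle math behind the example above. Rules of thumb, not measurements:
# a 10K RPM disk delivers roughly 140 IOPS; each host write costs 2 back-end
# I/Os in RAID10 and 4 in RAID5 (read-modify-write of data and parity).
DISK_IOPS = 140
HOST_IOPS = 800
READ_RATIO = 0.5

def spindles(write_penalty):
    backend_iops = HOST_IOPS * (READ_RATIO + (1 - READ_RATIO) * write_penalty)
    return int(-(-backend_iops // DISK_IOPS))       # ceiling division

raid10 = spindles(write_penalty=2)                  # 1200 back-end IOPS -> 9 spindles
raid10 += raid10 % 2                                # RAID10 wants an even count -> 10
raid5 = spindles(write_penalty=4)                   # 2000 back-end IOPS -> 15 spindles

# Ten 146GB drives in RAID10 give ~730GB raw mirrored capacity (roughly 680GB
# usable) for a 100GB database -- the remaining ~580GB is slack you can't use.
print(f"RAID10: {raid10} disks   RAID5: {raid5} disks")
```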

If you are managing SAN storage for a large company, your management probably wants you to show them high disk capacity utilization on the SAN to help justify the cost of storage consolidation. But as the individual disk sizes get larger, it becomes increasingly difficult to keep the capacity utilization high, and for many companies it ends up dropping. Thin Provisioning and De-Duplication technologies are all the rage right now as storage companies push their wares, and customers everywhere are hoping that those buzzwords can somehow save them money on storage costs by increasing capacity utilization. But be aware, if you have slack space due to performance requirements, those technologies won’t do you any good and could hurt you. They are useful for certain types of applications, something I’ll discuss in a later post.

So what do you do? Well, there’s not a lot you can do except educate your management on the difference between sizing for performance and sizing for capacity. They should be aware that slack space is a byproduct of the ever-increasing size of hard disk drives. Some vendors are selling high-speed flash or SSD disks for their SAN storage systems, which can be 30-50X faster than a 15K RPM drive with similar capacities. But flash comes at a significant cost that only makes sense if you can leverage most of the IOPS available in each disk. In the next installment I’ll discuss tiered data techniques and how they can overcome some of these problems, increasing performance in some cases while also increasing utilization rates.