Now that we know we’ll be deploying about 562 VMs per Building Block, we can use the other metrics to determine the requirements for a single block.
- Since 562 VMs is about 12.5% of the 4500 total VMs, we simply take 12.5% of the other metrics determined in the last post.
- 12.5% of 9000 vCPUs = 1125 vCPUs
- 12.5% of 4500GB RAM = 562GB RAM
- 12.5% of 225,000 IOPS = 28,125 Host IOPS
- 12.5% of 562TB = 70TB Usable Disk capacity
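The scaling above is just proportional arithmetic. As a quick sanity check, it can be sketched like this (the totals come from the last post; the flat 12.5% ratio and the rounding are illustrative):

```python
# Scale the total environment requirements down to one Building Block.
# Totals are the figures from the previous post in this series.
totals = {
    "vCPUs": 9000,
    "RAM (GB)": 4500,
    "Host IOPS": 225000,
    "Usable Disk (TB)": 562,
}

fraction = 0.125  # 562 VMs / 4500 VMs ~= 12.5%

per_block = {metric: round(value * fraction) for metric, value in totals.items()}
print(per_block)
# {'vCPUs': 1125, 'RAM (GB)': 562, 'Host IOPS': 28125, 'Usable Disk (TB)': 70}
```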
First, we’ll size the compute layer of the Building Block
- At 4:1 vCPUs per physical CPU thread, you’d want somewhere around 281 hardware threads per Building Block. Using 4-socket, 8-core servers (32 cores per server), you’d need about 9 physical servers per Building Block. The vCPU-to-physical-thread ratio directly affects the % CPU Ready time in VMware vSphere/ESX environments.
- For 562GB of total RAM per Building Block, each server needs about 64GB of RAM
- Per standard best practices, a highly available server needs two HBAs; more than two can be advantageous under high IOPS loads.
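The compute-layer math above can be sketched as follows. The consolidation ratio and server shape are the assumptions stated in the text, not universal constants; note the code rounds the thread count up to 282 where the text quotes "around 281":

```python
import math

# Compute-layer sizing for one Building Block (illustrative).
VCPUS_PER_BLOCK = 1125
VCPU_TO_THREAD_RATIO = 4   # 4:1 vCPUs per physical CPU thread
THREADS_PER_SERVER = 32    # 4 sockets x 8 cores; hyperthreading not counted here
RAM_PER_BLOCK_GB = 562

threads_needed = math.ceil(VCPUS_PER_BLOCK / VCPU_TO_THREAD_RATIO)  # 282
servers = math.ceil(threads_needed / THREADS_PER_SERVER)            # 9
ram_per_server_gb = math.ceil(RAM_PER_BLOCK_GB / servers)           # 63

print(threads_needed, servers, ram_per_server_gb)
```

In practice you’d round the 63GB result up to a standard memory configuration, which is where the 64GB-per-server figure comes from.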
Next, we’ll calculate the storage layer of the Building Block
- Assuming no cache hits, the backend disk load for 28,125 Host IOPS at a 50:50 read/write ratio looks like the following:
- RAID10 (write penalty 2): 28,125/2 + 28,125/2 × 2 = 42,187 Disk IOPS
- RAID5 (write penalty 4): 28,125/2 + 28,125/2 × 4 = 70,312 Disk IOPS
- RAID6 (write penalty 6): 28,125/2 + 28,125/2 × 6 = 98,437 Disk IOPS
- If you calculate the number of disks required to meet the 70TB Usable at each RAID level, plus the number of 10K RPM and 15K RPM disks needed to meet the IOPS at each RAID level, you’ll eventually find that for this specific example, using EMC Best Practices, 600GB 10K RPM SAS disks in RAID10 provide the least-cost option (317 disks including hot spares). Since 10K RPM disks are also available in 2.5” sizes for some storage systems, this is often the most compact solution as well (29 Rack Units for an EMC VNX storage array in this configuration). In reality, this is a very conservative configuration that ignores the benefits of storage array caching technologies and any other available optimizations; it’s essentially a worst-case scenario, and it would be beneficial to work with your storage vendor’s performance group to model your workload more intelligently.
- Finally, you’ll need to select a storage array model that meets the requirements. Within EMC’s portfolio, 317 disks necessitate an EMC VNX5700, which will also have more than enough CPU horsepower to handle the 28,125 Host IOPS requirement.
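The RAID penalty math and the disk-count comparison above can be sketched together. The ~150 IOPS-per-10K-disk figure is a commonly cited rule of thumb, and the capacity math here ignores formatting overhead and hot spares, so the result lands somewhat below the 317-disk figure quoted above; treat this as a rough check, not a substitute for the vendor’s sizing tools:

```python
import math

HOST_IOPS = 28125
READ_FRACTION = 0.5  # 50:50 read/write mix, no cache hits assumed

# RAID write penalties: each host write costs this many backend disk I/Os.
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(host_iops, read_frac, penalty):
    """Backend disk IOPS = reads pass through, writes are multiplied."""
    reads = host_iops * read_frac
    writes = host_iops * (1 - read_frac)
    return reads + writes * penalty

for level, penalty in WRITE_PENALTY.items():
    print(level, backend_iops(HOST_IOPS, READ_FRACTION, penalty))

# Rough disk count for the RAID10 option with 600GB 10K RPM SAS disks.
IOPS_PER_DISK_10K = 150  # common rule-of-thumb value, not a vendor guarantee
DISK_GB = 600
USABLE_TB = 70

iops_disks = math.ceil(backend_iops(HOST_IOPS, READ_FRACTION, 2) / IOPS_PER_DISK_10K)
capacity_disks = math.ceil(USABLE_TB * 1000 / DISK_GB) * 2  # x2 for RAID10 mirroring

# The larger of the two constraints wins; here the spindle count is
# IOPS-bound, not capacity-bound.
print(iops_disks, capacity_disks, max(iops_disks, capacity_disks))
```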
At this point you’ve determined the basic requirements for a single Building Block, which you can use as a starting point to work with your vendors on further tuning and pricing. Your vendors may also propose optimizations that can save you money and/or improve performance, such as block-level tiering or extended SSD/Flash-based caching.
Example bill-of-materials (BOM):
- 9 x Quad-CPU/8-Core servers w/64GB RAM each
- 2 x Single-port FibreChannel HBAs per server
- 1 x EMC VNX5700 Storage Array with 317 x 600GB 2.5” 10K RPM SAS disks
Wait, where’s the fabric?