The process of delivering new physical servers had become so slow and painful that it took three to six months to get a server into production. We had built a virtual lab environment to support about 40 VMs, which quickly became very popular because we could deploy a new server in minutes instead of months. It was only a matter of time before we would finally build a full-scale virtualization platform for production. Well, the time is now. We're doing it.
To get started, we sat down with the inventory of servers, which contains the server names, operating system versions, and the primary role or application installed. I guess that's useful information, but what we really needed to know was how big our servers are. How many CPU cores are out there? How much RAM? How much data? To build a virtual platform that will house all of the workloads running on our physical servers, we need to be able to determine how much disk space is in use, how much RAM is in use, and how much processing power is in use. Time for some scripts. You didn't think we'd get through this article without a Perl script or two, did you?
Collecting CPU, RAM, and disk information is not only helpful for sizing the cloud; it's also a dramatic view into how much waste there is in your physical server environment. I found that about two-thirds of the RAM and disk space on my physical servers was unused. What a waste of money. And we still have capacity problems on many critical servers, even with all that wasted disk space lying around.
Sizing the Storage Requirement
This first script takes a list of servers from a text file, uses WMI queries to get the total disk size, space used, and space free for each server, in gigabytes, and writes the results to the output file. You can then open the file in Excel and have a look at the results.
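The original Perl script isn't reproduced here, but the aggregation it performs can be sketched. This is a hypothetical Python equivalent of the per-server math, not the article's code: WMI's Win32_LogicalDisk class reports Size and FreeSpace in bytes, and we roll those up into the gigabyte totals the script writes out.

```python
# Hypothetical sketch (not the article's Perl script): aggregate the per-disk
# Size and FreeSpace values (in bytes) that a WMI Win32_LogicalDisk query
# returns into per-server totals in gigabytes.
GB = 1024 ** 3

def summarize_disks(disks):
    """disks: list of dicts with 'Size' and 'FreeSpace' in bytes."""
    total = sum(d["Size"] for d in disks)
    free = sum(d["FreeSpace"] for d in disks)
    return {
        "total_gb": round(total / GB, 1),
        "free_gb": round(free / GB, 1),
        "used_gb": round((total - free) / GB, 1),
    }

# Example: a server with one 100 GB disk, 75 GB free
row = summarize_disks([{"Size": 100 * GB, "FreeSpace": 75 * GB}])
```

Run this per server against your text-file list and write each `row` to the output file for Excel.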
If you don't already have a list of servers, you can build one by querying Active Directory. After running our disk space script, I found that we had an absolutely enormous amount of wasted space: on the order of hundreds of terabytes. Sickening. For me, this was the determining factor in my choice to go with thin provisioning. I would normally shy away from it because of the potential pitfalls, but if it will save me hundreds of terabytes, I'm sold.
So, with thin provisioning, I can total up my disk space used, add some percentage for growth, and end up with my storage sizing target for the new cloud. Wow, that was one heck of a Perl script. I was never that much of a WMI fan, but it came through this time.
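In round numbers, the thin-provisioned sizing above looks like this. The 20% growth figure in the example is an illustrative assumption, not a number from the article; plug in whatever percentage fits your environment.

```python
def storage_target_gb(used_gb, growth_pct):
    """Thin-provisioned target: space actually in use, plus growth headroom.
    growth_pct is an assumed percentage, e.g. 20 for 20%."""
    return used_gb * (1 + growth_pct / 100)

# Example: 200,000 GB actually in use across the farm, plus 20% for growth
target = storage_target_gb(200_000, 20)  # 240,000 GB
```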
CPU and Memory Requirements
Next, I needed the number of CPU cores and amount of RAM in use on the physical servers. Again, I turned to Perl and WMI. This next script collects the RAM in use, RAM installed, and the number of CPU cores on each server in our text file.
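As with the disk script, here is a hypothetical Python sketch of the per-server math rather than the article's Perl. WMI's Win32_OperatingSystem class reports TotalVisibleMemorySize and FreePhysicalMemory in kilobytes, and Win32_Processor reports NumberOfCores per physical socket; the helper below just converts and totals those raw fields.

```python
# Hypothetical sketch (not the article's Perl script): convert the raw WMI
# fields into the numbers we want. Memory fields arrive in kilobytes;
# cores_per_socket is one NumberOfCores value per Win32_Processor instance.
KB_PER_GB = 1024 ** 2

def summarize_server(total_kb, free_kb, cores_per_socket):
    return {
        "ram_installed_gb": round(total_kb / KB_PER_GB, 1),
        "ram_in_use_gb": round((total_kb - free_kb) / KB_PER_GB, 1),
        "cpu_cores": sum(cores_per_socket),
    }

# Example: 16 GB installed, 12 GB free, two quad-core sockets
stats = summarize_server(16 * KB_PER_GB, 12 * KB_PER_GB, [4, 4])
```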
Again, we can open the output in Excel and see the results. Here too, I found a tremendous amount of wasted RAM: terabytes wasted, terabytes of RAM! Totaling up the RAM in use (not installed) and adding a healthy percentage for growth, I arrive at the amount of RAM the new cloud must have.
Also in the output is the number of CPU cores per server. This is the number installed; we didn't get a feel for the amount of CPU actually in use. That is much more difficult to pin down, because CPU utilization is spiky throughout the day: we'd have to monitor the CPUs over time to discover the average utilization. Alternatively, you can accept what the industry says: that, overall, the CPUs in your servers are probably running at around 10-15% of capacity. You may disagree with that, and you may be right if all you have is a small number of servers with heavy workloads. In my case, I've got thousands of servers with all kinds of workloads, most of them light to moderate, so I tend to agree with the 15% figure.
Whatever percentage you accept, and however you arrive at it, you can then take your output files and do the math. For example, if your physical server farm has 1000 processor cores running and your virtualization factor is 15%, then theoretically you can build a cloud that contains 150 processor cores (plus some more for fault tolerance and growth).
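That arithmetic is simple enough to sketch. The 20% fault-tolerance-and-growth headroom in the second call is an illustrative assumption, not a figure from the article.

```python
import math

def cloud_cores(physical_cores, virt_factor, headroom_pct=0):
    """Cores the cloud needs: physical cores scaled by the assumed
    utilization factor, plus optional headroom for fault tolerance
    and growth (headroom_pct is an assumed percentage)."""
    return math.ceil(physical_cores * virt_factor * (1 + headroom_pct / 100))

base = cloud_cores(1000, 0.15)               # 150 cores, as in the example above
with_headroom = cloud_cores(1000, 0.15, 20)  # 180 cores with 20% headroom
```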
So my sizing exercise started to come together. Using the 15% virtualization factor for the processors, and the number of processor cores, RAM in use, and disk space in use from my scripts, I was able to figure out how big my cloud needed to be.
Stay tuned as we continue with the design. We're busy choosing storage, server hardware, virtualization software, management tools, then we'll build the cloud and start migrating. Lots to do!