Often, companies are driven toward virtualization simply because they run out of space, power, and cooling in their data centers. When things get tight, they face a large expenditure to construct more raised-floor space or upgrade power and cooling systems. Virtualization suddenly becomes an economically attractive alternative (as if it weren't already). Once their virtualization platform is in place, they are free to resume their drunken server-sprawl spree, only now in virtual form.
What companies seem to forget is that virtual machines still cost money. There are soft costs tied to each instance of an operating system, whether it runs on a physical server or in a VM. The VM may not show up directly as a capital expense, but the OS license, the license costs for any other installed software, and the cost of managing the instance all contribute to operating expenses. So although virtualization lets you deploy more and more OS instances easily, that isn't something you really want to do, is it?
Now don't get me wrong: I'm a huge fan of virtualization. You absolutely should virtualize everything possible. A VMware DRS/HA cluster is a great platform on which to deploy your OS instances, protecting them from server failure and spreading them evenly across a substantially smaller number of physical servers.
Recently, all of the big players in the virtualization space have come out with cloud management products. They are making it easier and easier to deploy a VM, or multiple VMs as an application stack, letting you deploy more and more copies of Windows and Linux. Sounds great: you can deploy LAMP stacks (Linux, Apache, MySQL, and PHP), or stacks of Windows, IIS, MSSQL, and .NET, all day long with a single click. My question is, why the heck would you want to do that?
What people seem to forget is that a single instance of Apache, IIS, MySQL, MSSQL, Oracle, WebLogic, and the like can host multiple applications or databases, so these are workload-consolidation platforms in their own right. Why deploy ten instances of IIS with one web application on each, when you can deploy ten applications on one IIS instance? That's nine copies of Windows that you don't have to pay for, along with all the other software and management costs. Granted, IIS comes free with the OS, so maybe that wasn't the best example. SQL Server, Oracle, and WebLogic, on the other hand, cost a lot, so why would you build ten of those if you don't have to?
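The arithmetic behind that claim is simple enough to sketch. Here's a rough cost comparison, with purely illustrative license and management figures (none of these are real prices), of one-app-per-VM versus ten apps consolidated onto a single instance:

```python
# Hypothetical per-instance soft costs (illustrative figures, not real prices)
os_license = 800       # one OS instance
db_license = 5000      # one database/middleware instance
mgmt_per_year = 1200   # patching, monitoring, backup per OS instance

apps = 10
cost_per_instance = os_license + db_license + mgmt_per_year

# One app per VM: ten full stacks of licenses and management overhead
sprawl = apps * cost_per_instance

# All ten apps consolidated onto one instance: a single stack
consolidated = cost_per_instance

print(sprawl - consolidated)  # -> 63000, i.e. nine instances' worth of cost avoided
```

The savings scale linearly with every instance you avoid building, which is why the per-instance soft costs dominate the conversation even when the VM itself looks "free."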
Of course, I understand that you may still need multiple instances for test, development, pre-production, and production, and it's true that not all applications can be run together due to compatibility problems. Still, the goal should be to consolidate your apps and databases onto as few instances as possible.
Once you've built your virtualization platform, your next goal should be to consolidate your applications and databases onto as few VMs as possible. That's where the next wave of big savings can be found. This kind of consolidation takes work. The migration of applications and databases from one server to another is no easy task, but I think it's where good technology people should be spending their time.
One thing to think about: application consolidation leads to a smaller number of larger VMs, and those are harder to spread across a virtualization farm, especially if you use smaller (two-socket) hosts. Larger VMs mean that a scale-up model (a smaller number of larger, four-socket hosts) is probably better than a scale-out model (a larger number of smaller, two-socket hosts), because each host has more headroom to place a big VM. If you haven't built your virtualization farm yet, think about your scaling model before you decide what servers to buy.
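To see why host size matters once VMs get big, here's a minimal first-fit placement sketch. The vCPU demands and host capacities are hypothetical: a 24-vCPU VM simply has nowhere to go on 16-core two-socket hosts, no matter how many of them you buy, while two 32-core four-socket hosts absorb the same workload.

```python
def place(vms, host_capacity, n_hosts):
    """First-fit-decreasing placement of VM vCPU demands onto identical hosts.

    Returns the per-host load, or None if some VM fits on no host at all.
    """
    hosts = [0] * n_hosts
    for vm in sorted(vms, reverse=True):
        for i, load in enumerate(hosts):
            if load + vm <= host_capacity:
                hosts[i] += vm
                break
        else:
            return None  # this VM is bigger than any remaining host headroom
    return hosts

# Hypothetical post-consolidation workload: one big VM plus a few smaller ones
vms = [24, 12, 8, 8, 4]

# Scale-out: four two-socket hosts, 16 cores each -> the 24-vCPU VM can't land
print(place(vms, host_capacity=16, n_hosts=4))  # -> None

# Scale-up: two four-socket hosts, 32 cores each -> everything fits
print(place(vms, host_capacity=32, n_hosts=2))  # -> [32, 24]
```

The total capacity is 64 cores in both cases; the difference is purely that no single VM can exceed the size of one host, which is the core of the scale-up argument here.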