The virtualization platform will, of course, consist of clusters of servers attached to large SAN storage arrays. These arrays are highly resilient and well managed: they are designed to survive the loss of individual disks or even entire shelves of disks, and when things start to go wrong, they phone home to the manufacturer over the Internet.
Traditionally, your servers would have a RAID controller with two internal mirrored disks, and that's where you would install the operating system. You would then connect to the SAN via a Fibre Channel HBA and keep your big data drive out on the SAN. The RAID controller typically has some cache memory to improve performance, and part of it is often configured as write-back cache to speed up writes. Write-back cache must survive a power failure, so the RAID controller usually includes a battery to preserve the cache contents until they can be flushed when power returns (some newer controllers use flash-backed cache instead, so no battery is required).
Anyway, in our traditional server, just to boot the OS we've got two disks, a RAID controller, maybe a battery... plus an HBA connecting us to our SAN storage. Now imagine you're designing new servers: racks of them, rows of racks of servers, every single one with its own RAID controller and two disks. Why?
How about we don't buy any of that stuff and simply boot from SAN instead? You save quite a bit of money by not paying for all those RAID controllers and local disks, and if a server burns up, you just replace the server; your OS is sitting safely on the SAN. Your server becomes a stateless compute resource, just a brick of computing power.
OK, you might say that now all your servers have to boot from SAN, pounding the SAN storage with a lot of extra disk IO. True, but in a highly virtualized environment all your Windows and Linux instances live on the SAN anyway, so booting your hosts from it adds relatively little IO on top.
Booting from SAN is not very complicated. You carve out a LUN for the OS, zone the fabric so the host's HBA can reach the array, and present the LUN to the host. Then you go into the HBA's BIOS configuration during boot-up, enable the HBA's boot BIOS and select the LUN as the boot device. After that you simply install the OS as usual.
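Once the OS is up on a Linux host, you can sanity-check the setup from the command line. This is just a sketch of a verification pass, not part of the boot configuration itself; the sysfs paths and tools shown (fc_host entries, multipath-tools, findmnt) are standard on Linux but your distribution may package them differently.

```shell
# List Fibre Channel HBAs and their WWPNs via sysfs.
# (These are the WWPNs you zone and present the boot LUN to.)
for h in /sys/class/fc_host/host*; do
    [ -e "$h" ] || { echo "no FC HBAs found"; break; }
    echo "$(basename "$h"): WWPN $(cat "$h/port_name")"
done

# Show the multipath devices backing your LUNs (requires multipath-tools).
command -v multipath >/dev/null && multipath -ll || echo "multipath-tools not installed"

# Confirm which device the root filesystem actually lives on --
# on a SAN-booted host this should be a multipath device, not a local disk.
findmnt -n -o SOURCE /
```

Running this on a freshly built host is a quick way to confirm the OS really is sitting on the array and that both fabric paths to the LUN are alive.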
It's a no-brainer. No more local disks, stateless servers, and well protected, well managed SAN storage on the back end.