HP Blades and Converged Infrastructure Design for vSphere 5.0

Designing a VMware vSphere 5.0 infrastructure using HP blade servers and converged network infrastructure... So you're planning to build a large virtualization solution, and you think, big servers = big consolidation.  Surely you can cram lots of VMs into big, enterprise-class servers like the HP DL580/585.  They've got four sockets, each of which can hold 6, 8, or even 12 cores.  They hold lots of RAM, and they've got plenty of slots for network cards and storage adapters.  They take up 4U of rack space, so you could squeeze ten of them into a 42U rack if you really tried.  That's 40 processors cranking away hosting your VMs.  Not bad, but with blades we could get 96 processors into a rack and still have 12U to spare.

When designing servers to run VMware, you need to think about the number of network interface cards and storage connections you'll need.  Often, you'll want separate network connections for VM traffic, management and backup traffic, and cluster traffic (vMotion, FT, etc.), and you want those connections to be redundant, so you can easily end up with six NICs or more.  Now imagine you have a stack of ten DL580s in a rack with six network cables coming out of each (plus two storage connections, power cords, and KVM cables).  That's 60 network cables, 20 storage connections, 20 power cords, and 10 KVM cables... don't get tangled up, they'll never find you!
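
Here's a quick tally of that cabling in plain Python, using the per-server counts above (the counts are assumptions; swap in your own design):

    # Rough cable count for a rack of ten standalone 4U servers,
    # using the per-server counts mentioned above (adjust to your design).
    servers = 10
    per_server = {"network": 6, "storage": 2, "power": 2, "kvm": 1}

    totals = {kind: count * servers for kind, count in per_server.items()}
    print(totals)                                 # {'network': 60, 'storage': 20, 'power': 20, 'kvm': 10}
    print("total cables:", sum(totals.values()))  # 110 cables in one rack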

This is where "converged networking" comes in.  You can install several types of interconnect modules into the blade enclosures to consolidate your connectivity to your network and storage, reducing the cabling mess tremendously.  The result is that you can build massive computing power for your vSphere environment in a tight and tidy design.

The Blade Enclosure
The HP c7000 blade enclosure takes up 10U of rack space and has sixteen half-height server bays, so it can hold sixteen half-height two-processor blade servers or eight full-height four-processor blade servers (or a combination of the two).  In the rear, there are eight interconnect bays into which you can install various network and storage switches or pass-through modules.  The enclosure can be fitted with up to six power supply modules and ten fan modules, and there are models that support single-phase or three-phase power.

The enclosure has a management module known as the Onboard Administrator, which allows you to connect to the enclosure using a web browser.  From this interface, you can manage the configuration of the blade servers and interconnect modules.  You can also connect to each blade's console via the browser and watch it boot as if you were sitting at its monitor and keyboard, which is handy since the whole blade system is completely headless.  No monitor, keyboard, or mouse.  No nothing.

Blade Servers
Each blade is just packed with stuff.  Into a very small case, HP manages to pack two or four processors, lots of memory slots, room for two internal disks, an array controller, built-in network ports, and several slots for add-on cards (called mezzanine adapters) for storage connectivity and more networking.

The BL465c is a half-height blade that has two sockets for AMD processors with up to 12 cores each (that's 24 cores total), and it can hold 256GB of RAM.  And you can fit sixteen of these little monsters into the enclosure.  The BL685c is the full-height version that holds four 12-core processors and 512GB of RAM; eight of these fit in the enclosure.  If you're doing the math, you can see that an enclosure full of either model winds up with the same amount of CPU and memory.
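
Here's a quick back-of-the-envelope check of that math in plain Python, using the core counts and memory ceilings quoted above:

    # Per-enclosure totals for a c7000 filled with either blade model.
    half_height = {"blades": 16, "sockets": 2, "cores_per_socket": 12, "ram_gb": 256}  # BL465c
    full_height = {"blades": 8,  "sockets": 4, "cores_per_socket": 12, "ram_gb": 512}  # BL685c

    for name, blade in (("16 x BL465c", half_height), ("8 x BL685c", full_height)):
        cores = blade["blades"] * blade["sockets"] * blade["cores_per_socket"]
        ram = blade["blades"] * blade["ram_gb"]
        print(f"{name}: {cores} cores, {ram} GB RAM per enclosure")

    # Both come out to 384 cores and 4096 GB of RAM, so the density is the same;
    # the decision hinges on adapters, uplinks, and failure domains.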

The difference will be in the number of mezzanine adapters you'll need to buy to connect to network and storage: you'll need twice as many for the half-height blades.  Maybe the extra cost is justifiable if you think you need all the extra throughput that twice the fibre connections will provide.  That's a design decision that will take some thinking through.  On the other hand, the full-height blades have more mezzanine slots, which allows more network and storage connections per blade.  But before you think about that, we need to talk about the interconnect modules.

Interconnect Modules
To connect your blades to your network and storage, you install interconnect modules into the bays in the back of the blade enclosure, and install the appropriate mezzanine adapters into the blades.  Typically, you will install two Ethernet interconnect modules into the first two bays, and you may install fibre-channel modules into the next two bays (if you've got a fibre-channel SAN).

For Ethernet connectivity, the latest converged networking options for the blade enclosure are the HP Virtual Connect Flex-10 and FlexFabric modules.  These provide multiple 10Gb uplinks to your network over copper or fiber, and they present each blade server (assuming it has Flex-capable adapters, which the BL465c and BL685c do) with four network connections per module; with two modules installed, the blade server sees eight NICs.  That's plenty of NICs for VMware.  The difference between them is that the FlexFabric module can also carry storage traffic (FCoE and hardware-accelerated iSCSI) over the same links, while the Flex-10 is Ethernet only (software iSCSI will still pass through it just fine).

The uplinks can be aggregated together to provide multiples of 10Gb throughput, or they can be connected to different networks, and they can be configured to carry VLAN-tagged traffic for multiple IP networks.  The four connections that each module presents to the blade server share a combined throughput of 10Gb (times two, since there are two modules).  That 10Gb can be divided up across the four connections any way you like.  This is great for VMware: it means you can designate a specific amount of throughput for VMs, vMotion, management traffic, etc.
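
To make that concrete, here's one illustrative carve-up of a single 10Gb port into four FlexNICs, written as a small Python sketch (the traffic types and numbers are just an example, not an HP or VMware recommendation):

    # Example split of one 10Gb Flex-10/FlexFabric port into four FlexNICs.
    # The allocation below is illustrative only; size it for your own workloads.
    flexnic_plan_gb = {
        "management": 0.5,
        "vMotion": 2.0,
        "VM traffic": 5.5,
        "backup": 2.0,
    }

    assert sum(flexnic_plan_gb.values()) <= 10, "a port can only be split up to 10Gb total"

    for nic, gb in flexnic_plan_gb.items():
        print(f"{nic:>12}: {gb:.1f} Gb")

    # The second module presents the same four FlexNICs again, so each traffic
    # type ends up with a redundant pair of vmnics to team in vSphere.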

Virtual Connect
The Flex modules are a class of interconnect that supports Virtual Connect.  There are also Fibre Channel Virtual Connect modules.  Virtual Connect allows you to configure all of a blade's connectivity at the interconnect and virtualize the blade's MAC addresses and WWNs.  Within the modules, you define the network and SAN connections in a server profile.  You then assign the profile to a server bay.  The blade in that bay then becomes connected to the networks and storage that you defined in the profile.  If a blade fails, you can assign the profile to a spare blade, and bada-bing, it's connected to all the right networks and storage.  If you combine this with boot-from-SAN, the spare blade will boot up just like the old one.
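
To make the profile idea concrete, here's a toy model in Python of what a server profile carries and what moving it to a spare bay looks like (this is purely an illustration of the concept, not HP's actual interface, and the identifiers are made up):

    # Toy model of the Virtual Connect "server profile" concept; not HP's API.
    from dataclasses import dataclass

    @dataclass
    class ServerProfile:
        name: str
        virtual_macs: list    # MAC addresses presented to the blade's FlexNICs
        virtual_wwns: list    # WWNs presented to its FC/FCoE connections
        networks: list        # Ethernet networks / VLANs defined in Virtual Connect
        san_fabrics: list     # FC fabrics, e.g. for boot-from-SAN

    # The profile, not the physical blade, owns the identities and connections.
    esx01 = ServerProfile(
        name="esx01",
        virtual_macs=["00-17-A4-77-00-10", "00-17-A4-77-00-12"],
        virtual_wwns=["50:06:0B:00:00:C2:62:00"],
        networks=["mgmt", "vmotion", "vm-prod"],
        san_fabrics=["fabric-A", "fabric-B"],
    )

    bays = {3: esx01}   # profile assigned to device bay 3

    # Blade in bay 3 dies: move the profile to the spare blade in bay 9.
    # The MACs and WWNs travel with the profile, so the spare sees the same
    # networks and, with boot-from-SAN, boots the same LUN as the old blade.
    bays[9] = bays.pop(3)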

By default, Virtual Connect is managed through a web browser that you point at one of the modules (they're all managed as one entity).  However, you can also manage them through HP's enterprise management product, Virtual Connect Enterprise Manager.  This tool allows you to manage one or more blade enclosures from a single interface, and it also lets you designate spare blades and perform automated failovers.

Solid-State Storage
SSD technology is now available for the blade servers.  If you intend to use local storage to boot the blades, you can use SSD drives (as well as SAS or SATA).  HP also has a product called the IO Accelerator, which is SSD storage attached directly to the PCI bus, and it offers amazing storage performance.  It's not entirely clear how this technology fits into your vSphere design.  If you plan to boot from local storage, SSD drives may provide great performance for host/VM swap.  HP doesn't recommend using the IO Accelerator for host swap, although I'm not 100% sure why that is; I'm still researching it.  ESXi 5.0 includes a setting called host cache configuration, which seems to indicate that it can use local SSD storage for performance improvement, but I need to hear from HP how exactly the IO Accelerator fits into the picture.
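
If you do end up pointing host cache or VM swap at local SSD, a rough sizing exercise might look like the sketch below (the numbers are made up; the assumption is simply that swapping is bounded by how far you overcommit memory):

    # Back-of-the-envelope ceiling on how much swap could land on local SSD.
    # Host cache only absorbs swapped pages, so the overcommitted portion of
    # VM memory is a rough upper bound on how much SSD is worth setting aside.
    physical_ram_gb = 256     # RAM in the blade
    vm_configured_gb = 384    # total vRAM assigned to the VMs on this host

    overcommit_gb = max(vm_configured_gb - physical_ram_gb, 0)
    print(f"worst-case swap that could hit the SSD: {overcommit_gb} GB")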

1 comment:

Anonymous said...

You forgot to mention that there are more than just Virtual Connect options for interconnect modules on the chassis. Passthroughs are available for both Ethernet and Fibre Channel. Cisco also has Nexus interconnect modules for the HP c-Class chassis, which are basically a Nexus 2K FEX.

If you are running FCoE, most people tend to go for the passthrough or the Cisco option (if running Cisco Nexus in the datacenter), as it allows a higher level of control over the converged networking, albeit with slightly more cabling required.
