Windows Performance Improvement Basics

How do you improve the performance of your Windows server?  In our previous article, Quick Windows Performance Analysis, we discussed how to identify resource bottlenecks in your Windows server.  So now that you've found a bottleneck, what do you do about it?  To answer that, let's walk through the choices you made (or should have made) when you built the server, and point out what changes you can make short of replacing the whole machine.

As we said in the previous article, the four basic resources that affect server performance are CPU, RAM, Network and Disk. 

CPU Choices
When building a new server, the manufacturer usually offers a number of processor sockets (usually two or four) and a choice of CPUs that vary in the number of cores, clock speed in GHz, and amount of internal cache.  All of these things matter to a varying degree, depending on what the server is used for.  A file server will not use much CPU power, so it's usually a waste of money to spend extra on the hottest processors there, but a virtualization host running VMware ESX or Windows Server 2008 Hyper-V can benefit from the extra power.

When building a server that needs a lot of CPU power, choosing a server with lots of processor cores is great, but keep in mind that processor cores are not quite as good as physical processors.  Cores on the same physical processor must share the chip's memory bus (its path to the memory on the motherboard), so each core must wait its turn to move data to and from memory.  The effect of this bottleneck is reduced by maximizing the processor's internal cache, so buy the processors with the biggest cache for CPU-intensive servers.  For the same reason, a server with four dual-core processors will perform better than a server with two quad-core processors.  Both have eight cores, but the server with four physical processors can move data in and out of the processors much faster than the server with two.
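If you just want a quick way to see how many cores Windows is reporting on a box you already have, here's a minimal sketch using Python with the third-party psutil library (my choice of tool, not anything required above; Task Manager will tell you much the same thing):

import psutil  # third-party library: pip install psutil

# Quick look at what Windows reports for this box. psutil can't tell you
# how the cores are split across sockets, but it does distinguish
# physical cores from logical (hyper-threaded) processors.
print("Physical cores:     ", psutil.cpu_count(logical=False))
print("Logical processors: ", psutil.cpu_count(logical=True))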

It's not likely that you'll be doing processor upgrades on an existing server, but if you've just built a server recently and it's not performing well enough, you could buy another one with better processors, move the disks over, and use the slower machine for something less CPU-intensive.

Memory
If your server doesn't have enough memory, performance will probably be very bad.  The server has to work very hard to free up chunks of memory to keep doing its job.  It does this by moving chunks of data from memory to disk (swapping to the page file), and by tearing down network connection buffers and clearing out cached data, which uses additional CPU power.  Since disks are the slowest component in the server, swapping can just kill performance.  So if your server is low on memory, adding memory can improve performance dramatically.
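If you want a quick check for this without firing up Performance Monitor, here's a minimal sketch using Python with the third-party psutil library (an assumption on my part, not something the server needs) that flags heavy page-file use alongside low available RAM:

import psutil  # third-party library: pip install psutil

# Rough memory-pressure check: little available RAM combined with heavy
# page-file use usually means the server is swapping.
mem = psutil.virtual_memory()
swap = psutil.swap_memory()
print(f"RAM: {mem.percent:.0f}% used, {mem.available // (1024 ** 2)} MB available")
print(f"Page file: {swap.percent:.0f}% used, {swap.used // (1024 ** 2)} MB in use")
if mem.percent > 90 and swap.percent > 50:
    print("Likely memory pressure - adding RAM should help")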

Even if your server isn't low on memory, adding some extra RAM may be a good idea.  Windows can use the extra memory for a large system cache, which file servers and web servers use to keep recently requested files and web pages in memory, so that they can be served from memory (instead of from disk) if they're requested again.  However, this only helps performance if the server really does receive repeated requests for the same files.

Remember that the server will only use so much memory.  32-bit operating systems are limited to 2GB of memory per process, and the 32-bit Windows kernel won't use more than 2GB either, so 4GB is usually the most RAM that a 32-bit Windows server will use effectively.  32-bit Windows 2003 Standard Edition won't even see more than 4GB of RAM.  Enterprise Edition will see more, but each process is still limited to 2GB, so unless you're running multiple processes (like multiple SQL Server instances), the extra RAM won't do a bit of good.
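To see whether any single process is actually bumping up against that 2GB ceiling, here's a rough sketch, again using Python and psutil (the 80% threshold is just an arbitrary number I picked):

import psutil  # third-party library: pip install psutil

# On 32-bit Windows, each process normally gets a 2 GB user-mode address
# space, so a process whose virtual size is close to 2 GB can't make use
# of extra physical RAM.
LIMIT = 2 * 1024 ** 3
for proc in psutil.process_iter(["name", "memory_info"]):
    info = proc.info["memory_info"]
    if info is None:  # skip processes we can't read
        continue
    if info.vms > 0.8 * LIMIT:
        print(f"{proc.info['name']}: {info.vms // (1024 ** 2)} MB of virtual address space in use")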

Network
Most servers these days come with two gigabit network ports.  Use them!  The manufacturer will likely provide NIC teaming drivers, which allow you to configure the two NICs as a team.  The NIC team appears as a single virtual NIC, with one IP address and one MAC address, and can be configured for load balancing and automatic failover.  Load balancing distributes user traffic across the two network connections, increasing the effective throughput to 2Gb/s (each individual user connection is still limited to 1Gb/s, but overall you'll have twice the bandwidth).  So by all means, implement NIC teaming.  It's free!
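Once the team is in place, it's worth confirming that traffic really is being spread across both ports.  Here's a quick sketch with Python and the third-party psutil library that samples the per-NIC byte counters over five seconds (interface names will differ on your system):

import time
import psutil  # third-party library: pip install psutil

# Sample the per-NIC counters twice to estimate throughput per interface.
before = psutil.net_io_counters(pernic=True)
time.sleep(5)
after = psutil.net_io_counters(pernic=True)
for nic, stats in after.items():
    if nic not in before:
        continue
    sent = (stats.bytes_sent - before[nic].bytes_sent) / 5
    recv = (stats.bytes_recv - before[nic].bytes_recv) / 5
    print(f"{nic}: sent {sent / 1e6:.2f} MB/s, received {recv / 1e6:.2f} MB/s")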

The other benefit of NIC teaming, even if your network traffic isn't that high, is that multiple processors can service the interrupts from the NICs.  Usually just one of your processors services a single NIC; you can see this in Performance Monitor or Task Manager on a system with one busy NIC, where one processor runs noticeably higher than the others.  Teaming the NICs spreads that load across more processors.
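You can see the same thing numerically.  This little sketch (Python with psutil again) prints per-processor utilization over a five-second sample, which makes a single hot core easy to spot:

import psutil  # third-party library: pip install psutil

# Per-processor utilization over a five-second sample. With a single
# busy NIC, the core that services its interrupts tends to stand out.
for i, pct in enumerate(psutil.cpu_percent(interval=5, percpu=True)):
    print(f"CPU {i}: {pct:.0f}%")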

(Screenshot: NIC statistics showing the number of CRC and buffer errors)
It's worth taking a look at the statistics on your NICs as well.  Your NICs should be experiencing no errors, so if you see any, have a closer look.  Two types of errors are common: CRC errors and out-of-receive-buffer errors.  If you see CRC errors, there's probably a duplex mismatch between the server and the network switch, especially if you're running at 100Mb/s.  If your NIC speed is set to autonegotiate and you see CRC errors, try setting the NIC (and the switch port) to 100/Full.

If you see out-of-receive-buffer errors, that means the NIC is being overloaded with inbound data, doesn't have enough buffer memory to hold it all, and has to drop some incoming packets.  If you see these, try increasing the number of receive buffers in the NIC driver settings.  Don't increase it too much at once, since the buffers use more memory; raise it a bit and keep an eye on the counters to see whether you need to go further.
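Windows' generic counters won't split out CRC errors from buffer errors the way the driver's statistics page does, but a quick pass over the per-NIC error and drop counters will at least tell you whether anything is wrong.  A sketch with Python and psutil:

import psutil  # third-party library: pip install psutil

# Per-NIC error and drop counters since boot. Any non-zero value on a
# server NIC deserves a closer look in the driver's statistics page.
for nic, stats in psutil.net_io_counters(pernic=True).items():
    if stats.errin or stats.errout or stats.dropin or stats.dropout:
        print(f"{nic}: errors in={stats.errin} out={stats.errout}, "
              f"drops in={stats.dropin} out={stats.dropout}")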

Disk
OK, here's where the money is.  As we've said, disks are the slowest part of the server.  They're mechanical rattletraps.  In most cases, when I analyze performance problems, I find that the disks are the bottleneck.  So what can you do to improve disk performance?

First, there are several types of disks, including SATA, SAS, and Fibre Channel (or SCSI in older servers), which refer to the type of electronic interface built into the disk drive.  Fibre Channel drives are usually reserved for SAN storage arrays and high-end UNIX servers.  Internal disks in Windows servers are typically SATA or SAS these days.  If you're building a server with internal storage that requires good performance, use SAS disks.  These generally perform better than SATA disks.

Definitely spring for the array controller.  It lets you create an array of multiple disks and spread disk writes across them, which improves performance significantly.  Array controllers also often provide cache memory that dramatically improves performance.

Disk cache on the array controller, specifically write cache, requires a battery (unless it's flash memory) so that data can be held safely in memory for a while before it is written to disk.  If the server loses power, the battery keeps the data in the cache safe until power is restored and the data can be flushed to disk.  So buy the battery pack or flash cache for the array controller, and while you're at it, buy the biggest cache you can.  And remember: if your server's performance suddenly gets worse, check the battery!  A dead cache battery will disable the write cache, dramatically slowing down disk writes.

When building the disk array, more disks are better than fewer.  If you need 200GB of space and you have a choice between two large drives or four small drives, use four small drives.  Spreading the writes across more disks improves performance because each disk can write its chunk of the data simultaneously with the others, and each disk head has to move less since it's only writing part of the data.  Of course, this is an oversimplification; disk layout deserves its own article, if not its own library, but in general, more disks mean better performance.
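If you want to see how busy each physical disk actually is, here's one more sketch with Python and psutil that samples per-disk read and write activity over five seconds (on Windows the disks show up as PhysicalDrive0, PhysicalDrive1, and so on):

import time
import psutil  # third-party library: pip install psutil

# Per-physical-disk read/write rates over a five-second sample. One disk
# doing all the work is a hint that spreading the load across more
# spindles would help.
before = psutil.disk_io_counters(perdisk=True)
time.sleep(5)
after = psutil.disk_io_counters(perdisk=True)
for disk, stats in after.items():
    if disk not in before:
        continue
    reads = (stats.read_count - before[disk].read_count) / 5
    writes = (stats.write_count - before[disk].write_count) / 5
    print(f"{disk}: {reads:.0f} reads/s, {writes:.0f} writes/s")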

SAN Storage
So far, I've only discussed internal disks.  While some of the same rules apply, there are a few more choices to make when building SAN storage.  First, you'll have a choice of Fibre Channel or iSCSI attached storage.  In general, Fibre Channel has been considered the more expensive but higher-performing of the two.  iSCSI is typically limited to 1Gb/s connection speeds (although 10Gb/s is available), while Fibre Channel connections run at 4Gb/s or 8Gb/s.

Two Fibre Channel HBAs are required to provide redundant connections between the server and its SAN storage, and the same is true for iSCSI, though iSCSI runs over Ethernet instead of Fibre Channel.  iSCSI can use standard NICs or iSCSI HBAs; the difference is that iSCSI HBAs have a chip on board that handles the translation between SCSI and Ethernet, while with a standard NIC the CPU must do that work.  So if you're using standard NICs for iSCSI and your CPU utilization is high, consider buying iSCSI HBAs to improve performance.

SAN storage arrays will also have cache memory and batteries as described above.  If you have the option, increase the size of the cache, and keep an eye on those batteries.

I have an urge to delve into disk layout at this point, but I think I'll save that for another article.
