Someone once said you can never be too rich or too thin. Too rich is not in my future, so I thought I would talk about too thin this month: thin servers, that is. Blade servers, which are essentially servers on a card, are starting to garner some interest and sales. I saw a research report from Gartner predicting that blade-server sales will jump from 84,000 units in 2002 to over one million in 2006. OK, I know I gave them a double yawn last month, but I suspect that thin is here to stay. After all, most servers are built on a desktop hardware model. That PC paradigm has been around for over 20 years; it is based on an open, extensible architecture that allows a system to be extensively modified to suit any particular user or purpose. Isn't it time we started looking at building servers that are optimized to do only what they are required to do?

What Are They?

Blade servers really are servers on a card. A blade server consists of a chassis capable of hosting multiple blades. Each blade is a fully functional computer system on a single board, with a processor, memory, network connections, and usually some sort of local disk storage. The blades plug into a chassis that provides the shared infrastructure: power supplies, cooling fans, cabling, networking, video, and keyboard interfaces. The chassis typically is designed for standard mounting racks with a form factor of 3U (5.25 inches) or more. (The term form factor is a generic computer term used to define the physical shape and size of a device.) The first obvious advantage of these things is their small footprint. It is possible to mount a half-dozen servers (six blades and a chassis) in the space often used for a single standalone rack server. There is a wealth of benefit to be derived from this model. Right now I use a minimum of seven cables for each working server (two power, video, keyboard, mouse, and two Ethernet). Imagine running six servers with the same number of cables! Still, there is a lot more to this picture than cables. Let's take a look.

In late 1999 and early 2000, a number of start-ups began promoting and manufacturing blade servers. RLX Technologies (http://www.rlx.com) specializes in blade-server systems and appears to have become a player in the blade-server marketplace. In 2000, it designed a server made to house 24 blades in a 3U chassis. Its business partners include IBM, Intel, Pemstar, and Transmeta. Another survivor is Egenera (http://www.egenera.com), which was founded in March 2000. Egenera's focus is on blades for data centers using Intel processors and Unix (Linux) operating systems. It has partnered with BEA Systems, Intel, Oracle, Red Hat, and Veritas. The early success of these two prompted the big boys to jump on the blade wagon. Currently Dell, IBM, Hewlett-Packard, and Sun manufacture blade-server solutions. I find it interesting and encouraging that there is still room for innovative technology advances from start-ups; you don't necessarily require a multimillion-dollar research budget to come up with the next big thing.

Why a Blade?

The concept sounds great. Making electronic gear smaller generally is considered a good thing, but what are the real advantages of blade servers?

Save space. Data-center real estate is at a premium, and blade technology certainly allows for optimizing use of that space. I was visiting a major insurance company a few years ago and was surprised to see large, unwieldy production servers sitting in a passageway in the IT department cube farm. (There's some good security for you.) Was this just a matter of convenience, or was the server room already at capacity? I don't know, but I suspect the latter. Rack-mounted blade systems can increase server density by a factor of two or more. The low-end figure assumes you replace 1U servers with a 3U blade system containing eight blades; in the real world, server density can increase by a factor of six or more, as the rough arithmetic below suggests. When blades first hit the marketplace, the dot-com boom was peaking and the pundits assumed there would be exponential growth in server demand. Reality has replaced that thought, but small systems are a nifty feature.
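A quick back-of-the-envelope calculation makes the density claim concrete. The figures below are illustrative assumptions rather than vendor specifications: a standard 42U rack, a 3U chassis, and blade counts of eight (a conservative chassis) and 24 (an RLX-style chassis).

    # Rough server-density arithmetic for a standard 42U rack.
    # All figures are illustrative assumptions, not vendor specs.

    RACK_UNITS = 42          # a common full-height rack
    CHASSIS_UNITS = 3        # 3U blade chassis
    BLADES_LOW = 8           # conservative blades-per-chassis figure
    BLADES_HIGH = 24         # RLX-style high-density chassis

    standalone_servers = RACK_UNITS                  # one 1U server per rack unit
    chassis_per_rack = RACK_UNITS // CHASSIS_UNITS   # 14 chassis per rack

    for blades in (BLADES_LOW, BLADES_HIGH):
        blade_servers = chassis_per_rack * blades
        factor = blade_servers / standalone_servers
        print(f"{blades} blades/chassis -> {blade_servers} servers per rack "
              f"({factor:.1f}x the density of 1U servers)")

    # Prints roughly 2.7x for eight blades per chassis and 8.0x for 24.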

Reduced power consumption. This makes sense. Shrinking component size has forced manufacturers to make each component more energy efficient. Intel has a line of low-voltage, low-power processors optimized for blades, and the miniature hard drives on a blade consume less energy than a standard drive. Each standalone server requires its own power supplies and cooling fans; blade technology permits many servers to share the same resources.
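To see why shared power and cooling matter, here is a minimal, purely illustrative comparison. The wattage figures are assumptions I picked for the sake of the arithmetic, not measurements from any vendor.

    # Illustrative power comparison: standalone 1U servers vs. a blade chassis.
    # Wattage figures are assumptions for the sake of the arithmetic only.

    STANDALONE_WATTS = 250        # assumed draw of a 1U server, PSU and fans included
    BLADE_WATTS = 100             # assumed draw of one blade (low-power CPU, small disk)
    CHASSIS_OVERHEAD_WATTS = 400  # assumed shared power supplies, fans, and switches
    BLADES = 8

    standalone_total = BLADES * STANDALONE_WATTS
    blade_total = BLADES * BLADE_WATTS + CHASSIS_OVERHEAD_WATTS

    print(f"8 standalone servers: {standalone_total} W")   # 2000 W
    print(f"8 blades + chassis:   {blade_total} W")        # 1200 W

Under those assumptions, the chassis pays its overhead once, while the standalone servers each pay for their own supplies and fans.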

Directly related to power consumption is the reduced heat generated by a bank of blade servers. Have you ever spent time in your server room when the air conditioning was out? If so, you know how the sauna was invented.

Reduced hardware and maintenance costs. Just as the density factor goes up, the hardware costs come down. The cost of each blade is significantly less than that of a standalone server. The initial round of savings comes from shared hardware resources on the blade chassis: power supplies, network switches, KVM switches, video cards, remote management cards, peripheral device cards, and so on. Reduced maintenance costs also should follow. The blades themselves have fewer components to go bad than a traditional server box, which means a greater mean time between failures (the sketch below shows why). Additionally, the chassis can be manufactured with higher-grade components than might be used in a standalone server, since the cost is spread over multiple systems.
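The fewer-components argument is straightforward reliability arithmetic: if components fail independently, their failure rates roughly add, so stripping the power supplies, fans, and extra drives off the blade raises the blade's mean time between failures. The component lists and MTBF figures below are assumptions for illustration, not published reliability data.

    # Rough system-MTBF arithmetic assuming independent component failures.
    # Component lists and MTBF-hours figures are illustrative assumptions only.

    def system_mtbf(component_mtbfs):
        """Combined MTBF when failure rates (1/MTBF) simply add."""
        total_rate = sum(1.0 / m for m in component_mtbfs)
        return 1.0 / total_rate

    # A standalone server carries its own power supplies, fans, and drives.
    standalone = [300_000, 300_000,   # two power supplies
                  150_000, 150_000,   # two fans
                  500_000, 500_000,   # two disks
                  1_000_000]          # board, CPU, and memory taken together

    # A blade sheds the supplies and fans (they live in the chassis)
    # and usually carries a single small disk.
    blade = [500_000,                 # one disk
             1_000_000]               # board, CPU, and memory taken together

    print(f"standalone MTBF: {system_mtbf(standalone):,.0f} hours")
    print(f"blade MTBF:      {system_mtbf(blade):,.0f} hours")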

Easier deployment and administration. Adding a server can be as simple as plugging a new blade in and dropping an image on it. No new cables need to be run, and no new hardware needs to be bolted into the rack. (Have you ever plugged the wrong Ethernet cable into a server or a switch?) Replacing failed servers or adding on to your server farm is likewise much easier. Failover servers for your entire Web system can be ready to go in minutes using a blade system.
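To give a flavor of what "plug in a blade and drop an image on it" could look like in practice, here is a minimal sketch against a hypothetical chassis-management HTTP API. The host name, endpoints, and payload fields are all invented for illustration; real blade chassis each ship with their own proprietary management tools.

    # Hypothetical provisioning sketch: find an empty blade slot and assign an
    # OS image to it. The management URL, endpoints, and fields are invented
    # for illustration; no real vendor API is being described here.
    import requests

    CHASSIS_API = "http://chassis-mgmt.example.com/api"   # hypothetical endpoint

    def provision_new_blade(image_name: str) -> None:
        # Ask the chassis which slots currently hold unconfigured blades.
        slots = requests.get(f"{CHASSIS_API}/slots", timeout=10).json()
        empty = [s for s in slots if s["state"] == "unconfigured"]
        if not empty:
            raise RuntimeError("no unconfigured blades in the chassis")

        slot_id = empty[0]["id"]
        # Tell the chassis to network-boot that slot from the named image.
        requests.post(
            f"{CHASSIS_API}/slots/{slot_id}/image",
            json={"image": image_name, "boot": "pxe"},
            timeout=10,
        ).raise_for_status()
        print(f"slot {slot_id} is imaging from {image_name}")

    provision_new_blade("web-server-gold-image")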

Whats the Catch?

Exactly. These things sound great, but so did the Newton (a handheld computer released by Apple in 1993). So just what are the limitations of a blade server?

WYSIWYG. That's not a mistake: what you see is what you get. You can't upgrade a blade. These cute little servers on a card allow for no expansion or field modifications (at present). Think laptop; you can't really change the configuration of your lug-around computer.

Limited storage. Blades currently are available that can handle two high-speed SCSI drives with a total capacity of about 150 GB. This is plenty of storage for traditional data processing and Web serving but may fall short for high-end database users. Only a limited number of interfaces are available on a single blade, so you cannot add drives using traditional technology. Interestingly, Egenera is marketing its products in the very teeth of these limitations: its line of blade servers is targeted at high-end data-center operations.

Lack of horsepower. We talked about special low-energy processors designed for blades. Low energy also means lower processor speeds, so current blade-optimized processors lag the state of the art. Dell's PowerEdge 1655MC blade offers dual 1.4 GHz Pentium IIIs, while standalone Dell servers are available with 2.4 GHz Xeon chips. Blades with top-end processors are available, but they aren't cheap.

No hardware standards. If you buy a particular manufacturer's blade chassis, you almost certainly will need to use its blades. The only current standard for blade compatibility is CompactPCI, which was introduced in 1994 and is an industrial bus based on the standard PCI specification. Unfortunately, it does not scale to high-performance machines and is used almost exclusively for telecommunications and industrial applications; none of the latest generation of blade servers is based on it. The PCI Industrial Computer Manufacturers Group (http://www.picmg.org/) is working on a new standard that may be approved this year. That certainly doesn't mean the current manufacturers will line up behind the new standard, but one can hope.

The unknown. Efficiently managing a rack of blade servers will require new management tools and paradigms. Reducing the number of Ethernet cables means switches probably will need to be provided in the chassis; you can't just dump a rack of servers onto a single cable and let the collisions begin.

Cutting edge is not always good. Blade computing still is relatively new (even in Internet time, which itself has slowed down). The major hardware manufacturers only jumped into the fray in 2002, so this is still first-generation technology. Moving to blade technology could be a little dicey until there are agreed-upon standards.

So What's the Scoop?

To blade or not to blade? There are major players successfully using blade systems. Credit Suisse First Boston announced early in 2002 that it had replaced 20 standalone RISC servers with a BladeFrame system from Egenera that processes more than 60 million financial transactions a day. Los Alamos National Laboratory in New Mexico is running a 240-processor cluster built from RLX blades sitting in a single rack. (Build your own supercomputer!) These are serious systems, and blade servers obviously are already filling a need in some sectors. Nevertheless, for my money I would wait for a more mature technology; first-mover advantages are often in the perception, not in the bottom line. By 2004, we will see cross-manufacturer blade standards as well as robust management tools. Processing speeds at a reasonable cost will be faster, and storage limitations will be reduced. IDC (http://www.idcresearch.com) predicts a $3.7 billion market by 2006. I predict by that time a lot of us already will be using a rack of blades to serve up our Web content.

Beyond Blades

There is an interesting alternative to a blade server. OmniCluster Technologies (www.omnicluster.com) offers a server on a standard half-length PCI card. Its SlotServer 1000 is equipped with a 300 MHz x86-compatible processor, an optional 20 GB SlotDrive, and up to 512 MB of RAM, and it draws 10 watts of power from the PCI slot. External interfaces include audio (though why would a server need it?), 10/100 Ethernet, SVGA, and USB 1.0 for your keyboard and mouse. This isn't a business-class machine, but it might be fun to throw one in the back of a PC and run a few Web sites.
