What makes a server a server?

AnandTech is running an article series about what makes a computer a server; the first installment was posted today. In their words, the difference is:
Basically, a server is different on the following points:
  • Hardware optimized for concurrent access
  • Professional upgrade slots such as PCI-X
  • RAS features
  • Chassis format
  • Remote management
One of the things that makes my life a LOT simpler is the remote-access card in our servers. We use HP, so it's called an Integrated Lights-Out card, or iLO. That card is the difference between having to drive in (a 20-25 minute drive) and fixing something from my living room at home. The most useful thing the iLO gives me is a power button I can push from home. Second to that, it lets me see what the server is presenting on the screen, like a KVM; quite useful for Abend screens. I don't know if they even make things like that for home-brew servers.
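For the curious, here's roughly what that remote power button looks like when scripted. This is a minimal sketch, assuming the management card speaks IPMI over LAN (later iLO generations do, as do most generic BMCs) and that ipmitool is installed on your workstation; the hostname and credentials below are placeholders, not ours.

```python
# Minimal sketch: query (and optionally cycle) a server's power state remotely,
# assuming the management card supports IPMI over LAN and ipmitool is installed.
import subprocess

ILO_HOST = "ilo-server01.example.com"   # hypothetical management address
ILO_USER = "admin"                       # hypothetical credentials
ILO_PASS = "secret"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the remote management card."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", ILO_HOST,
           "-U", ILO_USER, "-P", ILO_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
# ipmi("chassis", "power", "cycle")         # the "power button from home"
```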

RAS isn't Remote Access Service; it's an acronym for Reliability, Availability, Serviceability. Servers almost always have redundant power supplies, where desktop hardware doesn't. Servers almost always have hot-swappable hard drives, where desktop hardware doesn't. Some servers (especially ones running NetWare) even have hot-swappable CPUs. These sorts of features are slowly creeping into the desktop realm, but some just don't make cost-effective sense there. With RAID becoming more and more prevalent in the home, hot-swappable drives will become more common there too. But hot-swap power supplies? Probably not.
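Speaking of RAID in the home: if you're running Linux software RAID (md) on a home-brew box, a poor man's version of server-style drive monitoring is just watching /proc/mdstat. A minimal sketch, assuming Linux md RAID; hardware controllers need their vendor's tools instead.

```python
# Minimal sketch: flag degraded Linux software RAID (md) arrays by parsing
# /proc/mdstat -- no hot-swap backplane or vendor agent required.
import re
import sys

def degraded_arrays(mdstat_path="/proc/mdstat"):
    """Return names of md arrays whose status line shows a missing member."""
    degraded = []
    current = None
    with open(mdstat_path) as f:
        for line in f:
            header = re.match(r"^(md\d+)\s*:", line)
            if header:
                current = header.group(1)
                continue
            # Status lines look like "... [2/2] [UU]"; an underscore in the
            # bracketed member map (e.g. "[U_]") means a failed/missing disk.
            status = re.search(r"\[([U_]+)\]", line)
            if current and status and "_" in status.group(1):
                degraded.append(current)
    return degraded

if __name__ == "__main__":
    bad = degraded_arrays()
    if bad:
        sys.exit("Degraded arrays: " + ", ".join(bad))
    print("All md arrays look healthy")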

Chassis format is another key area. When you scale past a certain point, racking your servers makes more and more sense. Once you get there, your options for whiteboxing your way to IT Glory become much more limited. Engineering a 1U server takes real work, since that form factor is very prone to overheating with today's power-hungry components. And if you want to go blades... you'll have to go OEM instead of building your own.

The article concludes with a discussion of blades and their place in the server market, and it makes some really good observations. Blades are useful if you really do need 24 mostly identical servers. We have a blade rack with 24 servers, and it was filled within 12 months of its arrival. What filled it were replacements for servers moving onto "they want HOW much?" maintenance plans, where buying new was deemed the way forward rather than keeping maintenance on the old stuff. We won't face that situation again for another two years, when another cluster of servers hits that exalted 40%-of-purchase-price-a-year maintenance level. So our next round of server replacements will probably be a bunch of 1U and 2U servers rather than another blade rack.
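To put some rough numbers on why that 40% level is where the math flips, here's a purely illustrative back-of-the-envelope; the prices are made up, not our actual costs.

```python
# Back-of-the-envelope sketch of why "40% of purchase price per year"
# maintenance pushes you toward buying new. All numbers are hypothetical.
purchase_price = 10_000      # hypothetical original cost of a server
maint_rate = 0.40            # annual maintenance as a fraction of purchase price
replacement_price = 8_000    # hypothetical cost of a newer, faster box

for year in range(1, 5):
    cumulative_maint = purchase_price * maint_rate * year
    print(f"year {year}: cumulative maintenance = ${cumulative_maint:,.0f}")
# After roughly two years, the maintenance spend alone rivals the cost of a
# replacement -- and the new box comes with its own cheap warranty years.
```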

But when the blade rack itself hits that high-maintenance-cost level, we'll probably replace it with another blade system. By then, perhaps, the OEMs will have figured out how to future-proof their blade infrastructure so that one set of blade chassis can, maybe, survive two generations of servers. Right now, that's not the case.