Microsoft released Service Pack 1 for Windows SharePoint Services 3.0 (WSS 3.0) and Microsoft Office server systems in December 2007. At the same time, it published new guidelines for recommended hardware and operating systems to support those products. Those guidelines opened with this warning: "In planning your hardware, remember that this is the last version of SharePoint Products and Technologies that will run on 32-bit operating systems and databases."

A few sentences later, Microsoft upped the ante across the board. SQL Server 2000 is no longer an acceptable database for SharePoint, and Microsoft strongly suggests 64-bit hardware and software be used: "We highly recommend that you install front-end Web servers on 64-bit Office SharePoint Server 2007 on a 64-bit operating system, unless you have a significant business reason not to."

A similar recommendation followed for 64-bit SQL Server. That is a radical change. The product has not even been out for a year. At release, 32-bit platforms and the older version of SQL Server were just fine. Microsoft must have some very good reasons for effectively making implementation of one of its flagship products more costly. I suspect the driving force behind these recommendations is performance problems with the product in large installations, but that really doesn't matter. Could similar recommendations for other products be coming soon? That speculation just reinforces the fundamental question: Why are so many of us still using 32-bit server operating systems?

Let me make one thing clear from the start. This article is discussing 64-bit hardware based on Intel x86 architecture, and that is referred to as x64. In the Windows/Intel world, we have been stuck with processor-level backward compatibility since the first PCs. That compatibility trail leads back to the original 16-bit 8086 CPU, which was introduced in 1978. Maybe the question we really should be asking is when are we going to break backward compatibility and start with a tabula rasa? But that is another discussion.

Intel, in cooperation with HP, has designed and built another series of 64-bit processors that are referred to as Itanium and are not based on x86 architecture. Itanium-based servers are incredibly scalable, allowing configuration in systems of as many as 512 processors and a full petabyte (1024TB) of RAM. The Itanium architecture is based upon instruction-level parallelism, which allows the processor to execute multiple instructions per clock cycle. Microsoft servers offer some Itanium support (two versions of Server 2003) and will provide much greater support with the release of Server 2008. An Itanium version of SQL Server 2005 is available, which is wonderful, but most applications still are stuck in x86 land, so the benefits of Itanium-based systems are yet to be discovered. That is why we are discussing 64-bit hardware and operating systems based on x86 technology here.

There is one very quick win associated with 64-bit systems–memory. Thirty-two-bit Windows operating systems have a memory address space that is 32 bits wide and can thus reference about 4.3 billion locations, or 4GB (gigabytes; 1GB is 1,024 megabytes). Half of that 4GB (2GB) is allocated for kernel use, with the rest available for applications. Each application can have its own 2GB of memory, but they all must share the same 2GB of kernel memory. The /3GB boot switch (the feature also is known as 4GT, for 4-gigabyte tuning) reallocates that split, giving applications 3GB. That has the effect of reducing kernel memory to 1GB, so now all applications are sharing half as much kernel memory. Fine-tuning the available 4GB of memory may provide better performance in certain situations with certain applications but is not a universal solution. Some applications derive no benefit from the /3GB option, and others fail to support it.
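The arithmetic behind that split is worth seeing once. This is a back-of-the-envelope sketch in Python–illustrative numbers only, not a measurement of any real system:

```python
# The 32-bit address-space math described above, spelled out.
GB = 1024 ** 3

address_bits = 32
total = 2 ** address_bits          # every location a 32-bit pointer can name
print(total)                       # 4294967296 - about 4.3 billion locations
print(total // GB)                 # 4 GB in total

# Default Windows split: 2 GB for the kernel, 2 GB per application.
kernel_default = total // 2
user_default = total // 2

# With the /3GB (4GT) switch, the kernel keeps 1 GB and applications get 3 GB.
kernel_3gb = total // 4
user_3gb = 3 * total // 4
print(user_3gb // GB)              # 3 GB available to a large-address-aware process
print(kernel_3gb // GB)            # 1 GB of kernel memory shared by everyone
```

The point the numbers make is that /3GB is a trade, not a gift: every extra byte handed to applications comes straight out of the kernel's shared pool.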

Physical Address Extension (PAE) is a memory extension feature available on certain Intel chips that allows additional physical memory (beyond 4GB) to be mapped and used. Actual access to that memory depends on the operating system providing a mechanism to do so. For supported Windows operating systems, that mechanism is known as Address Windowing Extensions (AWE). So, theoretically, an application can have access to more than 4GB of physical memory, but there's a catch. You really can't get at all that memory contiguously. The additional memory is useful only for caching data pages, not for performing actual work.

This can be very useful in that data can be paged in and out of physical memory instead of a disk paging file. That provides both additional security and performance but once again is not a feature that can be used by every application. The 32-bit application itself isn't actually aware of the additional memory beyond 2GB. The 32-bit 4GB memory limitation is real, and while there exist various clever methods that dance around that limitation, none truly circumvent it.
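The windowing idea is easier to see in miniature. Below is a toy model in Python–emphatically not the real Windows AWE API (that involves calls such as AllocateUserPhysicalPages and MapUserPhysicalPages)–showing how an application with a small, fixed window can still reach a much larger pool of "physical" pages by remapping:

```python
# Toy model of AWE-style windowed memory access (illustration only).
class WindowedStore:
    """A large pool of pages viewed through a small, remappable window."""

    def __init__(self, total_pages, window_pages, page_size=4096):
        # The backing pool stands in for physical memory beyond 4GB.
        self.pages = [bytearray(page_size) for _ in range(total_pages)]
        self.window_pages = window_pages
        self.mapped = []               # pages currently visible to the "app"

    def map(self, first_page):
        # Remap the window onto a different run of pages. Nothing is
        # copied to disk - unlike ordinary paging, the data stays put.
        self.mapped = self.pages[first_page:first_page + self.window_pages]


store = WindowedStore(total_pages=1024, window_pages=4)  # 4MB pool, 16KB window
store.map(512)                       # bring pages 512-515 into view
store.mapped[0][0] = 0xFF            # writes land directly in the backing pool
store.map(0)                         # look elsewhere for a while...
store.map(512)                       # ...then come back
print(store.mapped[0][0])            # 255 - the data survived the remap
```

This also shows the catch mentioned above: the application never sees the whole pool at once, which is why the extra memory works well as a cache but not as ordinary working storage.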

So, let's leave 32-bit systems behind. Instead of an address space of 2^32 locations, we have 2^64. And that is a huge difference. A 64-bit operating system running on a 64-bit platform theoretically can address a phenomenal amount of memory: 16.8 million terabytes (far less in the real world). As a practical matter, most 64-bit systems we are likely to work with now will not exceed 32-64GB of physical RAM, but that is a very significant gain from 4GB.
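For the skeptical, the arithmetic checks out–a quick sketch:

```python
# Address-space arithmetic for 32-bit versus 64-bit pointers.
TB = 2 ** 40                 # one terabyte, in bytes

print(2 ** 32)               # 4294967296 bytes - the 4GB 32-bit ceiling
print(2 ** 64 // TB)         # 16777216 TB - roughly 16.8 million terabytes
print((2 ** 64 // TB) / (2 ** 32 // 2 ** 30))  # and the ratio to 4GB is absurd
```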

I find it amazing we can discuss gigabytes of RAM. It wasn't all that long ago (early '90s?) I purchased a 16MB stick of RAM for my 486-66 system at a cost of $750. That purchase was on my own dime, so I remember it well. I don't think I even rescued that very expensive piece of silicon when I recycled that box not long ago.

Excuse the aside–64-bit systems will provide major performance improvements in most situations. Consider a Web application. Memory and processor use generally are the only things we are concerned with for most Web applications. Rarely is a Web application disk intensive. Processor time can be a bottleneck, but with 3GHz-plus multicore, multiprocessor machines available, that rarely is the case. The primary reason for poor performance in a Web application is lack of physical memory and the resultant disk thrashing as the OS swaps memory in and out of the page file. Bump that 4GB to 16GB and your performance bottleneck disappears. That is the quick and easy win with 64-bit systems.

Database servers can benefit even more from 64-bit technology. Once again, the larger available memory is the main advantage. With 64-bit environments, there also is the ability to support up to 64 processors. SQL Server originally was written to parallelize work as heavily as possible across physical and virtual processors, and in existing 32-bit setups, it does so very effectively. If you move an existing SQL Server workload from a 32-bit, four-way system to, say, a 64-bit, 16-way system, the work will not only be spread out across more individual CPUs but also will be handled all the more efficiently. Microsoft expects 64 bits finally will start to quiet the "It's not Oracle" naysayers who always have surrounded SQL Server.

The use of virtual servers has increased dramatically. Well over half of the work I do (on Windows Servers) is in a virtual environment. Most of those VMs have been 32 bit–so far–but that is changing. The VMware ESX Server 3i provides 64-bit operating system support. Microsoft is touting the "unlimited virtualization with Windows Server Datacenter edition" running on 2-64 processor machines. The additional memory available in a 64-bit virtual machine server probably can make the decision to use virtualization even easier. Lack of available memory always has been the biggest disadvantage of virtualization. Sixty-four-bit technology can make that limitation disappear.

Let's go back to the opening paragraph and the Microsoft recommendations. It really is pretty simple. Microsoft has made a decision 64 bit is going to be the way of the future. And it doesn't matter how long we extol the advantages or disadvantages of life in that future. There are better, simpler, and more secure platforms available than x64 systems. But that is the price we have to pay if we are going to continue to use the most widely accepted suite of server and productivity software available.

There really isn't a Microsoft-Intel conspiracy to create and sell machines and software that perform less than optimally. Microsoft was faced with an early success with Windows 3.1 and the 16-bit versions of Office. They became entrenched in corporate America before the products were truly ready for prime time. Once that happened, Microsoft had no choice but to develop new products and operating systems that would work on existing hardware. Intel likewise was forced to create new processors based on an antiquated instruction set. If the PC revolution could have been delayed by six to 18 months, we might have been able to start with a better baseline than 8086 technology. The instant success of the IBM PC and all its clones doomed us to an address space that was perfectly adequate for DOS. In fact, Lotus 1-2-3 running on DOS (with dual 5¼-inch floppies) provided the first compelling reason to purchase a PC. Whether Bill Gates really said nobody ever will need more than 640KB of RAM is irrelevant. Once the product base was out there, our fate was sealed.

So x64 platforms are the next logical progression. I would prefer to make the quantum leap to something like an Itanium system. Machine-level parallel execution is seriously cool. It might even convince me to go back coding in assembler. Bit twitching was what made programming fun in the first place. The problem is I live in the real world and I get paid to build software solutions on systems that exist in a universe primarily built on 32-bit x86 systems.

I have not seen great success with 64-bit operating systems for desktop use. I loaded the 64-bit version of Vista on my 64-bit AMD laptop (2GB of RAM) and decided that until I was willing to pay the price for another 2GB of memory, I was better off running the 32-bit version. I have heard similar reports from colleagues. Desktop (meaning machines I can't use on an airplane) memory is fairly inexpensive, so I imagine 64-bit operating systems will become common, particularly when more organizations make the move to Vista.

Unfortunately, many IT and other professionals must have portable machines. We already are paying a premium for laptop computers. Once the additional cost of expensive laptop memory upgrades is added, the benefits of a "better" operating system may disappear into the total cost of ownership equation–which leads me to suspect that while x64 is going to be a necessity in the data center, we will continue to see a mix of machines and operating systems for productivity machines. In fact, I am seeing a lot of creative solutions, such as 64-bit Linux laptops running Windows in a VM to do "Windows stuff." Solutions such as that allow hard-core geeks to express their "freedom" while still toeing the corporate line in the data center. Who knows? We actually may break free from x86 technologies some day. Or not.

Please address comments, complaints, and suggestions to the author at [email protected].
