Virtualization and virtual machines have been getting a lot of attention lately. This is mainly due to the increased availability and reliability of commercial virtual machine software from VMware and Microsoft. There is nothing new under the sun, though. Gerald J. Popek and Robert P. Goldberg's seminal 1974 paper, "Formal Requirements for Virtualizable Third Generation Architectures," pretty much defined the playing field for virtual machines, even though the field they were playing on was big iron. We may be heading down the road of esoterica today; my spell checker doesn't even recognize the word virtualizable.

So, What Is Reality?

Just what do we mean when we use the term virtual? It generally is applied to something that is not conceived of or perceived as real but yet acts like a real thing. Then what is reality? René Descartes' greatest contribution to Western culture was his work in mathematics, but he still is best known for his contention that his reality was defined by his ability to think or reason: cogito ergo sum. His ideas provided the basis of Western rational philosophy, which in turn was opposed by empiricism. The Irish philosopher Bishop George Berkeley took the idea to its extreme by asserting the only real existence of anything is the perception we have of that thing in our mind. Thus, nothing is real beyond its perception, and you know what that means for trees falling in the forest. Dr. Samuel Johnson, the 18th century poet, essayist, and biographer, was particularly frustrated with Bishop Berkeley's theories and is reputed to have kicked a heavy stone, saying, "Thus, I refute him."

Virtual Memory

Reality (as opposed to virtuality) has taken on new meanings in the world of digital computing and still is as difficult to define as ever. Virtual memory is something we all are familiar with. A digital computer operating system uses a persistent storage device (such as a hard drive) to expand the volatile memory it has at its command. Random access memory, or RAM, is the working area, the place where data is manipulated and processed. When we use up all available RAM, we can move portions of data to a persistent storage device and virtually expand main memory. This whole process also allows us to build multi-processing machines: entire running processes can be swapped in and out of RAM, allowing us to time-share the central processor. Ideally, everything stays in RAM, which avoids the latency of hard-disk writes and reads.
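
To make the mechanism concrete, here is a minimal Python sketch of the idea: a page table maps virtual pages to physical frames, and a page that has been moved out to the persistent store is faulted back in when it is touched. The page size, table contents, and single-free-frame policy are simplifying assumptions for illustration, not any particular operating system's implementation.

```python
# Toy model of virtual memory: a page table maps virtual pages to physical
# frames; pages not resident in RAM sit in a "backing store" (the disk).
# All sizes and contents here are arbitrary assumptions for the sketch.

PAGE_SIZE = 4096  # bytes per page; 4 KB is common, but this is just a demo

page_table = {0: 7, 1: None, 2: 3}          # virtual page -> physical frame (None = on disk)
backing_store = {1: "contents of page 1"}   # pages that were swapped out
free_frames = [5]

def translate(virtual_address):
    """Resolve a virtual address to a physical one, faulting the page in if needed."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:                 # page fault: the page lives on disk
        frame = free_frames.pop()     # grab a free frame (a real OS might evict one)
        backing_store.pop(page)       # "read" the page back from persistent storage
        page_table[page] = frame
    return frame * PAGE_SIZE + offset

print(hex(translate(1 * PAGE_SIZE + 42)))  # faults page 1 into frame 5, prints 0x502a
```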

Virtual Application Processes

Sun Microsystems coined the phrase "write once, run anywhere" to describe the portability of Java bytecode, or, more generally, code written using the Java programming language. By portability we mean the ability to run on a wide variety of machines using diverse and often incompatible operating systems. That portability dictates that the Java code cannot be compiled to native machine code before distribution, or it would not run on diverse systems. That little problem was solved by distributing Java Virtual Machines, which either interpret the code at run time or compile it at run time using a Just-in-Time (JIT) compiler. A Java Virtual Machine is computer software that creates an isolated environment in which the Java bytecode can be interpreted and then translated into operations the host operating system can carry out. Sun "only" needs to distribute Java Virtual Machines for all major operating systems and then deliver on the "write once, run anywhere" claim.
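
The core idea is easier to see in miniature. Here is a toy, stack-based interpreter written in Python; the opcodes are invented for this sketch and are not real Java bytecode, but the same portable instruction stream would run anywhere an interpreter exists.

```python
# Toy stack-based interpreter illustrating "write once, run anywhere":
# the same portable instruction stream runs wherever an interpreter exists.
# The opcodes below are invented for this sketch; they are not Java bytecode.

def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":        # host-specific work happens here: the interpreter
            print(stack.pop())     # maps the portable opcode onto a real OS call
        else:
            raise ValueError(f"unknown opcode {op!r}")

# The "compiled once" program: compute 2 + 3 and print it on any host.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]
run(program)
```

A JIT compiler works on the same instruction stream but translates frequently executed portions into native machine code instead of stepping through them one opcode at a time.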

Of course, we all know this got a little confusing when Microsoft decided to distribute its own Java VM, which broke a lot of Sun stuff. As a result, we learned to code to the Microsoft Java VM. Then the justice system made us return to the Sun Java VM, which broke all our new code. So much for standards.

Virtual Machines

Our present interest in virtual computing really does not jibe with what we have been discussing so far. VMware's software and Microsoft's Virtual PC are products that allow us to run entire computer systems within the context of another computer system. Just what is it that makes a digital computer a digital computer? Consider these three things or layers:

1. Hardware–the CPU and associated input, output, and storage devices.

2. The operating system–this is a stretch, but let's call an operating system computer software that provides a link between the hardware and productivity software.

3. Application software–the reason we are using computers in the first place, i.e., software that can function only within the operating system.

For our purposes right now, an example of a "real" machine would be an Intel P4 box running Windows XP and whatever productivity applications we want to assume. Virtual machine software such as Virtual PC runs in layer three above. In fact, it can be considered just another application, one that emulates a separately running operating system. This virtual machine is hosted in a real OS, and that real OS is the software that actually is interacting with the hardware. Both the advantages and the disadvantages of hosted virtual machines revolve around that fact.

The Good

Let's assume you need to distribute policy management software to thousands of independent agents. Helpdesk support for this group already is a nightmare. Individual handholding for each office is out of the question. What if you could distribute your software not just as an application but as a fully configured application running in its own virtual machine? Your users would need to install only the OS-specific VM and configured application.

A few years ago I spent a couple of days in the data center of a major insurance carrier installing an application we had licensed to it. Installation of the actual software was a snap, but there were specific configurations that had to be accomplished manually after installation. That process took hours because I was not permitted to touch a keyboard. The whole project would have been that much easier had I been able to install a virtual machine with the software installed and preconfigured.

The Bad

Here is another scenario. You still are going to distribute policy management software to thousands of independent agents. You know your customers are running operating systems ranging from Windows 98 to XP Professional. So, your QA team decides it is going to test on hosted VMs. Team members set up multiple virtual test environments with different operating systems and different configurations (available RAM, processor speed, etc.). They intend to load up a virtual machine and then install, run, and load test the product in that VM. If it passes, it subsequently is released for that operating system. That doesn't sound like such a bad idea, but I don't like it. A hosted virtual machine is just that–an operating system running within the context of another operating system. If my OS actually isn't touching the hardware, I have no guarantees a "real" system running that OS is going to act in exactly the same way.

The Ugly

Running virtual machines in a hosted environment is a kludgy process. Operating systems already are bloated, and manufacturers force such quantities of proprietary stuff on us that system resources are pushed to the limits before we even start. My very nice laptop computer with a well-known three-letter-acronym nameplate came from the factory with so much preloaded "productivity" software I was considering installing a DOS VM in order to run WordPerfect 5.1 and get some work done. The registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run had a list of entries so long I needed to scroll the page. No way am I going to be able to run a hosted VM on this box. But even clean, well-maintained machines bog down when running VMs. We have a demo box on which Windows Server 2003, running SharePoint Server, SQL Server, and Content Management Server, sits in a VM hosted on XP, and the XP host accesses the data served up by that hosted process.
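
For anyone curious just how long that list is on their own machine, here is a short Python sketch using the standard-library winreg module (Windows only) that prints everything registered to launch from that Run key. It is offered only as one way to look; nothing in the scenario above depends on it.

```python
# List programs registered to start automatically from the
# HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run key.
# Windows only; uses only the standard-library winreg module.
import winreg

RUN_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY) as key:
    value_count = winreg.QueryInfoKey(key)[1]      # number of values under the key
    for i in range(value_count):
        name, command, _type = winreg.EnumValue(key, i)
        print(f"{name}: {command}")
```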

Hardware-Level VMs

If we really want to create efficient virtual machines, we need to drop down to the machine level. Let's face it: Once we get too far beyond the processor's basic instruction set, we are virtualized anyway. The heart of the machine is just a series of NAND and NOR gates, assuming the smallest possible instruction set. By the time we get to pushing and popping data off the stack, we are out of the binary-switch world and already virtualized. Still, the reality is that an operating system will expect certain behavior from the hardware.
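
The point that everything above the gates is already an abstraction can be made concrete in a few lines of Python: familiar logic such as NOT, AND, and OR can be built from nothing but NAND. This is an illustration of the principle, not a picture of how any real processor is laid out.

```python
# Everything above the gate level is already an abstraction:
# NOT, AND, and OR can all be built from NAND alone.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

# Print the truth table for the derived AND and OR gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_(a, b), or_(a, b))
```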

The software layer that provides hardware-level virtualization is known as a hypervisor, and it allows different operating systems to be run on the same machine at the same time. Hypervisors emerged in the late 1960s and 1970s, most particularly on IBM System/360 and System/370 machines that were designed and built specifically to enable virtualization. I cut my teeth on Virtual Machine/Conversational Monitor System (VM/CMS) machines, and I can attest to their versatility. x86 systems, on the other hand, were not designed for virtualization, which makes them rather difficult to virtualize fully. One way around this is paravirtualization, a scheme whereby a relatively simple hypervisor, which delivers a reduced virtualized hardware layer, is matched up with an operating system specifically designed to operate with that reduced layer. The reduced virtualization generally relies on a more trusted security level, or protection ring, afforded by the actual processor. In essence, paravirtualization provides a machine-level API that a modified OS must use.
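
Here is a rough Python sketch of that idea, assuming an invented hypercall interface (the names are made up for illustration and do not belong to Xen or any real hypervisor): instead of executing privileged instructions and hoping they trap, the modified guest asks the hypervisor to do the privileged work through an explicit, reduced API.

```python
# Conceptual sketch of paravirtualization: the guest OS is modified to call a
# reduced, explicit hypervisor interface ("hypercalls") instead of executing
# privileged instructions directly. All names are invented for illustration.

class Hypervisor:
    """Owns the real hardware and exposes a small hypercall API to guests."""
    def __init__(self):
        self.page_tables = {}

    def hypercall_update_page_table(self, guest_id, virtual_page, frame):
        # The hypervisor validates and applies the mapping on the guest's
        # behalf, instead of letting the guest touch the MMU directly.
        self.page_tables.setdefault(guest_id, {})[virtual_page] = frame

class ParavirtualizedGuestOS:
    """A guest kernel modified to use hypercalls rather than privileged instructions."""
    def __init__(self, guest_id, hypervisor):
        self.guest_id = guest_id
        self.hv = hypervisor

    def map_page(self, virtual_page, frame):
        self.hv.hypercall_update_page_table(self.guest_id, virtual_page, frame)

hv = Hypervisor()
guest = ParavirtualizedGuestOS("guest-1", hv)
guest.map_page(virtual_page=0x10, frame=0x2A)
print(hv.page_tables)
```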

Another scheme that falls short of full virtualization is called native virtualization. This is a hardware virtual machine that virtualizes only enough of the hardware to allow an unmodified operating system to run on the VM. The limitation here is that the operating system must be designed to run on the underlying hardware (CPU). Boot Camp for the Macintosh provides the ability to install and run Windows XP on an Intel-based Macintosh. I have not personally tested this product, but the beta appears to provide some layer of hardware virtualization.

Our ultimate goal should be full hardware virtualization. Virtual machines that will allow any operating system to run on any hardware platform will provide the greatest flexibility. Full virtualization will create another paradigm shift. Right now, if I need to recreate a failed server or bring another online in a farm, I need to restore an image of the machine and then use my latest backup to recreate the most recent state of the machine. If I need to accomplish this with a totally different machine, I may need to rebuild the server starting with OS and then layer on all the various application software.

The Beautiful

If I face the same problem but have virtualized all my machines, I can bring another server online in a heartbeat. In an emergency, I could drop my virtualized middleware server on the same physical machine running my presentation software and minimize downtime–a performance hit always is preferable to downtime. I kind of like the picture: I have a 6U rack of blade servers all running hardware-level virtual machines. These guys are accessing my terabyte SAN. I can swap servers in and out of actual physical machines as needs and loads dictate. The only "machine" I need to concern myself with is a virtual one. And if I want to play Dr. Johnson, I can go kick my 200-pound Netfinity server that died last week.
