I recently received an e-mail that contained a survey, which asked something like: Which of the following is going to be the hot topic for IT in 2007? The choices given were: open-source software; server virtualization; Vista; VoIP; wireless. Two things immediately struck me. First, the iPhone didn't make the list, and second, all of the suggested hot topics for 2007 are old; in fact, some of them really are getting very long in the tooth. At least the iPhone could have led to amusing anecdotes concerning Steve Jobs.
I was at a software conference a few years ago, and Jobs was the keynote speaker. As Jobs walked out to the podium dressed in his predictable jeans and black turtleneck, the crowd gasped. The young woman seated next to me exclaimed, "I can't believe it's really him!" I was totally unprepared for the adoration bestowed on Jobs by his fans–after all, wasn't Wozniak the smart one? Evidently not. Jobs was right up there with Elvis (during his heyday), which helps me understand why the popular press makes such a big fuss over–gasp–iTunes on an iPhone ... and the technical intelligentsia yawns. Yes, there is a reason Apple dropped the word Computer from its name. And that brings me back to our questionnaire above, and the only item in the list even remotely interesting and timely: server virtualization.
Virtual servers, server operating systems running as guests on actual physical machines, are showing up more and more in the data center. I recently was working with a client who had just taken delivery of four new servers. Two of them were virtual, and two of them were "real." How your organization deploys and uses virtual servers will have a direct impact on how efficiently you are able to use existing resources, and that will show up directly on the bottom line.
Virtual machines are not new. IBM led the way by creating VM operating systems that made it possible for organizations to justify the enormous cost of mainframes by burning up all those CPU and memory cycles with virtual machines that effectively time-sliced the system. A mainframe virtual machine emulates another computer. The new virtual server machines emulate the server they are running on.
In all fairness, we need to point out there is nothing less real about a virtual server–it just has one more layer of abstraction between it and the machine. A modern server operating system is so far removed from interacting directly with the processors that the additional layer of abstraction provided by the virtual machine is insignificant–at least it is theoretically insignificant. In the PC server world, we have become accustomed to installing server operating systems directly on physical machines. We tend to think of that symbiotic physical server/operating system partnership as a "real machine." For the sake of accuracy and simplicity, I will refer to that traditional hardware/software configuration as a physical server. I will refer to virtual server software installed on a traditional server as a virtual server.
There are scenarios in which virtual servers are preferable to real servers. And there also are scenarios in which physical servers are the better choice. Knowing the difference is the hard part.
I was involved in a project recently in which I needed to evaluate a number of different software configurations. The available literature and technical libraries provided no clear best-case method. We needed to test different database configurations as well as certain specific customizations to the software. It all needed to be done in a multi-server environment. In the days before virtual servers, we would have installed and configured the operating system on three or more physical servers and then imaged each box. We next would have created the configuration we wanted to test, performed our testing, and then created new images of the boxes with the test configurations. Finally, the physical servers would have been restored to their original condition, and a second set of configurations would have been installed and tested.
Since we were using virtual server software, we were able to create a single, clean base-server virtual machine. We then configured three copies of that base server with the first set of configurations. While testing was progressing on that set of virtual servers, a new configuration was set up using three new copies of the base installation. Each virtual server was saved in a single file that was less than 10 GB in size. Those files, each containing a single virtual server, could be loaded on a single physical machine as needed, modified in place, and tested. Modifying and testing changes on a virtual server require far less effort and resource allocation than testing on a physical server. Instead of tying up three physical machines for testing, we were able to create all possible configurations on a single machine.
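For readers who want to picture the mechanics, here is a minimal sketch of that clone-the-base-image workflow in Python. It assumes, as our product happened to, that each guest lives in a single disk-image file; the file names, directory layout, and configuration labels are hypothetical, not taken from the project.

```python
"""Sketch of the clone-the-base-image workflow described above.

Assumptions (illustrative, not from the article): the virtualization
product keeps each guest in a single disk-image file, and cloning a
guest is simply a matter of copying that file.
"""
from pathlib import Path
import shutil

BASE_IMAGE = Path("images/base-w2k3.vhd")      # the single "clean server" image
CONFIGS = {
    "configA": ["web-01", "app-01", "db-01"],   # first set of three servers
    "configB": ["web-02", "app-02", "db-02"],   # second set, cloned while A is still testing
}

def clone_configuration(config_name: str, server_names: list[str]) -> list[Path]:
    """Copy the clean base image once per server in this configuration."""
    clones = []
    for name in server_names:
        target = Path("images") / config_name / f"{name}.vhd"
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(BASE_IMAGE, target)     # every clone starts from the same known-good state
        clones.append(target)
    return clones

if __name__ == "__main__":
    for config, servers in CONFIGS.items():
        paths = clone_configuration(config, servers)
        total_gb = sum(p.stat().st_size for p in paths) / 2**30
        print(f"{config}: {len(paths)} guests cloned, ~{total_gb:.1f} GB on disk")
```

The point of the sketch is simply that a "new server" becomes a file copy rather than an operating-system install, which is what collapsed our build-and-rebuild cycle.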
Not only that, but we could quickly and effortlessly load each configuration, simultaneously or sequentially, for demonstration purposes. As I presented the final solution and recommendations, I was able to do live demonstrations to support my conclusions. To accomplish the same thing with physical servers would have required a couple of racks of servers or a few days spent reloading each configuration. Using virtual servers, I was able to compress a month's worth of testing and work into about 10 days.
Of course, I was not creating a production environment. I was creating virtual servers all running on a single physical server where each virtual server had access to a certain allocation of system resources. In this case, I sacrificed performance for ease of deployment and conservation of available resources. However, if we had desired, any one of those virtual machines could have been loaded onto its own physical server and been allocated the lion's share of available resources. Running a single virtual machine on a high-performance physical machine will not provide quite the performance of a stand-alone physical server, but it isn't far off.
Virtual servers are very useful for testing architectural and software configurations, but unless you are going to deploy virtual machines in production, they have limited use in true QA testing. In the example I have been discussing, we then recreated our chosen setup on physical servers to do load and user testing. The good news was we knew our code was good. We knew the application would work. We thus were able to limit additional testing to those things we didn't know. In this case, the unknown was largely a matter of scale and load.
Now, the story takes a new twist. After the software was ready to go live, we were told we would be required to test using virtual servers once again. This time, though, we would not be controlling or creating the virtual servers. All production servers were located in a data center to which we had no access. We were given two physical servers and two virtual servers and told to build a production system using at least one of the virtual servers. I made the decision to use one of the virtual servers to replace the physical server that was acting as the database server for the application. All I received were admin log-in credentials to the virtual server. I had no knowledge of or access to the physical server.
This is the way we can expect to see virtual servers used in real life. As a way of conserving resources, we will see an increasing use of virtual servers. IT hardware managers will attempt to load as many virtual servers on a single physical server as they can without causing obvious performance issues. And there is a perfectly valid case for doing this. It has been my experience that most application or Web servers operate way below capacity in terms of CPU usage. I rarely see continuous CPU use above 20 percent. But I do see most boxes maxing out on memory. Application servers routinely cruise along using 80 percent or more of allocated memory. Virtual servers are allocated only a certain amount of RAM. After that, they are dependent upon slow and inefficient swap files (or virtual memory!). And the underlying operating systems themselves need to use part of that RAM. Therein lies the Achilles' heel for virtual machines of all kinds. Processor time almost always is available. Quad-processor, dual-core machines are fairly standard and probably provide enough CPU cycles for three or four virtual and real machines. But divide 8 gigabytes of RAM four ways, and you become aware of the limitations of sharing RAM. Virtual memory is hard-drive memory, and hard drives are slow.
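To make that arithmetic concrete, here is a quick back-of-the-envelope calculation in Python. The guest operating-system footprint and the application's working-set size are assumptions chosen for illustration, not measurements from the project.

```python
"""Back-of-the-envelope RAM math for the 8 GB, four-guest case above.
All figures are illustrative assumptions, not measurements."""

HOST_RAM_GB = 8.0
GUESTS = 4
GUEST_OS_GB = 0.5     # assumed footprint of each guest's own operating system
APP_DEMAND_GB = 1.8   # assumed working set of the application inside each guest

per_guest = HOST_RAM_GB / GUESTS          # the naive four-way split
left_for_app = per_guest - GUEST_OS_GB    # what the application actually gets

print(f"RAM per guest: {per_guest:.2f} GB, of which the guest OS takes {GUEST_OS_GB} GB")
print(f"Left for the application: {left_for_app:.2f} GB against a {APP_DEMAND_GB} GB working set")
print(f"Shortfall pushed into the swap file: {max(APP_DEMAND_GB - left_for_app, 0):.2f} GB")
```

Under these assumed numbers, every guest is short by a few hundred megabytes, and that shortfall lands exactly where you don't want it: in a swap file on a slow hard drive.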
Let's return one more time to my real-life story. The virtual server I was to use as a database server presented itself as a single 3.0 GHz, single-core processor with 1024 MB of RAM, at least that was what Windows 2003 Standard Edition reported. I suspect it actually was loaded on a quad-processor, dual-core physical server with 4 GB of RAM. We loaded up the database, and even though we were allocated far less RAM than we needed, initial tests were satisfactory.
Before we could begin load testing, however, the data drive began to show bad sectors, and the database failed completely. We informed the IT server managers of our problem. Their solution was a high-level format of the drive. We repeated this cycle two more times and finally insisted there must be a problem with the actual physical hard drive. IT then told us other virtual servers were running fine on the physical server, so in their opinion, our application must be at fault. They refused to replace the possibly faulty hard drive because that would necessitate taking down the virtual servers that were not experiencing problems with that drive. Pretty scary. In an effort to repurpose and reuse hardware, we now were in a situation where, in my opinion, we had multiple applications running on multiple virtual servers that could not be trusted. There was some underlying issue causing our virtual hard drive, which was, of course, really a physical hard drive, to report failures. The ultimate solution was to attach another drive to our virtual server while leaving the other virtual servers on the data drive that was failing for us. I wonder how long they have before they start getting bad sectors, too.
Instead of running multiple applications on a single physical server, it makes perfect sense to create several virtual servers, each running a single application. If one application comes down ungracefully, it will not affect the other applications. If an application is able to run concurrently with other applications on a physical server, it probably can be expected to run efficiently in a virtual server on a shared physical server. What doesn't make sense is to create virtual servers to replace the original multiapplication physical server and then load several of those virtual servers on the same physical machine; the hardware now has to carry a full operating system for every guest, plus the virtualization layer, on top of the applications it was already serving. That almost certainly will cause performance issues across the board.
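A rough sketch of why that is: every guest brings its own operating-system footprint with it. The figures below are illustrative assumptions, not benchmarks, but they show how quickly the overhead adds up on the same 8 GB box.

```python
"""Illustrative comparison: one multiapplication box versus the same
applications split into single-application guests stacked on the same
hardware. All figures are assumptions, not measurements."""

APP_RAM_GB = 1.5    # assumed footprint of each application
OS_RAM_GB = 0.5     # assumed footprint of one server OS instance
APPS = 4
HOST_RAM_GB = 8.0

# Original box: one operating system serving all four applications.
multi_app_box = OS_RAM_GB + APPS * APP_RAM_GB

# Same applications as four guests on the same box: a host OS plus
# a full guest OS for every application.
stacked_guests = OS_RAM_GB + APPS * (OS_RAM_GB + APP_RAM_GB)

print(f"Original multiapplication server: {multi_app_box:.1f} GB of {HOST_RAM_GB} GB")
print(f"Four single-app guests on the same box: {stacked_guests:.1f} GB of {HOST_RAM_GB} GB")
print(f"Guest OS overhead alone: {APPS * OS_RAM_GB:.1f} GB the original box never needed")
```

Under these assumed numbers, the stacked guests already overrun the host's RAM before the applications have done any real work, which is exactly where the swap-file penalty described above kicks in.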
Virtual servers provide us with the ability to create portable, easily loaded, and easily backed-up software systems that quickly can be deployed and redeployed on demand. I hope these are the reasons for using virtual servers. If our motivation is only to maximize hardware use by dumping as many virtual machines on each physical machine as we can, then server virtualization indeed will be a hot topic in 2007–but for the wrong reasons.