Service-oriented architecture (SOA), Web-oriented architecture (WOA), Web services, Web application programming interfaces (APIs): why are we so concerned with these things? Why do we keep defining new ways to describe interoperability between disparate and like computing systems? Why do we need to create standards bodies such as the W3C or ACORD? The simple answer is that we need to ensure systems can interact in a reasonable and predictable manner. But the underlying reason they don't play together nicely is that information systems have become too complex. That complexity seems to be increasing exponentially and soon may reach a point where only very large organizations with very large IT budgets and substantial staffs can even hope to cope with it.

Complexity has increased at the application level, the administration level, the infrastructure level, and the programming level. As business applications become more feature-rich, they also become more difficult to use. Windows Vista and Office 2007 have frustrated and disappointed thousands of users, and many organizations have yet to upgrade to these now no longer new software products.

Applications that should be based on standards, such as Web browsers, refuse to implement those standards in the same way. The World Wide Web Consortium (W3C) publishes standards every vendor should follow for Web interactions. Why is it, then, that when I design a Web application I know in advance it will behave differently in Internet Explorer (IE) 6, IE 7, and Firefox (which doesn't seem to change a lot between versions)?

I recently discovered Safari installed on my personal Vista machine, apparently because I wasn't paying attention when I upgraded my iTunes software. I really don't want to start supporting a fourth or fifth Web browser. Applications designed to interact with each other often interact in unexpected and anomalous ways, and unexpected behavior is the quickest and easiest route I know to losing end-user acceptance of any new product.

It's All About Productivity

Software applications whose intended purpose is to increase business productivity should do that. Green screens were (and probably still are) adequate for data entry and data retrieval. That does not mean we should return to a world of dumb terminals or terminal emulators, but it does mean we should not make the process more complex or difficult.

I am pounding this article out on the latest and greatest office productivity suite available, but I am not using any features I didn't have available in WordPerfect 5.1 running on a 486 DOS box.

Then, when I am finished, I need to save it back to a previous version's file format so my editor can read it. Don't get me wrong: I love all the features available to me in this office suite, but I doubt I really need (or will ever use) all those bells and whistles.

Software Administration

Administration of software systems also has increased in difficulty. With more services available from a messaging system or collaboration system or policy system, the administrator of that system has greater responsibilities. Even when the intricacies of a particular system are learned and understood, the task of integrating it with everything else can be overwhelming. I have seen organizations with 500 users and an IT staff of a half-dozen individuals, and these staff members need to maintain all the desktops; manage all the infrastructure; support, manage, and maintain multiple line-of-business (LOB) systems; and manage multiple database systems, all while they are tasked with bringing new software and systems online. It simply is becoming too much. Nothing gets done correctly because the people managing multiple systems probably aren't even certified in them, and there never is enough time to provide adequate oversight of existing systems. Small wonder Software as a Service and hosted systems are getting so much attention these days.

Coding to Complex Systems

When I was serious about writing code (back in the C/C++ days), it was possible to do almost anything with a system if you understood the basics of processor technology and the operating system you were working on. If a published API for an application or operating system didn't provide the functionality you needed, it was easy enough to drop down to the system level and do what you needed to do.

Web services have changed all that. APIs don't always work the way you expect them to. Programmers spend half their time doing Google searches and reading blogs to figure out how to "really" interact with an API, and even then a lot of it is just hit and miss. Complex systems throw so many variables into the mix that software manufacturers' code samples are virtually meaningless in the real world.

I love to hook up a vendor demo application to its demo database and watch the magic. Then you try to implement that same magic in the real world, and you discover how many shortcuts the vendor used in its demo. I recently was configuring an application to perform some interaction with a customer's LDAP system. Even though I was following the vendor instructions word for word, the interaction was not happening. After a few hours of soul (and Web) searching, I discovered an undocumented trick (or missing step) that caused the application to start working correctly. Now I wonder whether that trick was really necessary, or was it only necessary because of some other problem with the systems I was working on? Was it a step that simply was left out of the vendor's instructions because the "missing step" never is missing in the vendor's test environments? More than likely, we never will know. Far too often we get a system working and, while we think we have documented everything, we simply accept the fact it is working and leave it at that.
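
When an integration like that goes silent, one thing that helps is proving the directory connection works outside the vendor's application entirely. Below is a minimal sketch of that kind of independent check, assuming the third-party Python ldap3 library; the host, bind account, and password are placeholders, not details from any real engagement.

    # Verify the LDAP bind independently of the vendor application so
    # directory problems can be separated from application problems.
    # All connection details below are hypothetical placeholders.
    from ldap3 import ALL, Connection, Server

    server = Server("ldap.example.com", port=389, get_info=ALL)
    conn = Connection(
        server,
        user="CN=svc-app,OU=Service Accounts,DC=example,DC=com",
        password="changeit",
    )

    if conn.bind():
        print("Bind succeeded; the directory side looks healthy.")
        conn.unbind()
    else:
        print("Bind failed:", conn.result)

If a bare bind like this succeeds while the application still fails, at least the search has been narrowed to the application's side of the fence.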

Standards

I was involved in a very successful project that included multiple software vendors providing parts of a complex solution. The system was designed using what appeared to be best practices for a service-oriented architecture that used Web services for interoperability. Problems arose because every vendor interpreted how to expose and consume Web services differently.

The project involved integrating fat-client agent systems with Web-enabled agency and rating systems as well as integration with iSeries back-end policy administration systems. There were at least two components of the system that essentially were black-box systems.

One system was designed to input ABC and output XYZ, but we had no control over how the output was delivered. The vendor apparently was not concerned at all with SOA. If we needed an HTTP post for a particular payload, we had to create that post outside of the system.
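
To give a flavor of what "outside of the system" meant in practice, here is a minimal sketch, in standard-library Python, of the sort of wrapper we had to bolt on; the endpoint URL and the payload are invented for illustration.

    # Deliver a black-box system's output ourselves as an HTTP POST,
    # since the system provides no hook for it. The URL and payload
    # are illustrative placeholders.
    import urllib.request

    payload = b"<Policy><PolicyNumber>XYZ-123</PolicyNumber></Policy>"
    request = urllib.request.Request(
        "https://ratings.example.com/intake",
        data=payload,
        headers={"Content-Type": "text/xml"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        print(response.status, response.read()[:200])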

Another subsystem also was black-box-like. It did what it was supposed to do, but configuring it to do so was a complex and daunting task that could be accomplished only by an individual who had deep experience with that subsystem. Black boxes do not make good software design. During the development process, it never was readily apparent exactly where a breakdown was occurring even though we had extensive logging built into the environment.

The complexity of the system and the lack of a common method of interoperability made troubleshooting difficult, and this was compounded by our inability to see into the black-box systems. Additionally, any change in the environment, from firewall settings to DNS configurations to LDAP changes, often had unpredicted results.
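
One discipline that would have saved us hours was checking the plumbing before suspecting any one vendor's component. A smoke test along these lines, sketched in standard-library Python with hypothetical host names, confirms DNS resolution and TCP reachability for each dependency before anyone opens an application log.

    # Quick environment smoke test: confirm DNS resolution and TCP
    # reachability for each dependency before debugging application
    # logic. The host/port pairs are hypothetical.
    import socket

    DEPENDENCIES = [
        ("ldap.example.com", 389),
        ("ratings.example.com", 443),
        ("iseries.example.com", 446),
    ]

    for host, port in DEPENDENCIES:
        try:
            address = socket.getaddrinfo(host, port)[0][4][0]
            with socket.create_connection((host, port), timeout=5):
                print(f"OK    {host}:{port} ({address})")
        except OSError as exc:
            print(f"FAIL  {host}:{port} - {exc}")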

Ideally, we would have mapped out a full integration scheme based only upon network service endpoints, and each vendor then would have been responsible for working only to those endpoints. Unfortunately, we were working in a real-world environment (as opposed to a perfect SOA world).

SOA is a powerful methodology, but like all systems, it is only as strong as its weakest link. If some participants refuse to play by SOA rules, everyone else suffers and has to pick up the slack. In this same project, we were dealing with XML payloads that were based on ACORD XML but were not exactly ACORD. That meant someone had to spend significant effort mapping the XML to "real" data. That effort did not put the project at risk, but it raised the question of why we even have standards if we aren't going to use them the way they are intended.
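
The mapping work itself is mundane but unavoidable. A toy sketch of what it amounts to is below; the element names are invented stand-ins, not actual ACORD tags.

    # Map vendor XML that is "based on ACORD but not exactly ACORD"
    # onto the field names a downstream system expects. Element and
    # field names are invented for illustration.
    import xml.etree.ElementTree as ET

    VENDOR_TO_STANDARD = {
        "PolNum": "PolicyNumber",
        "EffDt": "EffectiveDate",
        "InsName": "InsuredName",
    }

    vendor_doc = ET.fromstring(
        "<Quote><PolNum>QX-1001</PolNum><EffDt>2008-06-01</EffDt>"
        "<InsName>Acme Corp</InsName></Quote>"
    )

    record = {
        standard: vendor_doc.findtext(vendor)
        for vendor, standard in VENDOR_TO_STANDARD.items()
    }
    print(record)  # {'PolicyNumber': 'QX-1001', 'EffectiveDate': ...}

Every one of those mappings is a small cost, but across dozens of elements and several vendors it adds up to exactly the effort standards were supposed to eliminate.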

The Rest of the Story

The real problem with complex systems is that we have only an illusion of control. Earlier I discussed a little software glitch I "solved" by changing a setting in the system (it actually was a system account permission I changed). That particular fix was on a 64-bit Intel system on a physical server. I do not remember ever needing to set that same permission level before, yet I am not 100 percent certain I was working in an identical environment. Maybe the last time I implemented that feature was in a 32-bit virtual machine running on a 64-bit physical server (or a multitude of other possibilities). In fact, I have no idea why it now works.

Which brings me to the root of the whole thing: complex systems can exhibit behavior that is not obvious from analysis or understanding of the behavior of their component subsystems. There is a field of science that studies complex systems, though generally that science is concerned with systems such as climates, economies, or natural environments. Complex systems often behave in a nonlinear fashion; there is not necessarily a direct progression from A to B to C. Nonlinear behavior definitely is not a desirable property of an information technology system. In fact, consistently demonstrable nonlinear behavior should be grounds for rejection of a business software system. On the other hand, it might provide the basis for creating real artificial intelligence; nonlinear thinking is one hallmark of intelligence.
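
For a concrete feel for nonlinearity, consider the logistic map, a textbook complex-systems toy. Two starting values that agree to five decimal places diverge completely within a few dozen iterations; the sketch below illustrates the principle and is not, of course, a model of any business system.

    # The logistic map x' = r * x * (1 - x): a simple nonlinear rule
    # under which nearby inputs diverge wildly (at r = 4.0 the map is
    # chaotic). Purely illustrative.
    def logistic(x, r=4.0, steps=40):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    a, b = 0.200000, 0.200001  # inputs differing in the sixth decimal place
    print(logistic(a), logistic(b))  # the outputs bear no resemblance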

Do You Know Where Your Bits Have Been?

I have seen too many instances where software behaves in unexpected ways, throws unexpected errors, and produces unpredicted results. You can say this is just the result of poorly designed systems and badly written code. I think it is more likely the result of systems with millions of lines of interacting code from the processor to the operating system to the application to the I/O devices to the transport layer and so on. Not all of that code was written as a cohesive whole, nor was it all meant specifically to interact in a consistent way. How many "fixes" are there in an operating system that appear to make it work correctly but may be nothing more than a lucky kludge? Today, we are dealing with extremely complex computing systems that promise to become even more complex. As that complexity increases, the reasonable expectation of predictable results decreases. It may be all about 0s and 1s, but unless you can follow that train all the way back to its origin, you never can have 100 percent confidence in the result. Should this be a cause for alarm? Probably not right now. Systems likely are predictable 99.999 percent of the time. It is that other 1/1000 of a percent that bothers me.
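
To put that other 1/1000 of a percent in perspective, the arithmetic is sobering even at five nines of predictability. The transaction volume below is hypothetical.

    # Back-of-the-envelope: how many anomalies does 99.999 percent
    # predictability still allow? The daily volume is hypothetical.
    predictable = 0.99999
    daily_transactions = 1_000_000

    anomalies = daily_transactions * (1 - predictable)
    print(f"about {anomalies:.0f} unexpected results per day")

Ten surprises a day out of a million transactions is a small number, right up until one of them touches a policy, a payment, or a rate.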

Please address comments, complaints, and suggestions to the author at [email protected].
