The October 2009 edition of “Shop Talk: A Nightmare Before Halloween” took a somewhat cavalier look back at a project my company was involved in that went very wrong. My intent was to amuse as well as illustrate, and in doing so, I was unfair to the client involved. My article prompted a thoughtful reply from the key client-side manager. The response, which makes several good points that are absent from my article, is quoted extensively below.
This Client's Nightmare
In his own words: It all began in the year 2000, when the country struggled with dangling chads and you could keep your shoes on at airport security. Insurers were downright desperate to cast Y2K aside and jump on the Internet. The cost-benefit analysis for system replacement was contained in one word: survival. The company I worked for hired one consultant to help guide the RFP process, then another to audit the result. Each endorsed our process and our selection after we executed the advice they provided.
I remember expressing to senior management how meager the options were, but management members, pressed with the knowledge of competitors jumping on the new-development bandwagon, strongly suggested we make a choice. It would be easy to criticize them, but few of us experience the demands of running the company. I admired their willingness to dare, to take a chance. I remember the guidance of our CEO: “This is not a time for evolution but for revolution.”
The project started to unravel within its first year, mostly because the software we purchased was not ready for prime time. Not even a revolution would get it to perform unattended renewals or to grasp the user-interface complexities of independent agents. But which “new” system of 10 years ago did any of that? I could point blame at the software vendor, which delivered a product that didn't have all the bells and whistles, which pocketed our dollars and venture-capital dollars, and which promised missing functionality in a “future release.” As cynical as I am, I don't believe the vendor was deceptive or misleading. It just didn't know what it was getting into any more than my employer or I did.
A year into the project, we had dug an expensive grave for ourselves. The memory of my monthly burn rate gives me the chills. When you're in such a hole, isn't it natural to hope for a glimmer of light? To trust what isn't always true? Well, we did all of that. It took us too long to discover the error of our ways, but we did, and we learned from it. The years following that debacle brought much automation success to our company. We learned, and practiced, rigorous software-search methods, how to plan large projects with more discipline, and how to prevent disaster before we were buried in invoices.
Insurers such as the one I worked for took a chance on new development. They've invested heavily in the emerging software marketplace and in ancillary companies that install new software and rightly rely on the success of software vendors. Without the insurance industry making these bold investments, or even foolish ones, the progress we've seen over the years in the P&C software space would be nonexistent.
In his August 2009 column George writes: “The newer vendors in our space are good software developers that write high-quality, configurable software.” His statement holds true largely due to some insurer paying the freight. I take nothing away from creative vendors that have spent their savings and mortgaged their homes to develop good software. But the investment of insurers in this space, even in failed projects, has not been a “waste.” It takes a certain amount of courage to help blaze a trail–and a few failures to blaze the right one.
Gauging the Risk
To echo one of the key points made above, the vendor world has changed substantially and for the better in recent years. Today, the carrier community enjoys software options for core systems replacement that are functionally and technically superior and are sold by better informed and more partnership-oriented vendors.
However, many carriers remain wary. As noted in December's “Shop Talk,” many carriers have opted for the legacy wrapping option (lipstick on the pig) over the legacy replacement option (rip and replace). Recently, I have had a series of conversations with IT executives who focused precisely on this strategic choice. Those who leaned toward the “lipstick” approach repeated similar arguments that boil down to the following: Rip and replace is too risky/costly/lengthy; most of the business benefit is in a sales front end and in flexible pricing (rating); and we can maintain the legacy system for as long as we need to.
Let's take a look at these arguments:
1. Rip and replace is too risky/costly/lengthy: To say it's “too” risky is to assume there is a less-risky alternative; in this case, maintaining the legacy system indefinitely. Jumping off a cliff into a river may be too risky if you have the option of staying safely on the bluff, but if someone is about to shoot you, the option to jump is no longer too risky, even though the inherent risk of the jump has not changed. What has changed is the relative risk.
2. Most of the business benefit is in a sales front end and in flexible pricing (rating): This is true only until the carrier needs to change a product or introduce a new one, enter a new jurisdiction, respond to a regulatory requirement, or absorb a newly acquired company. And to the extent the argument is true, it undermines any future cost-benefit case for replacing the legacy system, because the easiest benefits will already have been claimed.
3. Stability/safety of legacy systems: I have had IT executives tell me, “We can keep the legacy system running for as long as we need to.” Really? This reminds me of skating across a frozen lake. In some places, the ice is two feet thick. In others, it is six inches thick (or even less). But without drilling ice cores and measuring, you don't know how thick the ice is; it looks the same on the surface. Similarly with legacy systems, the “ice” may be two feet thick today, but it quickly might become only six inches thick in the future. Without drilling down into some detail, this may not be obvious. Triggering events that can destabilize a legacy system and dramatically “thin the ice” include:
- Demographics: Most legacy systems are written in COBOL. Build a demographic chart of how many programmers you still have who can maintain these systems and when they will retire, and you have one key measure of when the ice will thin beneath your feet. Bear in mind these key resources may take early retirement. And don't be gulled by the “outsource your maintenance” sirens. Anyone who has tried it knows no third party can realistically become expert in your core legacy systems; they are too complex and too badly written. Also, bear in mind there are no newly minted, bright, and motivated COBOL programmers coming out of college.
- Legal or regulatory changes: Regulatory and legal change is a known but unpredictable constant in P&C insurance. I have argued for years that we may not know what the change will be or when it will arrive, but change is coming, so the more flexible and configurable the core systems, the better able the carrier is to respond. There are, however, two things we do know about regulatory changes: They are not optional, and they come with a non-negotiable implementation date. Given the potential changes that may be mandated by our activist federal government, who knows what may be on the horizon?
- Business changes: At least business changes, such as new products and new markets, are somewhat predictable and visible through corporate planning and communications channels. Their implementation dates also may be more flexible than those for regulatory changes, but who wants to tell the CEO his or her pet project will take an extra year because of the difficulty of changing legacy systems?
- Changes in software support: Many of the application systems we are referring to are “off support,” as are some of the underlying layered products, such as language compilers. The response from the IT guys is, “But we do all our own application system support, and the compiler has been running for 30 years and is bullet-proof.”
I anticipate that when the ice finally breaks and a major failure occurs in one of these legacy environments, it will be because of a combination of triggering events rather than any single one of the above. However, every “lipstick” IT department should maintain a calendar of events that projects the various demographic, regulatory, business, and software-related developments ahead and shows how each one thins the ice and raises the legacy risk profile. In the future, it may be just such a risk profile that justifies the cost and risk of replacement over the cost and risk of doing nothing.
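The risk calendar described above can be sketched as a simple script. Everything here, the event names, dates, and the 1-to-5 weighting scheme, is invented for illustration; a real carrier would substitute its own demographic, regulatory, business, and software-support milestones and its own scoring rules:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical example only: the events, dates, and weights below are
# made up to show the shape of a "thinning ice" risk calendar.

@dataclass
class RiskEvent:
    when: date
    kind: str    # "demographic", "regulatory", "business", or "software"
    weight: int  # how much this event thins the ice (1 = minor, 5 = severe)
    note: str

events = [
    RiskEvent(date(2026, 6, 30), "demographic", 4, "Last senior COBOL maintainer eligible to retire"),
    RiskEvent(date(2025, 1, 1),  "regulatory",  3, "Assumed new state filing-format mandate"),
    RiskEvent(date(2027, 3, 1),  "software",    5, "Compiler vendor ends extended support"),
    RiskEvent(date(2026, 1, 1),  "business",    2, "Planned entry into a new jurisdiction"),
]

def risk_profile(events):
    """Return events in date order, each with the cumulative risk score to date."""
    total = 0
    profile = []
    for e in sorted(events, key=lambda e: e.when):
        total += e.weight
        profile.append((e.when, e.kind, total, e.note))
    return profile

for when, kind, total, note in risk_profile(events):
    print(f"{when}  {kind:<11} cumulative risk = {total:>2}  {note}")
```

The point of the cumulative score is the column's trend, not its absolute value: it makes visible the moment when several individually survivable events stack up and the ice becomes thin enough to justify replacement.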
George Grieve is CEO of CastleBay Consulting. Previously a CIO and still an acting consultant, he has spent much of the past 25 years with property/casualty insurers, assisting them in the search, selection, negotiation, and implementation of mission-critical, core insurance processing systems. He can be reached at 512-329-2619.
The content of “Shop Talk” is the responsibility of the author. Views and opinions are those of the author and do not necessarily represent those of Tech Decisions.