Ensuring data accuracy to avoid costly errors and delays

Proper software testing avoids delays and errors. It also allows firms to harness the power of big data in a cost-effective manner.

In today’s world, the value of data continues to grow throughout the insurance industry. Insurers that have the resources to invest in advanced enterprise or policy administration systems increase their ability to use modern analytics, share data seamlessly across multiple systems and help ensure data accuracy.

While larger insurers usually have these vast resources, many small-to-mid-sized carriers, program administrators and managing general agents (MGAs) do not; instead, they stick with older “legacy” systems. These systems limit their ability to leverage the power of new technologies and leave them vulnerable to data accuracy risks, especially when they introduce new products, upgrade software or make other system changes.

In short, if new software doesn’t communicate with their legacy system properly, insurers can face delays and data errors that head downstream to their agents, brokers and insureds. And those unseen errors can snowball.

Take, for example, claims processing. It’s the hallmark of customer service for any size carrier or MGA. So what happens when a rating is incorrect and testing doesn’t identify the error? It doesn’t matter whether the erroneous rate was entered manually or resulted from a software upgrade to another part of the system. What matters is that it exists. The insured might receive a policy with a rate that’s too low, and once that happens, the insurer faces a hard choice: absorb the financial burden of the error or face regulatory penalties, which can add up quickly and bring reputational risk with them.
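To see how quickly a small rating error compounds, consider a rough back-of-the-envelope sketch. The rates, insured values and policy counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical illustration: a rate keyed as 0.82 instead of the filed 0.92
# per $100 of insured value, applied across a small book of homeowners policies.

FILED_RATE = 0.92      # filed rate per $100 of insured value (assumed)
ERRONEOUS_RATE = 0.82  # rate actually loaded into the system (assumed)

policies = [290_000, 410_000, 355_000, 520_000]  # insured values (made up)

shortfall = sum(v / 100 * (FILED_RATE - ERRONEOUS_RATE) for v in policies)
print(f"Premium shortfall on {len(policies)} policies: ${shortfall:,.2f}")
# A $0.10 error per $100 leaks roughly $1,575 on just four policies; across
# thousands of policies, the financial and regulatory exposure scales with it.
```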

System replacement alternatives

To make the most of their data — and ensure data accuracy at every step of the process — carriers and MGAs don’t necessarily need a complete (and expensive) overhaul of their legacy systems. There are software and automation tools that can streamline business processes and help systems talk to each other. But firms do need two types of testing support to make sure upgrades and other system changes do not introduce errors.

The first, user acceptance testing (UAT), should be conducted any time a firm rolls out a new product, upgrades its software or makes other system changes. When done correctly, user acceptance testing will verify the accuracy of all policy, rate and claims data.
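In practice, a UAT pass can be as simple as re-quoting a set of known policies and comparing the system’s output to independently calculated values. The sketch below assumes a hypothetical `quote_premium` entry point and made-up test cases; a real suite would use the carrier’s own rating calls and filed rates:

```python
# Minimal UAT-style check (sketch): compare premiums produced by the
# upgraded system against independently calculated expected values.
# `quote_premium` stands in for whatever rating call the system exposes;
# the forms, values, premiums and tolerance here are assumptions.

UAT_CASES = [
    # (policy form, insured value, expected premium)
    ("HO-3", 290_000, 2_668.00),
    ("HO-3", 410_000, 3_772.00),
    ("DP-1", 150_000, 1_125.00),
]

def quote_premium(form: str, value: int) -> float:
    """Placeholder for the call into the rating engine under test."""
    raise NotImplementedError

def run_uat() -> None:
    for form, value, expected in UAT_CASES:
        actual = quote_premium(form, value)
        assert abs(actual - expected) < 0.01, (
            f"{form} @ ${value:,}: expected ${expected:,.2f}, got ${actual:,.2f}"
        )
```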

But testing just that new software, policy or form change isn’t enough. When developers make a system change in one place, they run the risk of breaking something else in another. A second type of testing — regression testing — exercises the entire system and confirms those adjustments haven’t inadvertently affected other programs. And it can be automated to continually ensure accuracy.
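One common way to automate this is a golden-baseline check: re-rate a saved book of policies after every change and fail the run if any premium drifts. A minimal pytest-style sketch, with a hypothetical `rerate` call and file layout, might look like this:

```python
# Sketch of an automated regression check: re-rate a baseline book of
# policies after every system change and flag any premium that moved.
# The file name and record layout are assumptions for illustration.
import csv

def rerate(policy: dict) -> float:
    """Placeholder for a call into the current rating engine."""
    raise NotImplementedError

def test_premiums_unchanged_since_baseline():
    with open("baseline_premiums.csv", newline="") as f:
        for row in csv.DictReader(f):
            baseline = float(row["premium"])
            current = rerate(row)
            assert abs(current - baseline) < 0.01, (
                f"Policy {row['policy_id']}: premium drifted from "
                f"{baseline:.2f} to {current:.2f} after the change"
            )
```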

Proper regression testing doesn’t just catch manual errors. It also helps to perform data scrubbing. That’s the process used to make sure, for example, a home insured for $290,000 is really worth that much or that an insured claiming to have no prior losses truly never filed a claim. While some of the newest data analytics systems include calculators and sophisticated data-scrubbing technologies to ensure data accuracy, these are often out of reach for small-to-midsize carriers and MGAs. This makes regression testing even more important.
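Even without a high-end analytics platform, some of that scrubbing can be scripted. The sketch below assumes hypothetical `lookup_valuation` and `lookup_claims` functions standing in for whatever valuation and loss-history sources a carrier already uses:

```python
# Data-scrubbing sketch: cross-check what an application says against
# independent sources. Both lookup functions below are placeholder stubs
# to be wired to a carrier's real valuation and claims-history services.

def lookup_valuation(address: str) -> float:
    """Placeholder: independent estimate of the property's value."""
    raise NotImplementedError

def lookup_claims(insured_id: str) -> list:
    """Placeholder: prior claims on record for this insured."""
    raise NotImplementedError

def scrub(application: dict) -> list[str]:
    issues = []
    estimate = lookup_valuation(application["address"])
    # Flag insured values more than 20% away from the independent estimate.
    if abs(application["insured_value"] - estimate) / estimate > 0.20:
        issues.append(f"Insured value {application['insured_value']:,} "
                      f"differs from estimated {estimate:,.0f}")
    # An applicant claiming no prior losses should have an empty history.
    if application["prior_losses"] == 0 and lookup_claims(application["id"]):
        issues.append("Applicant reported no prior losses, but claims exist")
    return issues
```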

Avoiding testing shortcuts

For the most part, small-to-midsize carriers and MGAs know the importance of testing. However, because they have smaller staffs and face time and cost pressures, they sometimes take one of three shortcuts that lead to inaccurate testing.

First, they may assume their software vendor is providing some level of testing. And while a vendor may provide user acceptance testing, it’s unlikely that it can cross-check rates, forms, policies, reporting and claims decisions — the most critical parts of the business — leaving a firm vulnerable to data errors.

Second, they might decide to test in-house. But that, too, brings limitations. Few small-to-midsize insurers have large IT staffs, so they may ask an underwriter to help with testing. That underwriter will know what to look for from a business perspective but won’t have the programming knowledge needed to see whether a change in one system affects another. This is especially true for MGAs, which maintain multiple systems based on the lines of business or carriers they work with.

Third, they might hire a third party to conduct testing. But problems arise if that vendor lacks domain knowledge: in this case, an understanding of the P&C insurance industry. Without it, the vendor may not be able to spot some errors or perform accurate cross-checks.

When a third-party testing firm is doing the testing, insurers should confirm that it has this P&C domain knowledge and can accurately cross-check rates, forms, policies, reporting and claims decisions.

Data is currency, and access to highly accurate, usable data gives carriers and MGAs the power to deliver outstanding customer service. Proper software testing not only provides peace of mind but also allows firms to harness the power of big data and keep up with ever-changing technology in a cost-effective manner.

Doug Vatter (dougv@westpointuw.com) is the president of West Point Insurance Services.
