In search of insurance data quality

In the world of insurance, data is king. Here are six steps and best practices for high-quality data management.

The accuracy and completeness of insurance data can make all the difference between a successful business and a disastrous one. As such, data quality management is critical to any insurance company’s operations.

Insurance companies deal with large amounts of data daily. High data quality is critical for several insurance processes, including the following:

Underwriting: Insurance underwriters rely on accurate data to assess the risk of insuring a particular individual or business. The accuracy of this data can affect the premiums charged, the policy terms, and the overall profitability of the insurance company.

Unfortunately, most insurers struggle to obtain consistently accurate underwriting data. At times, agents and agencies provide inaccurate information about a risk being priced. This is especially problematic on smaller policies, where taking the time to “dig deep” for accurate data isn’t always seen as worthwhile.

Insurers today have many ways to validate submitted policy data through third-party data sources, but doing so adds expense and time, and third-party data isn’t always reliable. Carriers that rely on independent agents and brokers must balance their desire for accurate data against the “hassle factor” it creates for the agency or broker, especially when time is of the essence in providing a quote; agents and brokers have other carriers with which they can place business.

Claims Processing: Claims adjusters need to verify the accuracy of data provided by claimants and policyholders to ensure that claims are legitimate. Accurate data helps speed up the claims process, prevent fraud, and reduce claims costs. Accurate data is also the foundation required for claims automation.

Automation is the “Holy Grail” of innovation that most insurers are pursuing but few have succeeded in implementing. As in underwriting, the parties that provide data about a claim are not always driven by a standard of completeness and accuracy. Data received from multiple sources may not always be in sync, and adjusters, often juggling competing priorities and high caseloads, may focus more on the claim outcome than on ongoing data quality.

Claims automation uses structured data from the claim system and unstructured data extracted from medical bills, treatment summaries, claim notes and other documents to determine the proper path for each claim. It also reevaluates its last decision every time new information is added to the claim, as the sketch below illustrates. Doing this with data that isn’t clean and accurate is a recipe for disaster.
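To make that reevaluation idea concrete, here is a deliberately simplified Python sketch of routing logic that recomputes a claim’s path from its current state whenever new information arrives. The fields, thresholds and path names are invented for illustration; this is not CLARA Analytics’ actual method.

```python
# Hypothetical claims-routing sketch: the route is recomputed from the
# claim's full current state every time new information lands on it.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    reserve_estimate: float = 0.0    # structured data from the claim system
    attorney_involved: bool = False  # flag extracted from notes or documents

def route(claim: Claim) -> str:
    """Choose a handling path from the claim's current state."""
    if claim.attorney_involved or claim.reserve_estimate > 50_000:
        return "senior-adjuster"
    if claim.reserve_estimate > 5_000:
        return "standard-adjuster"
    return "straight-through"        # candidate for full automation

claim = Claim("C-1001", reserve_estimate=2_000)
print(route(claim))                  # straight-through

# New information is added to the claim; the last decision is reevaluated.
claim.attorney_involved = True
print(route(claim))                  # senior-adjuster
```

If the reserve estimate or the attorney flag is wrong in the first place, the claim is routed down the wrong path with full confidence, which is exactly the disaster the paragraph above warns about.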

Predictive Analytics: Today, every insurer uses some form of predictive analytics in its operations. Underwriting and claims are the two most prevalent areas deploying predictive analytics, but sales, process management, legal and actuarial functions are increasingly using it as well. The artificial intelligence (AI) behind predictive analytics leverages historical and current data to bring automated insights to insurance processes and the people running them. The adage “garbage in, garbage out” applies nowhere more strongly than in predictive analytics: effective models can only be developed with quality data. If data quality is poor, models will have significant error ranges and bias built into them, outcomes that hinder performance and make it hard for practitioners to rely on them. The “lift” AI promises can be illusory without quality data.

Compliance: Insurance companies must comply with various regulatory requirements, including data privacy regulations. High-quality data helps insurers meet these requirements and avoid potential fines and legal penalties.

Customer Service: High-quality data helps insurers provide better customer service. Customer service representatives can access accurate and complete data about a customer’s policy, claims and other relevant information, making it easier to respond promptly and appropriately to customer inquiries.

So, what does it take to create and manage high-quality data? Here are the six steps and best practices required to manage data quality effectively:

Step 1: Define data quality standards

The first step in managing data quality is defining the standards for high-quality data. Insurance companies must establish guidelines addressing data accuracy, completeness, consistency, timeliness and relevance. These guidelines should be clearly defined and communicated to all employees involved with data.
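To make such standards enforceable rather than aspirational, many teams capture them in machine-readable form. Below is a minimal Python sketch of what that could look like; the field names, patterns and limits are hypothetical examples, not an industry standard.

```python
# Hypothetical data quality standards expressed as declarative rules.
# Field names, patterns and limits are invented for illustration only.
POLICY_DATA_STANDARDS = {
    "policy_number":  {"required": True,  "pattern": r"^[A-Z]{2}\d{8}$"},
    "annual_premium": {"required": True,  "min": 0},
    "effective_date": {"required": True,  "format": "YYYY-MM-DD"},
    "industry_code":  {"required": False, "pattern": r"^\d{4}$"},
}
```

A rule set like this can be communicated to every team that touches the data and consumed directly by the validation tooling described in Step 3.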

Step 2: Data collection and entry processes

The next step in managing data quality is to ensure that data is collected and entered accurately. Insurance companies use various methods to collect data, including online forms, mobile apps, and paper forms. It’s essential to ensure that the data is entered correctly the first time to avoid costly errors down the line. Systems need to prevent data entry employees from skipping items or entering the same value for a field repeatedly.
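As one illustration, entry-time checks can be coded directly into the intake flow. The sketch below assumes hypothetical field names and uses a simple heuristic for flagging a value that is being reused suspiciously often:

```python
# Hypothetical entry-time checks: reject records that skip required items
# and flag a field value that keeps being keyed in over and over.
from collections import Counter

REQUIRED = ["policy_number", "annual_premium", "effective_date"]

def check_entry(record, history, max_repeats=20):
    """Return a list of problems; an empty list means the entry may proceed."""
    problems = [f"missing required field: {f}"
                for f in REQUIRED if not record.get(f)]
    # Flag a premium value that has already been entered suspiciously often.
    premium = record.get("annual_premium")
    if premium is not None and history[premium] >= max_repeats:
        problems.append(f"annual_premium {premium} entered "
                        f"{history[premium]} times before; please verify")
    return problems

history = Counter({1000.0: 25})  # counts of previously entered premium values
print(check_entry({"policy_number": "AB12345678",
                   "annual_premium": 1000.0}, history))
# ['missing required field: effective_date',
#  'annual_premium 1000.0 entered 25 times before; please verify']
```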

Step 3: Data validation

Insurance companies use data validation tools to check for errors and inconsistencies after the data has been collected and entered. These tools check the data against the established standards and flag any data that doesn’t meet the requirements.
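A validation pass of this kind can be as simple as looping over the rule set and collecting flags. The following self-contained Python sketch mirrors the hypothetical standards dictionary from Step 1:

```python
# Hypothetical validation pass: check each record against the standards
# and flag anything out of spec. Rules and fields are invented examples.
import re

STANDARDS = {
    "policy_number":  {"required": True, "pattern": r"^[A-Z]{2}\d{8}$"},
    "annual_premium": {"required": True, "min": 0},
}

def validate(record):
    flags = []
    for name, rules in STANDARDS.items():
        value = record.get(name)
        if value in (None, ""):
            if rules.get("required"):
                flags.append(f"{name}: missing")
            continue
        if "pattern" in rules and not re.fullmatch(rules["pattern"], str(value)):
            flags.append(f"{name}: does not match expected format")
        if "min" in rules and float(value) < rules["min"]:
            flags.append(f"{name}: below minimum {rules['min']}")
    return flags

print(validate({"policy_number": "ab123", "annual_premium": -50}))
# ['policy_number: does not match expected format', 'annual_premium: below minimum 0']
```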

Step 4: Data cleansing

Once errors and inconsistencies have been identified, the next step is to clean up the data. This process involves removing or correcting data that doesn’t meet the established standards. It can be time-consuming, but it’s critical to ensuring that the data is accurate and reliable. Sustaining the effort requires creating and nurturing a culture of engagement and mutual support among stakeholders: data quality is a participation sport. Ultimately, how seriously those on the front lines and data councils, and those they report to, take responsibility for data quality will determine the success or failure of analytics and other initiatives that rely on data.
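Mechanically, cleansing often splits into corrections that are safe to automate and records that must be quarantined for a human decision. A minimal sketch, with invented field names:

```python
# Hypothetical cleansing pass: auto-correct what is safely correctable and
# quarantine the rest for human review rather than guessing at a fix.
def cleanse(records):
    clean, quarantine = [], []
    for rec in records:
        # Safe automatic corrections: trim whitespace, normalize casing.
        rec = {k: v.strip().upper() if isinstance(v, str) else v
               for k, v in rec.items()}
        # A missing policy number cannot be fixed blindly; hold it back.
        if rec.get("policy_number"):
            clean.append(rec)
        else:
            quarantine.append(rec)
    return clean, quarantine

records = [{"policy_number": " ab12345678 "}, {"policy_number": ""}]
clean, held = cleanse(records)
print(clean)  # [{'policy_number': 'AB12345678'}]
print(held)   # [{'policy_number': ''}]
```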

Step 5: Data integration and analysis

After the data has been collected, validated and cleansed, insurance companies integrate it into their systems and use it for analysis. This analysis can provide insights into customer behavior, risk assessment, and business performance.
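As a simplified illustration of the kind of analysis that clean, integrated data enables, the sketch below joins hypothetical policy and claims records with pandas and computes a loss ratio by segment; all names and figures are invented:

```python
# Hypothetical integration-and-analysis step: join policy and claims data,
# then compute a loss ratio by business segment. All data is invented.
import pandas as pd

policies = pd.DataFrame({
    "policy_number": ["AB1", "AB2", "AB3"],
    "segment": ["auto", "auto", "property"],
    "annual_premium": [1200.0, 900.0, 2500.0],
})
claims = pd.DataFrame({
    "policy_number": ["AB1", "AB3"],
    "paid_loss": [700.0, 1500.0],
})

merged = policies.merge(claims, on="policy_number", how="left")
merged = merged.fillna({"paid_loss": 0.0})
by_segment = merged.groupby("segment").agg(
    premium=("annual_premium", "sum"),
    loss=("paid_loss", "sum"),
)
by_segment["loss_ratio"] = by_segment["loss"] / by_segment["premium"]
print(by_segment)
```

The join is only trustworthy if the key (here, the policy number) survived the earlier validation and cleansing steps intact.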

Step 6: Ongoing data quality management

Quality management is ongoing. Insurance companies must continually monitor and maintain the quality of their data to ensure that it remains accurate and reliable, which involves regular audits, data profiling and data cleansing. Constant engagement in managing and adjusting data processes by both data producers (data entry staff) and data consumers (underwriters, claims adjusters, actuaries and data scientists) is the only way to ensure future success. Feedback loops that adjust operating practices and procedures based on new insights and rules need to be created and actively used.
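Ongoing monitoring usually boils down to profiling metrics computed on a schedule, with alerts when they drift. A minimal sketch, assuming hypothetical fields and a completeness threshold:

```python
# Hypothetical ongoing audit: profile simple quality metrics on each batch
# and report any that fall below a threshold. Fields and limits are invented.
def profile(records, required=("policy_number", "annual_premium")):
    n = len(records)
    metrics = {}
    for f in required:
        filled = sum(1 for r in records if r.get(f) not in (None, ""))
        metrics[f"{f}_completeness"] = filled / n if n else 0.0
    return metrics

def audit(records, threshold=0.98):
    """Return only the metrics that breach the threshold and need attention."""
    return {m: v for m, v in profile(records).items() if v < threshold}

batch = [{"policy_number": "AB1", "annual_premium": 1200.0},
         {"policy_number": "", "annual_premium": 900.0}]
print(audit(batch))  # {'policy_number_completeness': 0.5}
```

Breaches surfaced this way are the raw material for the feedback loops described above: each one should trigger a review of the operating practice that let the bad data in.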

The big picture

Managing data quality is critical for insurance companies. By establishing data quality standards, ensuring accurate data collection and entry, validating and cleansing data, integrating it for analysis, and managing quality on an ongoing basis, insurers can keep their data accurate and reliable, leading to improved business performance and better customer service.

Angela Harter

Angela Harter is assistant vice president of Data Strategy and Quality Assurance at CLARA Analytics.
