For years, the insurance industry was a leader in the field of business analytics. In the early days, this referred primarily to descriptive analytics, like reports, dashboards and scorecards.
Regardless of the complexity or level of functionality introduced by new business intelligence tools, these analytical solutions had one major flaw: They were always backward-looking. They could only tell a story about what had happened in the past.
Over the last few years, interest has grown in predictive analytics. The reason is obvious: If insurers can predict what will happen on particular policies and claims, they are better positioned to maximize profits, reduce costs, and improve customer satisfaction. In fact, the entire insurance system rests on insurers' ability to predict, with a reasonable degree of confidence, what is likely to happen in the future.
The Early Years
In the claims realm, fraud detection was heralded as an obvious choice for using predictive modeling. Many companies have tried various ways of predicting the likelihood of fraud occurring on a claim. Early efforts were met with mixed results and limited success. It is useful to examine why these models typically failed to meet expectations.
In early models, data was often sourced from existing data warehouses that had originally been built for reporting. The data used for dashboards and scorecards was a logical place to start, but it was incomplete: It most often consisted of dates, dollar values and categorical variables, and it almost never contained text data. Although a warehouse might have held information from multiple operational systems, it generally included only data from core processing applications, such as the claim system(s). While there might have been enough data to build a predictive model, these limitations were significant and could greatly affect the performance and results of the model.
Another early challenge in attempting to predict fraud was the reliance on a single method of detection. Upon inspection, many early "models" were really nothing more than collections of business rules. More advanced insurers used supervised predictive modeling, in which previously identified fraudulent claims are used to train a model to flag suspicious future claims. This approach, however, is also flawed: It is specifically designed to find the same types of fraud an insurer has found in the past. Fraudsters are adept at circumventing rules and thresholds, and they are constantly modifying their scams and inventing new ways to game the system.
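To make the supervised approach concrete, here is a minimal sketch of the idea, assuming scikit-learn and pandas and using hypothetical column names (claim_amount, days_to_report, prior_claims, is_fraud) and a placeholder file of historical, labeled claims. It illustrates the general technique, not any particular carrier's model.

```python
# Minimal sketch of supervised fraud scoring on historical, labeled claims.
# Column names and the input file are hypothetical placeholders, not a real
# carrier schema.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Historical claims with a known fraud/no-fraud outcome (the training labels).
claims = pd.read_csv("historical_claims.csv")

features = ["claim_amount", "days_to_report", "prior_claims"]
X = claims[features]
y = claims["is_fraud"]          # 1 = previously confirmed fraud, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Score holdout claims; higher scores mean "looks like past fraud."
scores = model.predict_proba(X_test)[:, 1]
print("Holdout AUC:", roc_auc_score(y_test, scores))
```

Because the training labels come only from fraud that was previously confirmed, a model like this tends to rediscover familiar schemes, which is precisely the limitation described above.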
These limitations hampered early forays into analytical fraud detection. False-positive rates were high, and the models failed to help insurers identify emerging fraud scams. As a result, some carriers shelved their models altogether or relegated them to a safety net, catching only the claims that would otherwise fall through the cracks when adjusters failed to spot a few red flags manually.
The Tipping Point
The P&C insurance industry is now at a tipping point. A recent study indicated that almost half of all insurers are investing in new predictive modeling projects in 2013, driven by the increased availability of internal and external data, more sophisticated modeling tools, and a broader range of business issues that can benefit from predictive modeling.1
A recent surge in core system replacements has resulted in better data quality for many insurers. For example, among the benefits of upgrading a claim system are better field-level validation, greater opportunities for prefill and integration with other systems, and easier data entry. All of these improvements make it more likely that adjusters and claims processors will input accurate information. The result is a greater volume of higher-quality data that can be used for modeling.
The modeling tools are improving as well. Previously, building a predictive model required extensive coding, a background in statistics and programming, and specialized training in the specific application being used. Since then, there have been tremendous improvements in several key areas. Data management tools now make it easier than ever to integrate multiple data sources and quickly address data quality issues. Model building is still a skill that requires advanced expertise, but the tools have become more user-friendly, with graphical user interfaces and built-in support for the most common techniques. Finally, model management utilities have been introduced, allowing organizations to more easily maintain multiple production models with lower overhead.
Five Keys To Success
With a renewed focus on analytical methods for detecting fraud, insurers now have the opportunity to achieve desirable results. There are several key areas to consider when implementing an analytical fraud detection program. Here are five steps insurers need to take:
- Don't underestimate data management. With any modeling exercise, the results are wholly dependent on the type and quality of the source information. In many cases, more than half of the work for a fraud detection project occurs during the data preparation phase. This step may involve integrating data from multiple internal and external sources, addressing data quality issues like misspellings and missing values, and correctly matching entities from different systems (a minimal data-preparation sketch follows this list).
- Incorporate unstructured text sources. Within core systems, a huge percentage of information is stored as text. Especially on long-tail injury claims, claim notes and customer service logs contain crucial information for accurate fraud detection. In some models, up to half of the variables might come from these sources. Furthermore, as insurers gain access to information sourced from social media, text analytics, content categorization and semantic analysis are essential to derive value from comments and posts (see the text-feature sketch after this list).
- Use multiple detection techniques to identify both known and emerging schemes. Don't rely on just one method of detection. Business rules and supervised modeling are good at identifying known fraud schemes, while anomaly detection and network analysis are better suited to detecting emerging exposures. Using a combination of these methods helps cover all the bases (a combined-scoring sketch follows this list). Separate detection scenarios can be built for different points in the claim life cycle to yield the best results. And additional detection scenarios can focus on identifying suspicious organized networks to produce leads for complex case teams, as opposed to single-claim models that support traditional SIU teams.
- Think about how users will consume the model results. Building a model is only half the battle. Consideration must be given to how users will receive and review the results of the model. Robust implementations will provide an alert queue for evaluation by a triage team, along with functionality to review the details of associated claims and entities. The system should also allow users to share intelligence and document the disposition of the alert, as this information can be useful for monitoring model performance. More advanced implementations can involve integration directly back into the core processing system.
- Maintain the models. Predictive modeling is not a one-time exercise. Models decay over time, and to maintain satisfactory results they need to be tuned and periodically updated. Source-system modifications, access to new information, changes in underwriting risk appetite, legal and regulatory developments, and shifting fraud patterns can all influence model results. It is important to routinely monitor the performance of the fraud detection system and plan for periodic updates (a drift-monitoring sketch follows this list). Most models can be updated every 12 to 18 months, but in frequently changing environments, six-month updates might be more appropriate.
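As a companion to the data-management key above, the sketch below shows two routine preparation chores: cleaning and filling missing values, and crudely matching claimant names across two systems. It assumes pandas plus the Python standard library; the column names, sample records and 0.85 similarity threshold are illustrative assumptions, and a production implementation would use far more robust entity-resolution logic.

```python
# Illustrative data-preparation chores: data quality fixes and entity matching.
# Column names, sample records and the 0.85 threshold are assumptions.
from difflib import SequenceMatcher
import pandas as pd

claims = pd.DataFrame({
    "claimant": ["John Smith", "Jon Smyth", "Ann Lee"],
    "paid_amount": [12000.0, None, 3500.0],
})
policy_parties = pd.DataFrame({"insured": ["John  Smith", "Anne Lee"]})

# 1. Basic quality fixes: normalize whitespace and case, fill missing amounts.
claims["claimant"] = claims["claimant"].str.strip().str.title()
claims["paid_amount"] = claims["paid_amount"].fillna(claims["paid_amount"].median())

# 2. Crude entity matching: link each claimant to the most similar insured name.
def best_match(name, candidates, threshold=0.85):
    scored = [(SequenceMatcher(None, name.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

candidates = policy_parties["insured"].str.strip().str.title().tolist()
claims["matched_insured"] = claims["claimant"].apply(lambda n: best_match(n, candidates))
print(claims)
```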
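For the unstructured-text key, this sketch shows one common way to turn claim notes into model inputs: TF-IDF features combined with a structured field in a single scikit-learn pipeline. The sample notes, labels and column names are hypothetical.

```python
# Sketch: combining TF-IDF features from claim notes with a structured field.
# Sample notes, labels and column names are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

claims = pd.DataFrame({
    "claim_notes": [
        "claimant reported soft tissue injury, no witnesses, attorney retained day one",
        "minor fender bender, police report on file, repair estimate attached",
        "prior similar loss last year, provider billing far above regional norms",
        "windshield chip repaired at certified shop, photos provided",
    ],
    "claim_amount": [18000.0, 2200.0, 25000.0, 400.0],
    "is_fraud": [1, 0, 1, 0],   # labels from previously investigated claims
})

preprocess = ColumnTransformer([
    ("notes", TfidfVectorizer(ngram_range=(1, 2)), "claim_notes"),
    ("amount", "passthrough", ["claim_amount"]),
])

pipeline = Pipeline([
    ("features", preprocess),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(claims[["claim_notes", "claim_amount"]], claims["is_fraud"])
print(pipeline.predict_proba(claims[["claim_notes", "claim_amount"]])[:, 1])
```

In practice, the text features would sit alongside the full set of structured variables, and content categorization or semantic tools can stand in for the simple TF-IDF step.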
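For the multiple-techniques key, the following sketch blends three independent signals into one composite triage score: a hand-written business rule, a supervised score of the kind shown earlier, and an unsupervised anomaly score from an isolation forest. The synthetic data, rule threshold and blending weights are placeholders that would need to be calibrated against an insurer's own experience.

```python
# Sketch: blending rule-based, supervised and anomaly-based signals.
# Synthetic data, the rule threshold and the weights are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, IsolationForest

rng = np.random.default_rng(0)
claims = pd.DataFrame({
    "claim_amount": rng.lognormal(9, 1, 500),
    "days_to_report": rng.integers(0, 60, 500),
    "prior_claims": rng.poisson(1, 500),
})
labels = rng.integers(0, 2, 500)   # stand-in for historical fraud labels

features = claims[["claim_amount", "days_to_report", "prior_claims"]]

# 1. Business rule: late-reported, high-value claims are inherently suspicious.
rule_flag = ((claims["days_to_report"] > 30) & (claims["claim_amount"] > 20000)).astype(float)

# 2. Supervised score: learns patterns from previously confirmed fraud.
supervised = GradientBoostingClassifier().fit(features, labels)
supervised_score = supervised.predict_proba(features)[:, 1]

# 3. Anomaly score: flags claims unlike anything seen before; no labels required.
iso = IsolationForest(random_state=0).fit(features)
anomaly = -iso.score_samples(features)                      # higher = more anomalous
anomaly = (anomaly - anomaly.min()) / (anomaly.max() - anomaly.min() + 1e-9)

# Composite score for triage; the weights are placeholders to be tuned.
claims["fraud_score"] = 0.3 * rule_flag + 0.4 * supervised_score + 0.3 * anomaly
print(claims.nlargest(5, "fraud_score"))
```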
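Finally, for the model-maintenance key, here is a minimal sketch of one widely used drift check, the population stability index (PSI), comparing the score distribution at model development time with current production scores. The synthetic distributions, bucket count and 0.2 alert threshold are rule-of-thumb assumptions, not a standard.

```python
# Sketch: population stability index (PSI) to detect score drift over time.
# Bucket count, sample distributions and the 0.2 threshold are assumptions.
import numpy as np

def population_stability_index(expected, actual, n_buckets=10):
    """Compare two score distributions; a larger PSI indicates more drift."""
    # Bucket edges come from the development-time (expected) score quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, n_buckets + 1))
    expected_idx = np.digitize(expected, edges[1:-1])      # buckets 0 .. n_buckets-1
    actual_idx = np.digitize(actual, edges[1:-1])
    expected_pct = np.bincount(expected_idx, minlength=n_buckets) / len(expected)
    actual_pct = np.bincount(actual_idx, minlength=n_buckets) / len(actual)
    # A small floor avoids log-of-zero in empty buckets.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 8, 5000)    # scores at model development time
current_scores = rng.beta(3, 7, 1200)     # this month's production scores

psi = population_stability_index(baseline_scores, current_scores)
print(f"PSI = {psi:.3f}")
if psi > 0.2:                             # common rule-of-thumb alert threshold
    print("Significant drift detected: schedule a model review or refresh.")
```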
The old days of simply relying on claims adjusters to notice red flags are over. Adjusters will always remain an integral part of the fraud defense system. However, given adjusters' increasing caseloads, declining experience levels and greater performance demands, it is unrealistic to expect them to adequately identify increasingly complex fraud exposures. Predictive analytical solutions have evolved, and the insurance industry has reached a tipping point where manual methods are no longer sufficient. Insurers who embrace an analytical approach will be ahead of their peers. Those who lag behind risk becoming soft targets.
Footnote
1. Operationalizing Analytics: The True State of Predictive Modeling in Insurance. Strategy Meets Action, July 2013.