While predictive modeling has been used for years in underwriting, with ever-increasing sophistication and accuracy, it has now moved into other areas such as marketing and claims.
This growth stems from a combination of the wealth of data companies possess on their customers and the processing power now available across the board; statistical routines can be data intensive and only recently have 'mainstream' hardware and software been able to handle the workload.
To evaluate how to utilize predictive modeling most effectively, efficiently, and with the maximum amount of flexibility, insurance carriers need to consider the following key guidelines:
- Getting started—Use a narrow, focused approach: identify a target area and generate strong results before broadening further, while making sure the design is scalable enough to allow quick expansion
- Availability of data—Review the availability of good, usable (not necessarily perfect) data, both internal and external to the organization, including both historical and ongoing information
- Incorporation of models—Ensure that models are designed into and incorporated within business processes rather than serving simply as after-the-fact analysis of historical data
- Model management—Create a discipline around not only initial model development but also ongoing model maintenance
Leveraging the Models
What can predictive models do for an insurance company? By leveraging predictive modeling throughout the organization, insurers are able to more accurately predict and respond to events during the life cycle of a customer, from the initial prospect and acquisition through the entire life of the policy. If a claim occurs, insurers are using predictive models to more accurately identify the best next action, resulting in an increase in customer satisfaction while also reducing cost. With this increased use of predictive modeling comes the challenge of how to deploy multiple models within the organization. It is important to understand that in order to be successful, an organization must design its predictive modeling infrastructure in such a way that it is scalable and adaptable to multiple areas within the enterprise.
A common question raised by those unfamiliar with predictive modeling is how rules-based systems compare with predictive models. Many organizations use rules to guide decisions, and some feel that this is enough. While there is certainly a place for business rules within operational workflows, relying on them alone is inefficient and difficult to manage. More often than not, rules become outdated quickly and lose their applicability to the current business environment. Integrating predictive models lessens this complexity by allowing the model itself to 'manage' and adjust what rules have historically tried to capture (i.e., relationships, cause and effect, and so on).
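To make the contrast concrete, here is a minimal sketch in Python using scikit-learn; the thresholds, feature names, and synthetic data are purely hypothetical, not drawn from any real carrier's rules or models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rules-based approach: fixed thresholds that someone must maintain by hand.
def rule_flag(estimate_total, frame_labor_rate):
    return estimate_total > 5000 or frame_labor_rate > 75  # hypothetical cutoffs

# Model-based approach: the relationship is learned from historical outcomes
# and refreshed by retraining, rather than by editing thresholds.
rng = np.random.default_rng(0)
X_hist = rng.normal(loc=[4000.0, 60.0], scale=[1500.0, 15.0], size=(500, 2))  # past estimates
y_hist = (0.0004 * X_hist[:, 0] + 0.02 * X_hist[:, 1]
          + rng.normal(0, 1, 500) > 3.0).astype(int)  # hypothetical leakage outcomes

model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)
print(rule_flag(5200, 70))                        # rule output: True or False
print(model.predict_proba([[5200, 70]])[0, 1])    # model output: probability of leakage
```

The rule answers yes or no against hand-set cutoffs; the model returns a probability that reflects whatever relationships exist in the historical data and can be updated by retraining.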
Incorporation into Business Processes
While predictive modeling has proven its value in underwriting for predicting future performance, its true power is realized when it is incorporated into real-time business processes. This enables an organization to react with greater precision during customer interactions, not just after the fact.
By integrating models within the workflow, an organization can exploit the computational power now available and react far more effectively to changing conditions. The process can be tailored to the circumstances of not only the current event but also events of the past. This is what is referred to as the vertical meeting the horizontal: the vertical being historical information and the horizontal being the current situation. It is a significant advantage over rules-based systems, which generally look only at the current situation.
As an example, take an insurance company that wants to review its automobile physical damage repair estimates for potential leakage. Using 'traditional' methods of reviewing estimates systematically, a rules-based approach would check items such as the frame labor hourly rate or total estimate value. These values would come directly from the estimate itself or be calculated from estimate data. The rules would be maintained by the company and, depending on its size, would likely have to differ by location.
The rules would look at characteristics of the estimate and provide an output based on how the estimate scores. With a predictive model, not only is the current estimate data fed into the model, but so is historical information. That history allows the model to weight the input data and tailor its output to what has been happening in real life. The output is therefore more representative of what is actually happening, and, depending on the type of model used, the model can essentially maintain itself over time through learning.
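As a rough illustration of combining current estimate data with historical context, the sketch below trains a model on hypothetical past estimates and adds a derived feature (each shop's historical average total); all column names, values, and leakage outcomes are invented for the example.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical estimates with known leakage outcomes.
history = pd.DataFrame({
    "shop_id":          [1, 1, 2, 2, 3, 3, 1, 2],
    "frame_labor_rate": [55, 60, 80, 85, 50, 52, 58, 82],
    "estimate_total":   [3200, 4100, 6800, 7200, 2500, 2700, 3900, 7100],
    "leakage":          [0, 0, 1, 1, 0, 0, 0, 1],
})

# Derived historical feature: each shop's average estimate total (the "vertical").
shop_avg = history.groupby("shop_id")["estimate_total"].mean().rename("shop_avg_total")
train = history.join(shop_avg, on="shop_id")

features = ["frame_labor_rate", "estimate_total", "shop_avg_total"]
model = GradientBoostingClassifier().fit(train[features], train["leakage"])

# Score a new estimate (the "horizontal") together with the shop's historical context.
new_estimate = pd.DataFrame(
    [{"shop_id": 2, "frame_labor_rate": 78, "estimate_total": 6500}]
).join(shop_avg, on="shop_id")
print(model.predict_proba(new_estimate[features])[0, 1])  # probability of leakage
```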
Another aspect of incorporating predictive models into business processes is identifying the proper point within the process to actually run data through the modeling engine. For bodily injury claims, it may make sense to re-evaluate an open claim whenever an adjuster performs an activity on it; the analysis may determine whether it is the right time to settle.
That answer may change each time the analysis is run because new data is being fed into the model. For this reason, it is important that an organization understands and documents its business processes before incorporating predictive models into them. Once that is done, the organization can determine the impact of the new system on existing processes and select the best placement for both the invocation of the model and the delivery of its output.
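A simplified sketch of what such an invocation point might look like is shown below; the scoring logic is only a stand-in for a real modeling engine, and the field names and threshold are hypothetical.

```python
from datetime import date

# Stand-in for the real modeling-engine call; assumed to return a 0..1 score.
def score_settlement_readiness(claim):
    return min(1.0, 0.1 * claim["activity_count"] + 0.2 * claim["months_open"])

# Hook: re-score an open bodily injury claim each time an adjuster logs an activity.
def on_adjuster_activity(claim, activity):
    claim["activity_count"] += 1
    claim["last_activity"] = activity
    score = score_settlement_readiness(claim)
    # The recommendation can change from one activity to the next as new data arrives.
    return "recommend settlement review" if score >= 0.7 else "continue handling"

claim = {"claim_id": "BI-1001", "months_open": 4, "activity_count": 2}
print(on_adjuster_activity(claim, {"type": "medical records received", "date": date.today()}))
```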
Just as important as the as-is business process modeling is ensuring that the processes are updated to reflect where and how predictive modeling has been incorporated.
Getting Started
If an organization has not yet deployed predictive modeling into the enterprise, it is best to begin in a focused manner and target an area that needs immediate improvement. Doing so enables the organization to quickly confirm the value of the predictive model and build a case for more extensive use of predictive modeling within the organization.
Claims leakage is one example of such a target. While focusing on one area initially is a conservative, safe approach, it is important that the modeling architecture not be a single-use, fit-for-purpose design. It needs to be agnostic and scalable so that additional models can be developed for other areas, and it must be able to handle concurrent models and processes.
Ideally, you would segregate the modeling 'engine' data from the business data. Doing so lets you manage multiple models independently of the business data itself. By taking this long-term approach from the outset, minimal rework is needed as different areas of the business and new models are brought into the fold.
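One way to picture this separation is a small model registry that versions and serves models independently of line-of-business data, with callers passing feature values in at scoring time; the sketch below is illustrative only, and the model names and data are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# The modeling 'engine' keeps its own registry of versioned models; business
# systems pass features in rather than the engine reading business tables.
class ModelRegistry:
    def __init__(self):
        self._models = {}

    def register(self, name, version, model):
        self._models[(name, version)] = model

    def score(self, name, version, features):
        return self._models[(name, version)].predict_proba([features])[0, 1]

rng = np.random.default_rng(3)
X, y = rng.random((200, 3)), rng.integers(0, 2, 200)   # synthetic training data

registry = ModelRegistry()
registry.register("claims_leakage", "v1", LogisticRegression().fit(X, y))
registry.register("settlement_timing", "v1", LogisticRegression().fit(X, y))

# Two concurrent models served by the same engine, managed independently.
print(registry.score("claims_leakage", "v1", [0.4, 0.1, 0.9]))
```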
Focusing on one area also allows the organization to introduce the concept of predictive modeling gently. The use of predictive models outside of underwriting (i.e., beyond those who work with statistics day in and day out) often causes confusion and uncertainty about what exactly is “under the hood.”
Care should be taken to educate the organization on the basics of predictive modeling and to ensure adequate buy-in. System design should ensure that the output or recommendations from the modeling engine are not optional. Making use of the output optional may result in reluctance to adopt the technology and can ultimately prevent the organization from realizing the system's true potential, especially if users feel threatened by technology replacing their skills.
On the surface, it may seem simple to plug models into a workflow. However, as with any new technology, there is an acceptance curve that must be addressed. Unlike rules-based systems, where it is easy to understand why a certain recommendation or score was produced, predictive models are more complex by nature. As such, articulating exactly why a certain customer scores a certain way, or why a particular recommendation is made, is difficult, because predictive models produce their output through complex relationships between input data and data derived within the model itself.
Availability of Data
It should be noted that the effectiveness of a predictive model is directly related to the availability and quality of input data, both first-party and third-party. It is the old cliché: garbage in, garbage out. Depending on the type of model used, gaps or missing data may be acceptable, so it is important to understand what is available when determining which type of model to use.
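As an illustration of how model choice interacts with imperfect data, the sketch below uses scikit-learn's histogram-based gradient boosting, which accepts missing values (NaN) directly, so gaps in the input do not necessarily require a separate imputation step; the data here is synthetic.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

# Histogram-based gradient boosting tolerates missing values natively.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # outcomes defined before masking
X[rng.random(X.shape) < 0.15] = np.nan           # simulate ~15% missing data

model = HistGradientBoostingClassifier().fit(X, y)   # no imputer needed
print(model.predict_proba(X[:3])[:, 1])
```

Other model types would require the gaps to be filled first, which is why the available data should inform the choice of model.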
It is also important to have access to as much data as possible about the customer and the other event participants. In the repair-estimate example above, you would want not only customer data but also information on the person who writes the estimate, whether a staff member, a shop, or an independent appraiser. Some of this information may not be available internally.
Because of that, you must source data from third parties so that your model better represents a true and complete picture of actual events. Various third-party sources, such as rating bureaus and data aggregators, can provide information that enhances the relationships within the model, strengthening the insight it provides and producing a more complete and accurate output.
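A minimal sketch of such enrichment is shown below, joining internal estimate records with a hypothetical third-party feed keyed on the appraiser; all identifiers and figures are invented.

```python
import pandas as pd

# Internal estimate data (hypothetical).
internal = pd.DataFrame({
    "estimate_id":    [101, 102, 103],
    "appraiser_id":   ["A1", "A7", "A1"],
    "estimate_total": [3200, 6800, 4100],
})

# Hypothetical third-party feed, e.g. from a rating bureau or data aggregator.
third_party = pd.DataFrame({
    "appraiser_id":       ["A1", "A7"],
    "industry_avg_total": [3500, 5200],
    "complaint_count":    [0, 3],
})

# The model input now carries a more complete picture of the event participants.
enriched = internal.merge(third_party, on="appraiser_id", how="left")
print(enriched)
```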
Model Management
An effective predictive modeling system starts with a strong model and, like any system, must be kept current to maintain its effectiveness. A key part of the modeling process is not only the initial model development but also the validation of the model itself. To do this, an organization must have historical information available, broken down into two parts. The first part is the sample data used to create the model; a modeler uses this data to establish the relationships and structure of the model.
Once the model is developed, it is validated using the second part of the historical data, the out-of-sample data. Typically, the sample data is the larger of the two sets and the out-of-sample data is the smaller.
To properly validate the model, outcome data is just as necessary as source data. The out-of-sample data is run through the model and the output is compared with what actually happened, which determines whether the model's predictions are accurate.
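The sketch below illustrates this split and comparison with synthetic data: the larger sample set is used to fit the model, and the smaller out-of-sample set, with its known outcomes, is used to check predictive accuracy (here via AUC).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic historical data with known outcomes.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 1, 1000) > 0).astype(int)

# Larger sample set for development, smaller out-of-sample set for validation.
X_sample, X_holdout, y_sample, y_holdout = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_sample, y_sample)

# Compare predictions on the out-of-sample data with what actually happened.
holdout_scores = model.predict_proba(X_holdout)[:, 1]
print("out-of-sample AUC:", roc_auc_score(y_holdout, holdout_scores))
```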
There are many options for the design and deployment of predictive modeling systems, from off-the-shelf products to bespoke development. Regardless of the route an organization takes, it must ensure the right skills, particularly statistical skills, are available not only to provide guidance and input into the original design but, just as important, to keep the system current so it does not become stale or outdated.
Using the same process as in the initial development, the modeler would periodically assess the model's health after the system is in production. As the business grows and the environment in which it operates changes, you must ensure the model still reflects the business.
For instance, using our example again, assume the company did not use independent appraisers when the model and system were developed. Once the system was in place, management decided to expand operations and began using independent appraisers to serve customers in hard-to-reach places. Since the model uses information about appraisal sources, it is important to incorporate the new source into the model.
Otherwise, the model no longer truly reflects the current environment and its results will become less accurate. If a neural network is used, model updating can be built into the system itself, through either supervised or unsupervised learning, which can be especially useful.
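One simple way to operationalize the periodic health check is sketched below: recent claims with known outcomes are scored, performance is compared against the level established at validation time, and the model is refit when it drifts. The data, threshold, and retraining policy here are illustrative assumptions, not a prescribed approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Compare current performance on recent outcomes against the validation baseline.
def check_model_health(model, X_recent, y_recent, baseline_auc, tolerance=0.05):
    current_auc = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])
    return current_auc, current_auc < baseline_auc - tolerance

rng = np.random.default_rng(7)
X_old = rng.normal(size=(500, 3))
y_old = (X_old[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X_old, y_old)

# New environment, e.g. independent appraisers introduced: the data has shifted.
X_new = rng.normal(loc=1.0, size=(200, 3))
y_new = (X_new[:, 1] > 1.0).astype(int)

auc, needs_retraining = check_model_health(model, X_new, y_new, baseline_auc=0.95)
print(f"current AUC {auc:.2f}, retrain: {needs_retraining}")
if needs_retraining:
    # Refresh the model with data that reflects the current environment.
    model = LogisticRegression().fit(np.vstack([X_old, X_new]),
                                     np.concatenate([y_old, y_new]))
```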
Just as important as keeping the model and system up to date is keeping management informed on the effectiveness of the system. This is especially important in order to establish credibility for the model and to help build support for deploying predictive modeling to other areas.
Conclusion
In summary, the power of predictive modeling is more attainable than ever. By deploying this technology across the larger organization, insurance companies stand to gain operational efficiencies and competitive advantages by reacting to changing market conditions more quickly, ultimately expanding their customer base. While it requires an initial investment, the potential payback is too significant to overlook.