AI sparks new era in empathetic workers' comp claim management

The best uses of multi-modal AI in workers' comp will strike a balance between tech innovation and humanity.

Leveraging AI’s capabilities to augment human expertise can help create a workers’ comp system that is both efficient and empathetic. (Credit: Freer/Adobe Stock)

Multi-modal AI models, such as OpenAI’s GPT-4o and Google’s Project Astra, mark a significant advancement in artificial intelligence. The potential of these advancements is particularly promising for complex fields such as workers’ compensation, where the claims process can be challenging and involve numerous factors beyond the physical injury itself.

Consider Maria, a 35-year-old warehouse worker who recently suffered a back injury on the job. Not only is she grappling with the physical pain of her injury; she’s also facing the isolating effects, financial strain and logistical hurdles that accompany such incidents. Maria’s experience is a reminder that factors beyond the injury itself — what are known as social determinants of health (SDOH) — can significantly influence claim outcomes, often prolonging recovery and increasing costs.

This new generation of AI models, which integrates voice and visual data alongside text, has generated widespread interest throughout the insurtech industry, with experts predicting transformative applications. These capabilities enable a more empathetic and holistic approach to claims management, proactively recognizing emotional distress and facilitating clearer communication throughout the process.

Integrating SDOH with multi-modal AI

Historically, the workers’ compensation claims process primarily centered on the physical injuries sustained at work. However, it has become apparent that these SDOH — factors such as socioeconomic status, health care accessibility, education and social support — are critical to recovery. This is underscored by Healthesystems’ most recent Workers’ Comp Industry Insights Survey Report, which revealed that over 45% of industry experts consider SDOH to be among the biggest barriers to injured workers’ recovery.

“In workers’ compensation, the path to recovery is often a journey through a complex landscape, where social and economic factors can be as impactful as the physical injury itself,” says Claire Muselman, an industry expert in workers’ compensation.

Multi-modal AI moves well beyond the limitations of earlier AI models, capturing nuances of human emotion that were previously difficult to interpret. By integrating audio and visual data analysis alongside text, it can interpret a broader spectrum of communication cues, leading to a deeper understanding of human expression and intent.

This next generation of AI has ingested a vast array of information — from recorded phone calls and video interviews to facial expressions and vocal inflections — to identify patterns and correlations that might escape human notice. By leveraging natural language processing and emotional intelligence, it can recognize the meaning behind words, interpret a person’s emotional state, and even detect subtle signs of psychological distress or social challenges.

How does this translate into benefits for injured workers? Let’s revisit Maria’s case to see how multi-modal AI can address the SDOH that impact her recovery.

A new approach for claims processing

From the moment Maria’s supervisor submits her First Notice of Loss (FNOL), the AI-driven workflow kicks in. Maria’s verbal reports are analyzed, going beyond the facts of her injury to consider the underlying environmental, social and emotional context. With this information, the system quickly assesses the complexity of Maria’s case. Recognizing the need for an empathetic approach, the system selects Colleen, an adjuster known for her strong interpersonal skills and her ability to handle cases that require a deep understanding of social and emotional factors.

Before their first conversation, Colleen reviews Maria’s case file, noting her physical injury and some of the challenges she might be facing, particularly feelings of isolation and the lack of a nearby health care provider. During their discussion, the AI subtly guides Colleen, prompting her to inquire about Maria’s potential challenges and seamlessly suggesting personalized solutions. These include a light-duty program to ease feelings of isolation and provide some financial relief, as well as telemedicine as a more accessible option for health care.
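For readers curious what this in-call guidance might look like under the hood, here is a minimal sketch in Python. It assumes a hypothetical upstream model has already flagged cues such as isolation or limited access to care from Maria’s words and tone; the cue names, the GuidancePrompt structure and the rule table are illustrative stand-ins, not any vendor’s actual pipeline.

```python
# Minimal sketch of in-call guidance. A hypothetical multi-modal model is assumed
# to surface cues (e.g., isolation, access-to-care barriers) from the conversation;
# a simple rule table stands in for it here.
from dataclasses import dataclass

@dataclass
class GuidancePrompt:
    suggested_question: str
    suggested_resource: str

# Hypothetical cue-to-guidance mapping an adjuster might see during a call.
CUE_GUIDANCE = {
    "isolation": GuidancePrompt(
        "Ask how Maria is staying connected with coworkers during recovery.",
        "Light-duty / modified-duty program",
    ),
    "access_to_care": GuidancePrompt(
        "Ask whether travel to in-person appointments is a barrier.",
        "Telemedicine referral",
    ),
    "financial_strain": GuidancePrompt(
        "Ask whether benefit timing is creating pressure at home.",
        "Expedited benefit review",
    ),
}

def guidance_for_cues(detected_cues: list[str]) -> list[GuidancePrompt]:
    """Return adjuster prompts for each cue the (hypothetical) model detected."""
    return [CUE_GUIDANCE[c] for c in detected_cues if c in CUE_GUIDANCE]

if __name__ == "__main__":
    # Cues that an upstream audio/text model is assumed to have surfaced.
    for prompt in guidance_for_cues(["isolation", "access_to_care"]):
        print(f"- {prompt.suggested_question} (resource: {prompt.suggested_resource})")
```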

Following the call, Colleen receives a report summarizing key discussion points and offering insights into how she can further refine her interactions. This continuous feedback loop empowers Colleen to grow professionally, ultimately enabling her to provide even better support to claimants like Maria.

The principles and technologies applied to Maria’s situation are now being used to enhance the claim process across the board.

Transforming claims with multi-modal AI

To triage claims, AI analyzes injury reports and witness statements. Where these were once primarily written documents, recorded verbal statements can now be used as well. Multi-modal AI not only transcribes these statements into easily digestible text but also extracts valuable insights from verbal cues. This enables immediate recognition of whether a claim can be auto-evaluated or requires the expertise of an adjuster. If an adjuster is needed, the AI matches the claim with the most suitable candidate. This level of detailed triage streamlines the process, ensuring injured workers receive the right care at the right time, ultimately improving recovery outcomes and operational efficiency.
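A simplified sketch of such a triage step might look like the following. The signals, weights, threshold and adjuster-matching rule are illustrative assumptions, not any carrier’s actual scoring model; in practice, the severity and distress scores would come from upstream transcription and cue-extraction services.

```python
# A minimal sketch of claim triage, assuming upstream services provide the
# severity and distress scores. Weights and thresholds are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ClaimSignals:
    injury_severity: float          # 0..1, e.g. derived from the injury report
    emotional_distress: float       # 0..1, e.g. derived from vocal/visual cues
    sdoh_flags: list[str] = field(default_factory=list)  # e.g. ["isolation"]

def complexity_score(signals: ClaimSignals) -> float:
    """Blend severity, distress, and SDOH flags into a single triage score."""
    return (0.5 * signals.injury_severity
            + 0.3 * signals.emotional_distress
            + 0.2 * min(len(signals.sdoh_flags), 3) / 3)

def route_claim(signals: ClaimSignals, adjusters: dict[str, set[str]]) -> str:
    """Auto-evaluate simple claims; otherwise match to the best-suited adjuster."""
    if complexity_score(signals) < 0.35:
        return "auto-evaluate"
    # Prefer the adjuster whose skill set covers the most flagged SDOH factors.
    best = max(adjusters, key=lambda name: len(adjusters[name] & set(signals.sdoh_flags)))
    return f"assign to {best}"

if __name__ == "__main__":
    roster = {"Colleen": {"isolation", "access_to_care"}, "Raj": {"financial_strain"}}
    maria = ClaimSignals(injury_severity=0.6, emotional_distress=0.7,
                         sdoh_flags=["isolation", "access_to_care"])
    print(route_claim(maria, roster))   # -> assign to Colleen
```

The design choice worth noting is the split between scoring and routing: a low score short-circuits to auto-evaluation, while anything above the threshold is matched on skills rather than simply queued, which is what lets the “right adjuster for the right claim” pattern emerge.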

Beyond triage, the gathered information paints a comprehensive picture of the injured worker. Initially based on FNOL details, this profile evolves as more data — both structured (e.g., medical results from new doctor visits) and unstructured (e.g., adjuster-claimant conversations) — is incorporated. This enables the creation of detailed risk profiles that inform proactive management strategies, allowing adjusters to anticipate potential complications and connect claimants with appropriate resources early on. This approach enhances individual outcomes, fosters trust, and reduces the likelihood of disputes or prolonged claims.
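One way to picture such an evolving profile is as a simple data structure that is seeded at FNOL and enriched as structured and unstructured inputs arrive. The field names and keyword checks below are illustrative placeholders, assuming a real system would use a language model rather than string matching to read conversation notes:

```python
# A minimal sketch of an evolving claimant risk profile. Field names and the
# simple keyword checks are illustrative assumptions, not a production design.
from dataclasses import dataclass, field

@dataclass
class ClaimantProfile:
    claim_id: str
    injury: str
    risk_flags: set[str] = field(default_factory=set)
    medical_events: list[dict] = field(default_factory=list)

    def add_structured(self, record: dict) -> None:
        """Fold in structured data, e.g. results from a new doctor visit."""
        self.medical_events.append(record)
        if record.get("work_status") == "off work":
            self.risk_flags.add("extended_disability_risk")

    def add_unstructured(self, note: str) -> None:
        """Fold in unstructured data, e.g. an adjuster-claimant conversation note."""
        lowered = note.lower()
        if "no ride" in lowered or "far from clinic" in lowered:
            self.risk_flags.add("access_to_care")
        if "alone" in lowered or "isolated" in lowered:
            self.risk_flags.add("isolation")

if __name__ == "__main__":
    profile = ClaimantProfile("WC-1042", "lumbar strain")
    profile.add_structured({"visit": "2024-06-01", "work_status": "off work"})
    profile.add_unstructured("Maria says she feels isolated and has no ride to appointments.")
    print(sorted(profile.risk_flags))
```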

The power of AI-driven insights also extends to the professional development of adjusters. By analyzing interactions between adjusters and claimants, AI can provide valuable feedback, highlighting opportunities for improving communication styles and fostering greater empathy. This personalized coaching is instrumental in helping adjusters navigate complex claims more effectively. Furthermore, the AI can identify successful interaction patterns, creating templates for best practices that can be used to train new adjusters.
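As a rough illustration, the kind of post-call metrics that might feed this coaching could be computed from a transcript along the following lines. The specific metrics (open questions, acknowledgement phrases, talk-time share) and thresholds are illustrative proxies for interaction quality, not an actual coaching model:

```python
# A minimal sketch of post-call coaching feedback, assuming call transcripts are
# available as (speaker, text) pairs. Metrics and thresholds are illustrative.
ACKNOWLEDGEMENTS = ("i understand", "that sounds", "i hear you")

def coaching_feedback(turns: list[tuple[str, str]]) -> list[str]:
    adjuster_text = " ".join(t for speaker, t in turns if speaker == "adjuster").lower()
    open_questions = sum(adjuster_text.count(w) for w in ("how ", "what ", "tell me"))
    acknowledgements = sum(adjuster_text.count(p) for p in ACKNOWLEDGEMENTS)
    adjuster_share = len(adjuster_text) / max(1, sum(len(t) for _, t in turns))

    feedback = []
    if open_questions < 2:
        feedback.append("Try more open-ended questions about barriers to recovery.")
    if acknowledgements == 0:
        feedback.append("Acknowledge the claimant's concerns before moving to logistics.")
    if adjuster_share > 0.7:
        feedback.append("Leave more room for the claimant to speak.")
    return feedback or ["Strong interaction: candidate for a best-practice template."]

if __name__ == "__main__":
    call = [
        ("adjuster", "How are you managing day to day? Tell me what's hardest right now."),
        ("claimant", "Honestly, getting to appointments is hard and I feel pretty alone."),
        ("adjuster", "I understand. That sounds difficult; let's look at telemedicine options."),
    ]
    print(coaching_feedback(call))
```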

Muselman emphasizes the transformative power of this technology.

“Multi-modal AI offers an unprecedented ability to interpret and respond to the nuanced needs of injured and unwell workers, facilitating a more personalized and empathetic claims process,” Muselman says. “This technology not only streamlines operations but also ensures that injured workers receive the emotional and logistical support they need, ultimately leading to better recovery outcomes.”

Integrating multi-modal AI into claims systems

As with any new technology, there are challenges. Successfully incorporating multi-modal AI demands a comprehensive reevaluation and adaptation of existing workflows and processes. Navigating this transition requires strategic investment in both external partners and internal expertise. Simultaneously, companies need to prepare their workforce for new ways of working, ensuring seamless integration with existing systems while keeping pace with the ever-evolving AI landscape.

“Adapting to new AI-driven tools is not just about updating systems,” says industry expert Dean Clifton, senior vice president at Benchmark Administrators. “It’s about fundamentally reshaping our approach to work.”

The future of workers’ comp and AI

The workers’ compensation industry continues to face rising costs, complex claims, and the need for a more holistic, person-centered approach. Multi-modal AI offers a possible path forward. By harnessing this technology, the industry has the opportunity to create a more efficient and empathetic claims management system that keeps the well-being of injured workers front and center.

The potential benefits are clear: improved claim outcomes, faster recovery times, and increased claimant satisfaction. By addressing both the logistical and emotional needs of injured workers, this technology can play a critical role in transforming the industry.

The most successful implementation of multi-modal AI in workers’ compensation will strike a balance between technological innovation and human experience, insight and compassion. By leveraging AI’s capabilities to augment human expertise, we can create a system that is both efficient and empathetic, ultimately leading to better outcomes for everyone involved.

Tycho Speekenbrink is head of AI and John Peters is co-founder and chief science officer at Gain Life, makers of a claims communication platform that integrates behavioral economics with artificial intelligence.
