How artificial intelligence will affect the P&C insurance industry
As AI continues to grow and develop, one expert weighs in on the impact it will have on the P&C business — and the world.
“The Rise of Artificial Intelligence: Future Outlook & Emerging Risks,” Allianz’s new report on how AI will impact society at large and the P&C insurance industry, makes some bold assertions: By 2035, the application of AI technologies could provide an economic boost of $14 trillion. By the same year, AI technologies are projected to boost corporate profitability in 16 industries across 12 economies by an average of 38%.
These projections, of course, assume full adoption of AI across all of those industries, and not every sector will embrace artificial intelligence as an empowering solution with equal enthusiasm, or at least with the same depth of implementation.
Additionally, even as machine learning grows dramatically more powerful over the next decade, ethical questions arise over just how much processing humans are willing to yield to AI-empowered technology.
That adoption rate will depend on the level of investment in research and development in each application field, says Michael Bruch, an environmental and risk engineer and Head of Emerging Trends and the Environmental, Social & Governance (ESG) department at Allianz Global Corporate & Specialty SE (AGCS), the Allianz entity for global business insurance and large corporate and specialty risks.
Examining AI’s ethical implications
Bruch, who spoke to me from Munich, spends his days assessing the ESG performance of P&C clients across the entire Allianz Group and works to fully integrate ESG factors into the Group’s risk assessment, underwriting and product development processes. Among other things, he examines the ethical implications of machine learning and how it will shape the way the P&C industry writes risks.
“The key question is how we deal with data,” says Bruch. “Because data is the new power of the future.”
For businesses, the potential threats around artificial intelligence could easily counterbalance the benefits of such revolutionary technology. According to the Allianz Risk Barometer 2018, the impact of AI and other new technologies already ranks as the seventh-biggest business risk, ahead of political risk and climate change.
The report makes reference to “strong” AI, and it’s important to distinguish between that level of artificial intelligence and what we mostly see now. “Weak” AI, Bruch explains, is what you see when a machine mimics human functions or solves problems, such as when a computer plays chess.
“One step further is general intelligence, or consciousness, on top of that,” he says. An example of strong AI would be a machine capable of enrolling in a college course, not just learning and solving problems. That development is about a decade — or less — away.
AI in plain sight
The most visible public-facing AI right now is in autonomous vehicles, which come with their own set of concerns around safety and how much responsibility the passenger in such a vehicle will actually have to assume. “It is estimated that AI could help reduce the number of road accidents by as much as 90%, but it also brings questions about liability and ethics in the event of an incident occurring,” says Bruch. “Medium and small accidents would be reduced, but human error will still be the biggest factor.”
In time, companies will face new questions of liability as responsibility shifts from humans to machines. A more interconnected world will mean more potential for larger-scale disruptions and losses, especially if critical infrastructure is involved. Increasing interconnectivity will yield greater exposures through the vulnerability of automated, autonomous or self-learning machines.
“If something goes wrong in this interconnected world, just imagine a power blackout caused by a cyber attack on a critical bottleneck such as a transformer,” Bruch says. Restoring those electrical systems would take even longer than it does now. “The impact of such a loss would be greater than what we’d currently see.”
Which is where the insurance industry comes in: Effective risk management strategies will have to be developed to maximize the benefits of AI as it’s introduced into society more broadly.
“We will always need underwriters, but they will work with predictive analytics,” says Bruch. In commercial lines, he notes, insurers are already using AI in the form of chatbots and claims software, and new predictive-analytics modeling systems are being developed. Bruch envisions the P&C industry evolving from a provider of coverage in the event of loss to a loss-prevention consultant.
Traditional coverages such as liability, casualty, health and life insurance will need to adapt to protect clients. Insurance will need to better address business exposures such as cyber attacks, business interruption, product recalls and reputational damage born of a negative incident.
Health care is the sector in which AI is expected to deliver the most societal benefit. For example, applying advanced data analytics to the sequencing of human DNA could lead to the eradication of many currently incurable diseases. To hear Bruch tell it, advancements in AI could also someday take much of the guesswork out of writing insurance risks, health risks in particular.
“It changes the way we think about health insurance,” Bruch says. Taking it a step further, he ponders, if you could easily predict what illnesses someone could get, is the solution we’re providing still insurance at all? “If we know that for certain, at the end of the day it’s like insuring a burning house.”
How should the public ‘feel’ about AI?
Bruch mentions how Swiss scientists are working on linking data to DNA strands, storing terabytes of data, such as a person’s heart-rate readings, on actual cells. Manipulating genes, however, opens the door to more sinister applications, such as designing pathogens.
Likewise, AI could also enable autonomous vehicles, such as drones, to be utilized as weapons. In the end, if the goal is wider adoption of AI, how should humans “feel” about artificial intelligence, from an ethical standpoint? How can they best approach it in a way that’s mindful?
“Those ethical questions are known,” he says, “but they’re more important in the future when we’re taking AI to the next level.” Thoughtful dialogue will be needed across society, adds Bruch, and that cannot be a conversation relegated simply to companies. Regulatory bodies and expert-based agencies that ensure public safety would need to be put in place.
“The biggest question is, how strong will artificial intelligence evolve over time?” Bruch asks. “Some experts say its full potential is still decades away. Do we support its development in the right way, where we can face those ethical questions, so that this topic won’t get out of control?”