Examining the risks and rewards of ChatGPT for the insurance industry
Artificial intelligence can sort through and synthesize vast amounts of data, but that benefit brings increased risks for users and their insurers.
You’d basically have to be living under a rock without internet access not to have heard about OpenAI’s large language model, ChatGPT, which can “chat” with customers and others in a remarkably human-like way. This podcast discusses some of the ways the technology can use artificial intelligence to search, review and summarize large amounts of data.
The rapid acceptance and adoption of ChatGPT worldwide is a prime example of an emerging risk, and insurers are just beginning to identify the pros and cons of the technology. There are also concerns that using ChatGPT could violate data protection regulations or expose personal and confidential data, such as trade secrets, creating unintended liability for companies.
Andrew Schwartz, an analyst at Celent, has been researching and monitoring these developments and their impact on insurance and other industries. Etay Maor is senior director of security strategy at Cato Networks. On the latest Insurance Speak podcast, the two share their insights on the possibilities and problems of using ChatGPT.
Large language models came into use in 2018, and while ChatGPT is not necessarily the biggest or the best, it has generated a much higher profile than other technologies. Schwartz says that while he hasn’t seen many use cases in the insurance industry to date, he expects that to change in the coming months. Areas where the technology could benefit the industry include streamlining customer support interactions, using chatbots to answer questions, automating the claims process, analyzing policy options for agents and brokers, and even aiding in more accurate underwriting of various risks. It can also personalize sales pitches and help professionals manage their emails.
As with any new technology, there are also inherent risks, such as the unintentional release of confidential information or the manipulation of information and images. Maor says he has seen discussions on the Dark Web among bad actors looking to use ChatGPT for more nefarious purposes. He warns that it could be used to craft highly effective phishing emails written in the style of a specific individual and free of grammatical mistakes, to generate ransomware demands, or to aid in social engineering.
For more on the benefits and risks associated with using ChatGPT, listen to the podcast above or subscribe to Insurance Speak on Spotify, Apple Music or Libsyn.
Related:
ChatGPT’s cybersecurity implications: The good, the bad and the ugly