A look at the risks and successes of using generative AI
Andrew Schwartz and Keith Raymond of Celent co-authored a new study that examines many of the issues associated with the use of generative AI in insurance.
The adoption and acceptance of generative AI have occurred at lightning speed. ChatGPT, which launched in November 2022, garnered more than 100 million active users within two months of its launch and now draws approximately 1.8 billion visits a month. By comparison, the social media site once known as Twitter took more than two years to reach the same level of usage.
And, as with any new technology, there are positives and negatives associated with its use. Some insurers have banned ChatGPT and other generative AI tools from their companies until they have a better understanding of the inherent risks involved.
Researchers at IBM released a report finding that ChatGPT and other large language models could be used for nefarious purposes, such as writing malicious code or providing dangerous advice, making companies more vulnerable to cyberattacks rather than protecting them. For insurers, understanding the risks associated with the use of AI, and how to price coverage to address them, is vital.
The research and advisory firm Celent recently released a new report that examines many of the issues associated with the use of generative AI and large language models. Co-authors Andrew Schwartz, an analyst with Celent, and Keith Raymond, principal analyst for the Americas at Celent, share some of their insights from the study in the latest Insurance Speak podcast.
Schwartz explained that the reason for ChatGPT’s quick adoption is its ease of use “for a lot of folks who might not have been in the artificial intelligence or technology field…you don’t really need to know programming or have some sort of special application to access it.”
Raymond shared that “there are numerous other offerings out there…that have come out since the release of ChatGPT, probably at this point hundreds.”
This type of AI expands the capabilities available to businesses, said Schwartz. “I think the capability has really proceeded to have numerous applications for areas like customer service, marketing and content creation…These models are used to automate some customer interactions, which can obviously free up humans to handle more complex tasks.”
Adoption by insurers is increasing. Schwartz said that among respondents to their survey, 10% had large language models in production and 50% expected to have something either in a test phase or in production by the end of the year.
For insights into some of the risks associated with the adoption of large language models and other AI, listen to the podcast above or subscribe to Insurance Speak on Spotify, Apple Music or Libsyn.
Related:
Examining the risks and rewards of ChatGPT for the insurance industry
Exploring the buzz around generative AI for insurers