Inside Nationwide's 'bionic business model'
Executive Vice President and CTO Jim Fowler considers generative AI to be pivotal to the company’s future.
Since joining Nationwide, Chief Technology Officer Jim Fowler has worked to foster what he calls a “bionic business model” in which machines work together with humans to propel the business into the future.
Here’s the current state of insurance technology as Fowler sees it:
- Insurance organizations have moved on from talking about “digital transformation” and are now engaged in the work of “digital innovation.”
- People in the industry no longer have the luxury of specializing in either insurance or technology; all insurance professionals must now be conversant in both.
- It follows that these digitally savvy insurance professionals are embracing generative AI for its power to enhance efficiency.
Fowler recently sat down with PropertyCasualty360.com to talk about Nationwide’s approach to digitalization and how the carrier plans to manage (and grow with) generative AI.
PropertyCasualty360: We don’t really talk about digital transformation in insurance anymore; we talk about digital innovation. How did we get to this point?
Jim Fowler: I think companies that chose not to deal with their legacy technology issues are going to struggle. They’re going to have a hard time actually making it in a digital world where all transactions are digital, and where customers want a digital interaction. I think in the insurance industry, you are seeing a consolidation of agents, brokers and financial services professionals into a smaller number of larger groupings. And one of the things that’s going to be true about every one of those intermediaries is they’re going to demand a frictionless transaction that requires digital technology.
PC360: That brings us to this point in time where the topic of the day is generative AI. What is Nationwide’s take on this tool, and what can customers, employees and partners expect to see from Nationwide as far as its application of AI?
Fowler: Artificial intelligence isn’t new to the industry. Nationwide, like most of the industry, has been working with artificial intelligence models that predict outcomes for the past 10-plus years. In fact, we established an enterprise analytics organization over eight years ago that is responsible for the development and management of those models.
What’s changed is two things. One is the coming of age of large language models: models that can scan large amounts of data and not just make predictions but create content based on that large base of knowledge. The other is natural language processing: the ability to ask a question in natural language, or even show a picture of something, and have the large language model create new content from it.
I’ve been doing this for 30 years. I’ve not seen any technology come up and be adopted as quickly as this. [ChatGPT] reached a million users in five days. That’s faster than any other technology platform we’ve seen to date. And so Nationwide is really thinking about where this is going to provide a 10x advantage, both for our own operations and, more importantly, for serving our customers in new ways.
…Our associates are already embracing the technology. We’re hearing from them that the tool is helping them be more efficient. Some examples we’ve seen: Associates are creating first drafts of memos, turning team call transcripts into shareable notes, and condensing documents.
Within my software development team, they’re producing first drafts of code using large language models.
And so, inside Nationwide, we’re already seeing [staff] adopt this technology and use it to improve their own efficiency.
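To make the transcript-to-notes example concrete, below is a minimal sketch of how a team might ask a large language model to condense a call transcript into shareable notes. It is purely illustrative, not Nationwide’s actual tooling: the OpenAI Python client, the gpt-4o-mini model name and the prompt wording are all assumptions, and any output would still be a first draft for a person to review.

```python
# Illustrative sketch only: not Nationwide's tooling. Assumes the OpenAI
# Python SDK (openai >= 1.0) is installed and OPENAI_API_KEY is set in the
# environment; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def transcript_to_notes(transcript: str) -> str:
    """Condense a team call transcript into short, shareable meeting notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your organization approves
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize this call transcript into concise bullet-point notes: "
                    "decisions made, action items with owners, and open questions."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = "Alice: Let's ship the quote API Friday. Bob: I'll own the rollback plan."
    print(transcript_to_notes(sample))
```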
PC360: Some people are anxious about AI, about its power and its reach, and about what we don’t yet know about it. What words do you have for people who are feeling uncomfortable with how quickly this has been adopted?
Fowler: At Nationwide, this is driven from the top, from the CEO, his staff and my peers. We decided that this is a technology that is going to have long-term ramifications for the company, and we’re leading the initiative. We have a steering committee that we lead and that drives the decisions. That committee has two branches. The Blue Team is where we see all of the great examples of how the technology can be used to better serve our members, to make our associates more productive, to take clerical work off the table so that our associates can bring what we believe they bring best to a process, which is empathy, knowledge and judgment.
But the second fork that we have is what we call The Red Team. The Red Team is thinking about the bad things that can come from generative AI: How are cybercriminals going to use it to try to break into companies electronically? How can it be used in an unethical way to drive behavior? How do we make sure that we don’t get bias built into decisions that are driven by the technology?
So what I would tell people is, companies that are approaching this in the right way are thinking not just about the benefits of generative AI, but they’re thinking about the risks that come with generative AI and making sure they’re addressing those evenly.