‘Explainable AI’ in insurance: 5 best practices & 4 major challenges
As the growing use of AI in insurance adds urgency to the need for explainability and transparency, experts are recommending "explainable AI" best practices to follow and pointing to key challenges to look out for.
“Models are going to vary, but explainability is generated through being able to demonstrate the datasets that the AI is being trained on, the correlation drivers that are creating the different outcomes. In the simplest terms, it’s quite literally showing the math [and] simplifying the explanations and models to where they can generate a level of transparency and predictability,” Peter McMurtrie, partner and insurance practice lead, West Monroe, said.
“The fundamental reason a business leader should be aware of this concept is to get value from the investment that they’re making into AI, GenAI, into any of the analytics that they’re doing,” Mike Fitzgerald, insurance industry advisor, SAS, added.
Explainable AI best practices
To this end, McMurtrie and Fitzgerald recommended insurers follow these five explainable AI best practices:
- Have an AI inventory
- Use the right tools
- Incorporate human oversight
- Test systems constantly
- Don’t get too sophisticated
However, they cautioned that there are challenges to be considered as well. Those challenges include:
- Balancing limitations with sophistication
- Building trust
- Training staff
- Getting buy-in from executives
AI inventory
Fitzgerald said insurers must start by building an inventory of the AI tools in use, then manage and carefully monitor that inventory for changes, output quality and more.
“You have to have the ability to go and look at the output of these models and identify the change in the output, which is out of the one or two standard deviations from the norm to be able to say, ‘We really need to look at this,’” Fitzgerald said.
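In practice, the check Fitzgerald describes can be as simple as flagging any model output that falls outside a set number of standard deviations from a historical baseline. Below is a minimal Python sketch of that idea; the baseline scores and threshold are made-up illustrations, not any particular vendor's tooling.

```python
import statistics

# Hypothetical inventory entry: a monitored model keeps baseline
# statistics computed from a reference window of past output scores.
baseline_scores = [0.42, 0.45, 0.40, 0.44, 0.43, 0.41, 0.46, 0.44]
baseline_mean = statistics.mean(baseline_scores)
baseline_std = statistics.stdev(baseline_scores)

def flag_for_review(new_scores, n_std=2.0):
    """Return the outputs lying outside n_std standard deviations of
    the baseline, i.e. the cases that 'we really need to look at'."""
    lower = baseline_mean - n_std * baseline_std
    upper = baseline_mean + n_std * baseline_std
    return [s for s in new_scores if s < lower or s > upper]

# Flagged scores would be routed to a human reviewer, not auto-handled.
print(flag_for_review([0.43, 0.44, 0.71]))  # prints [0.71]
```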
He noted that automated processes can be implemented to identify what should be checked, flagging factors such as unfair discrimination, bias, or a shift in conditions or output.
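One common form of automated check compares outcomes across groups. The sketch below, again a hypothetical illustration with made-up data rather than a specific product's method, flags a model for human review when approval rates diverge across groups by more than a chosen threshold.

```python
def disparity_check(decisions, threshold=0.10):
    """decisions: list of (group_label, approved) tuples.
    Flags the model when approval rates across groups differ by more
    than `threshold`, a crude proxy for an unfair-discrimination screen."""
    counts = {}
    for group, approved in decisions:
        seen, hits = counts.get(group, (0, 0))
        counts[group] = (seen + 1, hits + int(approved))
    rates = {g: hits / seen for g, (seen, hits) in counts.items()}
    gap = max(rates.values()) - min(rates.values())
    return gap > threshold, rates

# Made-up decisions: group A is approved twice as often as group B,
# so the ~0.33 gap exceeds the 0.10 threshold and the model is flagged.
needs_review, rates = disparity_check([
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
])
print(needs_review)  # prints True
```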
Management, oversight and testing
Specialized, purpose-built model management or model risk management tools can support inventory management, Fitzgerald said. However, he underscored that humans must be kept in the loop.
“You don’t hand all of this off to the technology; you change the way that you’re managing your analytic models through your processes by using a mix of both human as well as technology,” Fitzgerald said.
He added that testing for explainability and other desirable qualities should be a core condition from the get-go. This, too, can be automated, he said.
“Explainability needs to be a test condition when they’re testing a model, not just to see if it works, if it uses the data and all of that, but explainability, transparency, all of those need to be tests that are conditions which are put in at the time of testing,” Fitzgerald said.
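Concretely, making explainability a test condition might look like a unit test that fails when a model cannot produce per-feature attributions at all. The sketch below uses scikit-learn's permutation importance as one illustrative way to do this; it is an assumed approach, not Fitzgerald's specific tooling.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

def test_model_is_explainable():
    """Treat explainability as a test condition: the model must yield
    a per-feature attribution, not just accurate predictions."""
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    model = LogisticRegression().fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    # Fail the build if any input feature lacks an importance score,
    # i.e. if the model's behavior cannot be attributed to its inputs.
    assert len(result.importances_mean) == X.shape[1]
```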
Balancing sophistication
McMurtrie explained that getting the right balance of sophistication in an AI model is crucial. He noted that while an overly sophisticated system is more difficult to explain, simpler AI systems can limit the scope of what can be accomplished.
In his view, “if you want to maintain explainability, then you’re giving up something on complexity” but “if you want to leverage complexity, then that makes it harder to have it in an explainable model.”
“The biggest challenge is, I think, it limits the power of what the tool is going to do for you. There are many uses of AI that don’t require sophisticated, complex decisioning. But it’s an efficiency play in terms of how it’s being leveraged,” McMurtrie said.
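The tradeoff McMurtrie describes is easy to see side by side. In the sketch below, run on synthetic data, a linear model exposes its "correlation drivers" directly through its coefficients, while a more complex ensemble may score better but offers no comparable read-out without extra tooling. The numbers are illustrative, not benchmarks.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent model: each coefficient is a directly readable driver
# of the outcome, so the "math" can literally be shown.
simple = LogisticRegression().fit(X_train, y_train)
print("linear accuracy:", simple.score(X_test, y_test))
print("coefficients:", simple.coef_.round(2))

# More powerful ensemble: often scores better, but its internal
# decisioning is opaque and needs extra tooling to explain.
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("ensemble accuracy:", complex_model.score(X_test, y_test))
```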
Training and trust
McMurtrie also suggested insurers not shy away from leveraging transparent AI as a way to build trust with customers and even increase their openness to using AI.
At the same time, he emphasized the need for insurers to train their staff to the point where they are comfortable with using AI.
“It’s equally as important for companies to be ensuring that their associate base, their employee base, understands how the company is using AI. Not saying they have to understand the models in detail, but having that awareness and confidence in how it’s being used so that they can project that same confidence when they’re interacting with customers,” McMurtrie said.
Buy-in
Getting buy-in from executives may be a different story, however, as Fitzgerald said investing in explainable AI and appropriate monitoring tools is “a difficult case to make because the return on investment is not always a straight line.”
He explained that the benefit of explainable AI is “avoiding trouble” more than anything else, and that can be difficult to value.
“I can tell you from my business analyst days, walking into a CFO who’s running a business unit for a profit, and you start talking to him or her about the soft costs involved because we will avoid fines or we will avoid [low] market conduct grades, they don’t want to hear that. And so, it's difficult to make the case,” Fitzgerald said.
SAS is a software development company founded in 1976 and based out of Cary, NC. With over 400 offices around the world, SAS provides business analytics and AI solutions to clients.
West Monroe Partners is a business and technology consultancy based in Chicago, Illinois. Founded in 2002, the firm has more than 2,000 employees in 10 offices around the world.