Experts warn insurance industry to buckle up for AI regulation, litigation
With litigation related to the use of artificial intelligence expected to increase, industry experts are urging American insurance companies to prepare before that reality takes hold.
“Given our litigious society, it’s just a fact of life that, as an insurance company, if you’re going to be using AI, you’re going to be, unfortunately, subjecting yourself to at least some amount of regulatory litigation and reputational risk,” said Scott Kosnoff, partner at Faegre Drinker Biddle & Reath LLP.
In Kosnoff’s opinion, it’s “probably unlikely” that insurance companies will be able to completely avoid this risk. However, he believes they can mitigate it while waiting on standardized AI regulations, which may be a long time coming.
He said that by being mindful of regulatory expectations and trends; creating and maintaining a risk management framework; testing for algorithmic discrimination; and “having a story” that demonstrates a commitment to ethical AI usage, insurers can put themselves in the best position to handle negative outcomes if and when they arise.
‘Blood in the water’
During a webinar hosted by the Washington Legal Foundation, Kosnoff noted that several high-profile cases related to AI usage in insurance have already been filed — and more are expected to come.
“All those cases are being vigorously defended, and we’re just going to have to wait and see how that plays out in the courts,” Kosnoff said. “But I think it’s fair to say that the plaintiffs’ bar smells blood in the water, and they see the use of AI by insurers as sort of a fertile field.”
While he said much of the AI-related litigation filed so far focuses on inaccuracies, the general expectation is that future suits will center on algorithmic discrimination in particular. This refers to instances where automated systems are believed to contribute to “unjustified different treatment or impacts” that disfavor people based on actual or perceived classifications such as gender, race and ethnicity.
“I would focus on the use of AI that’s going to have the greatest impact on consumers, because that’s where the plaintiffs’ lawyers are going to look,” Kosnoff suggested.
This could include using AI to price products, review claims, deny health insurance coverage for care that doctors have recommended, and detect fraud, as well as any other areas where American insurance firms have incorporated AI into their operations.
Mitigation is the solution
Despite the risks, Kosnoff said that avoiding AI altogether is likely not an option, given the competitive pressure on insurers and others to invest heavily in new technology.
“But, if you’re going to be using AI, I think it’s incumbent on you to do so smartly and to think not only about what could go well, but what could go wrong and what you can do to try to avoid some of those harms,” he said.
To this end, he suggested five steps insurance companies can take to mitigate the risk of AI-related litigation:
- Pay attention to regulatory expectations
- Develop a risk management framework
- Periodically update that framework
- Test for algorithmic discrimination
- Have a “story”
“At a minimum, I think if you’re going to be using AI, you need to stay on top of the evolving regulatory expectations coming from your regulators,” Kosnoff said. “You need to pay attention to the litigation that has already been filed, and you need to pay attention to what’s getting reported in the media, because when you worry about reputational harm — and I worry as much about that as I do litigation or regulatory action — paying attention to what gets reported on the front pages is important.”
Like other industry experts, such as those at law firm Locke Lord, Kosnoff strongly recommended that insurance companies develop their own governance frameworks for AI usage to guard against potential AI litigation. They can look to the National Institute of Standards and Technology’s AI Risk Management Framework, the National Association of Insurance Commissioners’ Model Bulletin and regulations being developed in Colorado and New York for guidance, he said. They should also regularly update their frameworks as new developments emerge and consider testing for algorithmic discrimination.
“I think the best that we can do right now is to have a good story to tell, and by that, I mean a story that credibly demonstrates to our regulators, to a judge in the worst-case scenario, to the New York Times — if they come calling, and they probably will — that our organization appreciates the concerns associated with AI, we’re taking those concerns seriously, and we’re acting with reasonable care to try to identify, manage and mitigate the risk of negative outcomes,” he said.
Kosnoff noted that no compliance program is perfect and there is no one-size-fits-all solution; insurance companies must devise an approach that works for their own organization, based on its structure, AI usage, risk appetite, constituents, location and other specifics.
Faegre Drinker Biddle & Reath LLP is a full-service international law firm. Since its founding in 1849, it has become one of the 100 largest law firms in the United States.
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.
Rayne Morgan is a journalist, copywriter, and editor with over 10 years' combined experience in digital content and print media. You can reach her at [email protected].