Insurers’ AI races ahead of rules
When it comes to insurance and the use of artificial intelligence, the main questions are “how much?” and “how fast?”
There is, however, one more vitally important question: What rules will insurers be following?
Insurers are going to use AI up and down the delivery funnel, and they enthusiastically admit as much in industry surveys. Much of that usage is already happening.
Insurance was one of the first industries to use big data, said Scott Shapiro, principal, advisory, KPMG U.S., speaking during the recent KPMG Insurance Industry Conference.
The ability to take large sets of data, build even larger bodies of data around them and then query them in natural language was used by the industry years before ChatGPT captured the world’s interest.
The National Association of Insurance Commissioners is surveying each segment of insurance to quantify exactly who is using AI, who wants to use it and what tasks they want it to do.
An initial survey of auto insurers found that 88% currently use, plan to use or plan to explore using artificial intelligence or machine learning as part of their everyday operations. Seventy percent of home insurers gave the same answer.
That leaves the question of the rules. Technology evolves fast; regulators work at a more leisurely pace. Some states are moving faster to establish rules governing AI and big data use, but the NAIC, so far, remains in a survey-and-study stage.
An NAIC working group published its Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers in July. Comments were accepted through Sept. 5.
The bulletin is not a model law or a regulation. It is intended to “guide insurers to employ AI consistent with existing market conduct, corporate governance, and unfair and deceptive trade practice laws,” the law firm Locke Lord explained.
‘Guiding principles’
In August 2020, the NAIC adopted guiding principles on artificial intelligence after lengthy discussions. Regulators added language encouraging insurers to take proactive steps to avoid proxy discrimination against protected classes when using AI platforms.
There are numerous examples of how big data algorithms discriminate against communities of color. For example, a 2017 study by Consumer Reports and ProPublica found disparities in auto insurance prices between minority and white neighborhoods that could not be explained by risk alone.
Consumer advocates were encouraged by the principles and by the commitment to fight proxy discrimination in AI use. But when the model bulletin appeared in July, that language was missing, said Birny Birnbaum, executive director of the Center for Economic Justice, in comments sent to the NAIC.
The draft bulletin lacks teeth in general, he added, in one of 28 comment letters the NAIC received.
“In place of guidance on how to achieve the principles and how to ensure compliance with existing laws, the draft tells insurers what they already know: AI applications must comply with the law and insurers should have oversight over their AI applications,” Birnbaum wrote. “The draft bulletin fails to provide essential definitions — it doesn’t even define proxy discrimination. It not only fails to address structural racism in insurance. It incorrectly tells insurers that testing for protected class bias may not be feasible.”
A principles-based approach to racism in insurance is not necessary or desirable, Birnbaum said. The model bulletin provides “no guidance regarding the actual outcomes a regulator expects and how the insurer should demonstrate those outcomes,” he added.
“AI system governance should start with the outcomes desired by regulators through the AI principles, and testing to measure and assess the outcomes for fair and unfair discrimination should be the foundation of the governance,” Birnbaum explained. “You can’t evaluate something unless you measure it, and the draft bulletin offers no metrics or methods for measuring outcomes.”
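For a sense of what such a metric could look like in practice, one commonly used screening measure in fairness testing is the adverse impact ratio, which compares each group's rate of favorable outcomes to a reference group's. The minimal sketch below is purely illustrative; the decision data, group labels and 0.8 review threshold are hypothetical and are not drawn from the NAIC draft or from Birnbaum's comments.

```python
# Illustrative sketch of one outcome metric: the adverse impact ratio.
# All data, group labels and thresholds here are hypothetical.
from collections import defaultdict

def adverse_impact_ratio(outcomes, reference_group):
    """Ratio of each group's favorable-outcome rate to the reference group's.

    `outcomes` is an iterable of (group, favorable) pairs.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, is_favorable in outcomes:
        totals[group] += 1
        if is_favorable:
            favorable[group] += 1
    rates = {g: favorable[g] / totals[g] for g in totals}
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}

# Hypothetical underwriting decisions: (applicant group, approved?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True),
]

for group, ratio in adverse_impact_ratio(decisions, reference_group="A").items():
    # The 0.8 cutoff borrows the "four-fifths rule" from U.S. employment
    # law as a rough screening threshold; it is not an insurance standard.
    flag = "  <- below 0.8, warrants review" if ratio < 0.8 else ""
    print(f"group {group}: impact ratio {ratio:.2f}{flag}")
```

On the hypothetical data above, group B's approval rate is two-thirds of group A's, which is the kind of measurable disparity Birnbaum argues regulators should require insurers to test for and report.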
Industry comments focused on the cost of oversight, particularly with third-party providers.
The draft model requires that third parties meet the standards expected of insurers. Allowing insurers to include a provision in their third-party contracts giving them the right to audit those vendors could be helpful, the National Association of Mutual Insurance Companies said in a comment letter, although “many vendors may be resistant to such provisions.”
“Eventually all third parties will be using AI, so this effectively requires the industry to audit every third party even in cases when due diligence does not uncover an issue or there is low risk,” wrote Andrew Pauley, public policy counsel for NAMIC. “This would be extremely costly and resource intensive.”
The NAIC had not announced the next step for the model bulletin when this issue went to press.
Colorado adopts own rules
Colorado became the first state to go ahead with its own AI rules for life insurers. The state’s Division of Insurance recently adopted rules prohibiting life insurers from using data and technology in any way that could lead to unfair discrimination with respect to race, gender and other protected characteristics.
Taking effect Nov. 14, the regulation applies to all life insurers that do business in Colorado. It covers a wide range of technology life insurers use in underwriting, including algorithms, predictive models and online data collection.
Public comments softened the final rule somewhat, the law firm Mayer Brown said in its analysis.
“The final regulation has a more focused scope than what was initially proposed, with a clear focus on unfair discrimination based on race,” the law firm wrote.
All life insurers authorized to do business in Colorado are required to submit a progress report regarding compliance with the regulation on June 1, 2024, and must submit a report attesting that they are in full compliance on Dec. 1, 2024, and annually thereafter.
While Colorado took the lead on AI regulation, it is not alone. New Jersey, Rhode Island and New York are considering bills that would regulate the use of AI by insurers. At least two dozen states have passed rules governing commercial AI use.
Still, the prospect of new rules and regulations is not expected to slow the steady growth of AI use by the insurance industry.
Insurance executives should look at all aspects of the business to determine where AI can make processes smoother, said Jeanne Johnson, advisory principal with KPMG.
“I think we start with going across the value chain — from marketing to service to claims — and look at where things get stuck because you need more information and you need to move the needle,” she said. “For example, you can use AI to enhance risk profiling, and that could enhance underwriting. You can give better insights to your agents so you can get better qualifying leads.”
Managing Editor Susan Rupe contributed to this report.
InsuranceNewsNet Senior Editor John Hilton has covered business and other beats in more than 20 years of daily journalism. John may be reached at [email protected]. Follow him on Twitter @INNJohnH.