NY’s proposed AI rules seen as just the start for insurance carriers
In a likely harbinger of what’s to come from U.S. regulators, the New York Department of Financial Services recently issued proposed rules on how insurance carriers should use artificial intelligence and alternative data in underwriting and pricing.
The NYDFS circular letter said the department expects carriers to establish governance protocols for AI systems and so-called external consumer data and information sources (“ECDIS”), and to employ fairness testing of predictive models and variables before they are put into use. Currently, insurers face no such testing obligations when using AI in underwriting and pricing.
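The circular does not prescribe a single test, but one common quantitative fairness check in underwriting is the adverse impact ratio: the approval rate for a protected group divided by the approval rate for a reference group. The minimal sketch below illustrates the idea; the model outputs, protected-class flag, and 0.8 review threshold are illustrative assumptions, not anything mandated by NYDFS.

```python
# Sketch of an adverse impact ratio check: the approval rate for a
# protected group divided by the approval rate for a reference group.
# The decisions, group flag, and 0.8 review threshold below are
# illustrative assumptions, not requirements from the NYDFS circular.
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, protected: np.ndarray) -> float:
    """approved: boolean underwriting decisions; protected: boolean group membership."""
    return approved[protected].mean() / approved[~protected].mean()

# Hypothetical approve/decline outputs from a predictive underwriting model.
rng = np.random.default_rng(1)
protected = rng.random(10_000) < 0.3
approved = rng.random(10_000) < np.where(protected, 0.62, 0.75)

air = adverse_impact_ratio(approved, protected)
print(f"Adverse impact ratio: {air:.2f}")  # ratios below ~0.8 commonly trigger review
```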
The proposed rules, released late last month, would apply to all lines of insurance, including property & casualty, medical, life, auto, and home coverage.
“The department expects that insurers’ use of emerging technologies such as artificial intelligence will be conducted in a manner that complies with all applicable federal and state laws, rules, and regulations,” the letter said.
The department conceded that the emerging technologies could benefit both insurers and consumers by simplifying and expediting procedures, and could result in more accurate underwriting and pricing. However, the letter also said such technology may reflect systemic biases, and its use could reinforce and exacerbate inequality.
“This raises significant concerns about the potential for unfair adverse effects or discriminatory decision-making,” the department circular said. “ECDIS may also have variable accuracy and reliability and may come from entities that are not subject to regulatory oversight and consumer protections.”
The letter said the self-learning behavior of AI systems increases the risks of “inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes that may disproportionately affect vulnerable communities and individuals or otherwise undermine the insurance marketplace in New York.”
NY following EU example
New York is following the European Union, which is implementing rules governing the use of AI; Colorado has also proposed similar regulations for insurers.
The moves come on the heels of some major industry missteps in the use of AI. Last summer, health insurance giant Cigna Corp. was accused in a federal lawsuit of using a computer algorithm to automatically reject hundreds of thousands of patient claims without examining them individually, as California law requires.
The lawsuit, which is seeking class-action status, said Cigna Corp. and Cigna Health and Life Insurance Co. rejected more than 300,000 payment claims in just two months last year.
The company allegedly used an algorithm called PXDX, or “procedure-to-diagnosis,” to identify whether claims met certain requirements, spending an average of just 1.2 seconds on each review, according to the lawsuit.
“Relying on the PXDX system, Cigna’s doctors instantly reject claims on medical grounds without ever opening patient files, leaving thousands of patients effectively without coverage and with unexpected bills,” according to the lawsuit.
A similar suit was filed in December alleging Humana used an AI model called nHPredict to wrongfully deny medically necessary care to elderly and disabled patients covered under Medicare Advantage. Still another suit charges that UnitedHealthcare also used nHPredict to reject claims, despite knowing that roughly 90% of the tool’s coverage denials were faulty, overriding determinations by patients’ physicians that the expenses were medically necessary.
Experts said the biggest concern about AI in insurance is the use of third-party data and tools from unregulated vendors.
“New York is essentially saying the companies have to be responsible for the information and data sets they buy from third-party sources,” said Phillip Dawson, head of AI policy at Armilla, which has developed an AI verification platform. “They need to audit and test those systems to make sure they are valid and compliant with regulatory and actuarial standards.”
Just the start of proposed AI rules
Dawson predicts the New York circular is just the beginning of a wave of proposed AI insurance regulation in many states.
"What's most striking about the circular is the requirements for assessments of third party AI models and data sets (ECDIS), the granularity of the insurer's obligations in this regard: from the implementation of AI governance frameworks and detailed quantitative testing obligations, including specific metrics, the circular spells out some very clear expectations for insurers' use of AI, and that these expectations extend to third party AI tools. This is consistent with Colorado's regulatory approach to AI in insurance, as well as comments from the Federal Trade Commission that enterprises cannot punt AI risk assessment obligations to the vendors they buy from."
There’s little doubt that AI and similar technologies can revolutionize the insurance industry, and may already be doing so, bringing major advances in speed and cost-cutting efficiency to claims processing, underwriting, fraud detection, and customer service. Many insurers are using virtual assistants such as chatbots to try to improve the customer experience. Chatbots can give basic advice, check billing information, and handle common inquiries and transactions.
In addition, claims management can be augmented with machine learning techniques at different stages of the claim-handling process, according to the National Association of Insurance Commissioners. Machine learning models can quickly assess the severity of damage and predict repair costs from historical data, sensor readings, and images.
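As a rough illustration of the kind of repair-cost model the NAIC describes, the sketch below fits a gradient-boosted regressor to synthetic claim features. The feature names and cost formula are hypothetical stand-ins for real historical, sensor, and image-derived inputs.

```python
# Hedged sketch of repair-cost prediction from claim features. All
# features and the synthetic cost formula are hypothetical stand-ins
# for real historical/sensor/image-derived data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.uniform(0, 15, n),      # vehicle_age (years)
    rng.uniform(0, 10, n),      # damage_severity (score from image analysis)
    rng.normal(1.0, 0.1, n),    # parts_price_index
])
# Synthetic ground-truth cost: base + age wear + severity scaled by parts prices.
y = 500 + 120 * X[:, 0] + 900 * X[:, 1] * X[:, 2] + rng.normal(0, 300, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"MAE on held-out claims: ${mean_absolute_error(y_test, model.predict(X_test)):,.0f}")
```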
But there is also potential for things to go awry, and the NAIC said it will continue to monitor the use of AI in the insurance industry and consider developing more regulatory guidance as needed.
“AI is inherently neither good nor bad, neither right nor wrong – it’s the human interaction, interpretation, and use of AI that renders it one way or another,” said Rose Hall, senior vice president and head of innovation at AXA XL, a leading provider of global commercial P&C insurance based in Connecticut.
Claims, underwriting are AI insurance targets
A new report from Reuters Events and Clearwater Analytics found that insurance companies’ AI investments are largely intended for use within claims or underwriting processes, given AI’s ability to increase the efficiency of related tasks. Claims, in fact, was the most-mentioned function or division focused on implementing generative AI today, followed by customer service.
“AI is the second-most popular technology being applied to underwriting, behind only Digital Portals, and above the average application figure across our pool of technologies of 26%,” the report said. “We can therefore conclude that organizations looking to improve the efficiency of their underwriting process are more likely to invest in AI than other technologies on our list.”
“AI in insurance is not a future concept but a present-day necessity,” said Souvik Das, chief technology officer for Clearwater Analytics. “As we stand on the precipice of a new era in technology, one thing is certain: AI is here.”
But regulators and others worry about the potential risks with AI, such as data breaches, security vulnerabilities, and algorithmic bias.
Bloomberg Research forecasts the generative AI market will grow to $1.3 trillion over the next 10 years.
“The insurance market's understanding of generative AI-related risk is in a nascent stage,” said a recent report on AI in insurance from Aon PLC, a British-American professional services and management consulting firm. “This developing form of AI will impact many lines of insurance including Technology Errors and Omissions/Cyber, Professional Liability, Media Liability, and Employment Practices Liability, among others, depending on the AI’s use case.”
Governance frameworks recommended
Aon recommends that organizations get ahead of the regulatory curve and work with their technology experts, attorneys, and consultants to set policies and establish a governance framework that aligns with regulatory requirements and industry standards. Some components of that framework may include:
- Routine audits of AI models to ensure that algorithms or data sets do not propagate unwanted bias (a minimal audit sketch follows this list).
- Ensuring understanding of copyright ownership of AI-generated materials.
- Insertion of human control points to validate that the governance model used in the AI’s development aligns with legal and regulatory frameworks.
- Conducting a legal, claims, and insurance review, and considering alternative risk transfer mechanisms in the event the insurance market begins to avoid these risks.
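A routine bias audit of the kind Aon’s first bullet describes might start by flagging input variables that act as statistical proxies for a protected attribute. The sketch below is one minimal, assumption-laden version: the column names, synthetic data, and 0.3 correlation threshold are all illustrative, not a regulatory standard.

```python
# Illustrative proxy-variable audit: flag data set columns whose correlation
# with a protected attribute exceeds a chosen threshold. Column names,
# synthetic data, and the 0.3 cutoff are assumptions for demonstration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5_000
protected = rng.integers(0, 2, n)  # hypothetical protected-class flag (0/1)

data = pd.DataFrame({
    "credit_score": rng.normal(680, 50, n) - 40 * protected,  # acts as a proxy
    "zip_density":  rng.normal(0, 1, n) + 0.8 * protected,    # acts as a proxy
    "vehicle_age":  rng.uniform(0, 15, n),                    # independent
})

for col in data.columns:
    r = np.corrcoef(data[col], protected)[0, 1]
    status = "REVIEW" if abs(r) > 0.3 else "ok"
    print(f"{col:>13}: corr with protected class = {r:+.2f} [{status}]")
```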
Doug Bailey is a journalist and freelance writer who lives outside of Boston. He can be reached at [email protected].
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.