AI usage outpacing regulation, NAIC panel told
The use of AI has skyrocketed in just a few short years, to the point where it is outpacing regulation, NAIC’s Big Data and Artificial Intelligence (H) Working Group was told.
The panel met during the National Association of Insurance Commissioners’ fall meeting to hear how AI is being used for health utilization management as well as underwriting and claims management.
Lauren Seno of NORC at the University of Chicago told the group that health insurers are increasingly using AI in health utilization management, particularly prior authorization. But, she said, these tools lack the safeguards needed to protect consumers. Some states have begun to regulate the development and use of AI in health insurance but have not been able to keep up with AI’s proliferation.
Health plans see AI’s potential in several aspects of care management, Seno said, including reducing the administrative burden, freeing health care professionals to perform higher-level tasks and speeding approvals.
But as AI tools are developed and deployed to make coverage decisions, concerns arise. In the absence of comprehensive regulations for the use of AI in health insurance, Seno said, stakeholders have identified several potential risks to care delivery and health outcomes, including:
- Tools trained on biased datasets.
- Algorithms developed with misaligned incentives.
- Machine learning systems developing their own processes.
Regulators must ensure that health insurance companies place humans with the appropriate clinical training and authority at the center of decisions that impact patient care, said Lucy Culp of the Leukemia and Lymphoma Society.
Consumers have concerns about AI
AI is streamlining the traditional insurance underwriting process, but consumers have concerns about its use, said Zhiyu Chan of the University of Illinois.
Consumers are concerned about algorithmic bias, transparency issues and privacy, he said. These concerns lead to the following questions:
- Should consumers have the right to dispute unfair decisions?
- Should insurers reveal what information was fed into their AI system?
- Should insurers inform consumers about how the AI system is trained?
In home and auto claims management, AI can review and process claims by analyzing documentation, images and past claims history, Chan said.
But fraud actors may gradually come to understand how an AI system judges a claim, he warned. By observing which types of claims are easily approved and which are frequently rejected, for example, fraud actors can infer the operating rules and judgment criteria of the AI model. Once these patterns are identified, some fraud actors may intentionally adjust their claims to conform to the model’s “preferred” patterns, increasing the probability that a claim is approved or allowing them to exaggerate the extent of the damage without arousing suspicion.
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.
Susan Rupe is managing editor for InsuranceNewsNet. She formerly served as communications director for an insurance agents' association and was an award-winning newspaper reporter and editor. Contact her at [email protected].