State insurance regulators found little agreement Thursday on language to address proxy discrimination in artificial intelligence.
The Innovation and Technology Task Force is the latest panel to work on the Principles for Artificial Intelligence document. The Artificial Intelligence Working Group voted unanimously to adopt the principles last month, including the phrase "AI users should proactively avoid proxy discrimination against protected classes."
The end goal is final adoption by the National Association of Insurance Commissioners' Executive Committee, laying the groundwork for national AI standards for insurance. But sensitive issues around discrimination are making consensus hard to reach.
Iowa Insurance Commissioner Doug Ommen supports substituting language provided by North Dakota that would allow different variables to be taken into account. He cited hypertension as an example: while there is a proven link between hypertension and race, science also shows other links, such as between hypertension and smoking.
"I expect there also is a correlation between race and premium," Ommen said. "At the same time, that premium is not the causation of the disparities. ... There are inequities in our society, which we need to carefully look at and examine, but at the same time, I think we need to be giving attention to the science and address those disparities where the cause is, and don't try to find a way to turn the premium rate into a causation."
Birny Birnbaum, executive director of the Center for Economic Justice, wants to keep the tougher language and called Ommen's assertions "simply and factually incorrect."
"Commissioner Ommen's example assumes that somehow you can't address proxy discrimination or systemic racism and address hypertension at the same time," Birnbaum said. "It's important to understand that avoiding proxy discrimination, or minimizing proxy discrimination, doesn't mean you can't use particular variables. It means you can use those variables to the extent that they are predictive of a particular outcome, not to the extent that they are correlated with a protected class."
The need for principles governing the use of AI is growing as more big data becomes available on everything from purchase patterns to credit scores to neighborhood crime rates. With the pandemic limiting traditional face-to-face selling and medical exams, more insurers want to tap into AI for accelerated underwriting.
Industry trade associations support the broader North Dakota language. The use of AI is an outgrowth of the industry’s commitment to bringing financial protection to more families, said Paul Graham, senior vice president for policy development for the American Council of Life Insurers.
Life insurers are ready to work with all stakeholders “to prevent underwriting decisions that would be otherwise prohibited by regulation or law,” he said. “We embrace our responsibility to create opportunities for more Americans."
“Life insurers use artificial intelligence in the underwriting process as a way to enhance the consumer buying experience. It is less invasive. It speeds up the process of issuing a policy, which is imperative today in our instant-delivery marketplace,” Graham said.
But Birnbaum challenged regulators to leave the language banning proxy discrimination alone.
"We have to be crystal clear," he said. "Opposing the current language means you oppose efforts to address systemic racism in insurance, because if you oppose this current language, then you cannot address systemic racism in insurance."
InsuranceNewsNet Senior Editor John Hilton has covered business and other beats in more than 20 years of daily journalism. John may be reached at [email protected]. Follow him on Twitter @INNJohnH.
© Entire contents copyright 2020 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.