Big Data Carries Big Potential For Insurance Discrimination, Regulators Say
The growth of analytics, artificial intelligence and Big Data capabilities has created a monster of sorts, with many legal and ethical ramifications for the insurance world.
Regulators and lawmakers have been busy trying to stay one step ahead of potential trouble with the technology. The topic was the focus of a panel discussion Tuesday, "Regulation of Big Data and Predictive Models," at the virtual Life Insurance Conference hosted by LIMRA, LOMA and LL Global.
Vincent Tsang, an actuary with the Illinois Department of Insurance, explained what concerns the working groups of the National Association of Insurance Commissioners (NAIC).
"Regulators are concerned with ... adverse impact on availability and affordability of insurance for protected classes, including race, color, creed, national origin, standard of domestic violence victims, past lawful travel or sexual orientation in any manner, as well as other protected classes," Tsang said.
More insurers are relying on big data to guide underwriting and set what is expected to be impartial, by-the-numbers pricing. But critics say it is anything but. The capabilities are only going to increase, Tsang said, as insurers expand accelerated underwriting (AUW) programs.
There are numerous examples of how big data algorithms discriminate against communities of color. For example, a 2017 study by Consumer Reports and ProPublica found disparities in auto insurance prices between minority and white neighborhoods that could not be explained by risk alone.
ProPublica and Consumer Reports examined auto insurance premiums and payouts in California, Illinois, Texas and Missouri, and found that insurers were charging premiums that were on average 30% higher in ZIP codes where most residents are minorities than in whiter neighborhoods with similar accident costs.
Courts have consistently ruled that “redlining” based on race is illegal. Some consumer advocates say insurers may not even be aware of “unintentional” discrimination when a computer algorithm is producing it.
There are a number of laws, state and federal, designed to rein in the use of Big Data, Tsang noted. The most controversial is the Fair Credit Reporting Act, simply because the data it governs is controversial.
"Some companies consider credit score as a valuable input to explain mortality risk," Tsang explained, adding that critics say it inevitably leads to "the potential confusion of association with causation.
"For example, some ethnic groups on average have higher credit score than the other ethnic groups," he said. "Can it be construed as some ethnic groups are healthier by simply relying on the average credit score? This debate is likely to continue for quite some time."
Credit data should be used sparingly, Tsang said, or primarily as supplemental data. The industry is looking to state regulators to set the guardrails for data usage in the industry, he said.
In August, the NAIC adopted “guiding principles” for use of artificial intelligence based on the Organisation for Economic Co-operation and Development’s AI principles that have been adopted by 42 countries, including the United States.
After robust discussions, regulators added a principle encouraging industry participants to take proactive steps to avoid proxy discrimination against protected classes when using AI platforms.
InsuranceNewsNet Senior Editor John Hilton has covered business and other beats in more than 20 years of daily journalism. John may be reached at [email protected]. Follow him on Twitter @INNJohnH.
© Entire contents copyright 2021 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.