Big Data Carries Big Potential For Insurance Discrimination, Regulators Say
There are numerous examples of how big data algorithms discriminate against communities of color. For example, a 2018 study by Consumer Reports and ProPublica found disparities in auto insurance prices between minority and white neighborhoods that could not be explained by risk alone.
The study examined auto insurance premiums and payouts in California, Illinois, Texas and Missouri, and found that insurers charged premiums that were on average 30% higher in ZIP codes where most residents are minorities than in whiter neighborhoods with similar accident costs.
Courts have consistently ruled that “redlining” based on race is illegal. Yet some consumer advocates say insurers might not even be aware of many instances of “unintentional” discrimination if a computer algorithm is producing it.
There are a number of state and federal laws designed to rein in Big Data, Tsang noted. The most controversial is the Fair Credit Reporting Act, simply because the data it governs is controversial.
"Some companies consider credit score as a valuable input to explain mortality risk," Tsang explained, adding that critics say it inevitably leads to "the potential confusion of association with causation."
"For example, some ethnic groups on average have higher credit score than the other ethnic groups," he said. "Can it be construed as some ethnic groups are healthier by simply relying on the average credit score? This debate is likely to continue for quite some time."
Credit data should be used sparingly, Tsang said, or primarily as supplemental data. The industry is looking to state regulators to set the guardrails for data usage, he said.
In August, the NAIC adopted “guiding principles” for use of artificial intelligence based on the Organisation for Economic Co-operation and Development’s AI principles that have been adopted by 42 countries, including the United States.
After robust discussions, regulators added a principle encouraging industry participants to take proactive steps to avoid proxy discrimination against protected classes when using AI platforms.
InsuranceNewsNet Senior Editor John Hilton has covered business and other beats in more than 20 years of daily journalism. John may be reached at [email protected]. Follow him on Twitter @INNJohnH.
© Entire contents copyright 2021 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.