Consumer Reports Supports New York Department of Financial Services' Efforts to Protect Consumers From Algorithmic Discrimination in Insurance
Consumer Reports applauded the New York Department of Financial Services' proposed circular letter aimed at protecting consumers from algorithmic discrimination in insurance underwriting and pricing.
"Financial institutions rely on artificial intelligence and machine learning for everything from marketing, pricing and underwriting to claims management, fraud monitoring, and customer service," said
Chien continued, "As insurers continue to use AI to make decisions about consumers, we need to make sure these systems are designed to minimize bias so that everyone is treated fairly. We commend the Department's proactive efforts to ensure transparency, accountability and fairness in insurance underwriting and pricing and encourage it to take some additional steps to protect consumers."
The risk of algorithmic discrimination is well established: it can occur when an automated decision system repeatedly produces unfair or inaccurate outcomes for a particular group. While this risk exists with traditional models, it is exacerbated by machine learning techniques that base automated decisions on vast amounts of data processed through often opaque models.
Biased results can arise from a number of sources, including the underlying data and the model's design. Unrepresentative, incorrect, or incomplete training data, as well as biased data collection methods, can lead to poor outcomes in algorithmic decision-making for certain groups. Data may also be tainted by past discriminatory practices. Biases can likewise be embedded into models through the design process, for example through the improper use of protected characteristics, whether directly or via proxies.
CR's letter cites a number of real-world examples to illustrate these risks. A fraud monitoring algorithm may systematically flag consumers on the basis of race or proxies for race, as illustrated in a recent lawsuit (https://www.nytimes.com/2022/12/14/business/state-farm-racial-bias-lawsuit.html) against State Farm.
The Department's proposed circular makes clear that insurers should not use external consumer data and information sources or artificial intelligence systems unless they can establish, through a comprehensive assessment, that their underwriting or pricing guidelines are not unfairly or unlawfully discriminatory. If the insurer's comprehensive assessment finds a disproportionate adverse effect, it must seek out a "less discriminatory alternative" variable or methodology that reasonably meets its legitimate business needs.
In its letter, CR recommended that the Department require insurers to proactively search for and implement less discriminatory alternatives on an integrated, ongoing basis rather than as part of a one-off assessment after an AI model is developed. CR also called on the Department to provide further guidance to insurers on how to conduct such a search and how to select among alternatives so as to reduce disparate impact as much as possible.
CR highlighted its support for the Department's proposed requirement that insurers be more transparent with consumers about their reasoning and the data they relied upon when policies are refused or canceled. CR urged the Department to ensure that such notices are provided in plain language with specific steps consumers can take to achieve a better result, and recommended that consumers be given the right to request human review of automated decisions.
The Department's draft circular currently applies to AI used for insurance pricing and underwriting and explicitly excludes algorithms and machine learning used for marketing, claims settlement, and fraud monitoring. CR urged the Department to take further action to address algorithmic discrimination that may arise across all stages of the insurance lifecycle.
* * *