Citing concerns over “systemic biases” creeping into insurance, New York State regulators are taking the first steps toward regulating the use of artificial intelligence.
Superintendent of Financial Services Adrienne A. Harris issued for public comment a proposed circular letter addressing the use of AI by licensed insurers. In the Jan. 17 letter, regulators focus on underwriting and pricing, warning of the risk of “unfair adverse” effects stemming from the use of AI and external consumer data and information sources (ECDIS).
“Technological advances that allow for greater efficiency in underwriting and pricing should never come at the expense of consumer protection,” Harris said. “DFS has a responsibility to ensure that the use of AI in insurance will be conducted in a way that does not replicate or expand existing systemic biases that have historically led to unlawful or unfair discrimination.”
The purpose of the letter is to "identify DFS’s expectations that all insurers authorized to write insurance in New York State ... develop and manage their use of ECDIS, artificial intelligence systems, and other predictive models in underwriting and pricing insurance policies and annuity contracts," the letter states.
DFS is soliciting comments from the industry and the public on the proposed circular letter until March 17.
The Life Insurance Council of New York sent InsuranceNewsNet this statement Wednesday:
"LICONY members look forward to reviewing and providing input on the Department’s guidance regarding the use of artificial intelligence (A/I) in underwriting and pricing decisions. The life industry in New York supports the Department’s goal of ensuring that A/I is not used to produce outcomes that otherwise would be prohibited by law while also recognizing the efficiencies and consumer benefits that can be gained through the responsible use of A/I.”
Proxy discrimination concerns
Consumer advocates have concerns about AI and proxy discrimination, in which one or more seemingly neutral features effectively stand in for membership in a protected class. For example, an algorithm might use someone's music interests as a proxy for race.
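A minimal sketch of the mechanism, with all names and numbers invented for illustration (nothing here comes from the DFS letter): a pricing rule that never reads a protected attribute can still produce sharply different outcomes by group when one of its inputs is strongly correlated with group membership.

```python
import random

random.seed(0)

# Hypothetical population: 'group' is the protected attribute; the
# facially neutral proxy feature is strongly correlated with it.
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    listens_to_x = random.random() < (0.8 if group == "A" else 0.2)
    population.append((group, listens_to_x))

def surcharge(listens_to_x):
    """A facially neutral rule that looks only at the proxy feature."""
    return listens_to_x

def surcharge_rate(target_group):
    rows = [x for g, x in population if g == target_group]
    return sum(surcharge(x) for x in rows) / len(rows)

# The rule never reads 'group', yet outcomes diverge sharply by group.
print(f"group A: {surcharge_rate('A'):.2f}  group B: {surcharge_rate('B'):.2f}")
```

Testing for exactly this kind of divergence in outcomes, rather than merely checking that protected attributes are absent from the model's inputs, is the sort of analysis the letter contemplates.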
New York regulators share those concerns.
"ECDIS may reflect systemic biases and its use can reinforce and exacerbate inequality," the letter reads. "The self-learning behavior of AIS increases the risks of inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes that may disproportionately affect vulnerable communities and individuals or otherwise undermine the insurance marketplace in New York."
The letter makes 46 points total. Here are a few that stand out:
1. DFS "may audit and examine an insurer’s use of ECDIS and AIS, including within the scope of regular or targeted examinations" for compliance with state law.
2. Insurers are "encouraged to use multiple statistical metrics in evaluating data and model outputs to ensure a comprehensive understanding and assessment." In addition to quantitative analysis, insurers should include a qualitative assessment of unfair or unlawful discrimination.
"This includes being able to explain, at all times, how the insurer’s AIS operates and to articulate the intuitive logical relationship between ECDIS and other model variables with an insured or potential insured individual’s risk," the letter states.
3. Policies and procedures should include training for relevant employees on the "responsible and lawful use" of ECDIS and AIS.
4. Insurers must "be prepared to respond to consumer complaints and inquiries about the use of AIS and ECDIS" by implementing procedures to receive and address such complaints. Insurers must maintain records of any complaints regarding AIS or ECDIS.
5. Insurers "retain responsibility for understanding any tools, ECDIS, or AIS used in underwriting and pricing" that were developed or deployed by third-party vendors, and should adopt written procedures governing their use of those vendors and tools.
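The letter's call for "multiple statistical metrics" (point 2) does not prescribe a specific list. As one hedged illustration, a common screening metric in fairness testing is the adverse impact ratio; the 0.8 rule of thumb comes from U.S. employment guidelines, not from the DFS letter, and the outcome counts below are invented.

```python
def adverse_impact_ratio(favorable_a, total_a, favorable_b, total_b):
    """Ratio of favorable-outcome rates between two groups.

    Values well below 1.0 (a common rule of thumb is 0.8) flag a
    potential disparate impact worth investigating further; no single
    threshold is endorsed by the DFS letter.
    """
    rate_a = favorable_a / total_a  # favorable-outcome rate, group A
    rate_b = favorable_b / total_b  # favorable-outcome rate, group B
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical underwriting outcomes:
# group A: 450 of 500 approved; group B: 300 of 500 approved.
print(adverse_impact_ratio(450, 500, 300, 500))  # 0.6 / 0.9 ≈ 0.667
```

A metric like this would sit alongside, not replace, the qualitative assessment of unfair or unlawful discrimination the letter also calls for.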
The National Association of Insurance Commissioners is surveying each segment of insurance to quantify exactly who is using AI, who wants to use it and what tasks they want it to do.
An initial survey of auto insurers found that 88% of insurers currently use, plan to use or plan to explore using artificial intelligence or machine learning as part of their everyday operations. Among home insurers, 70% said the same.
In August 2020, the NAIC adopted guiding principles on artificial intelligence after lengthy discussions. Regulators added language encouraging insurers to take proactive steps to avoid proxy discrimination against protected classes when using AI platforms.
Consumer advocates are frustrated that little action has taken place in the three-plus years since the NAIC laid out the principles. The NY draft bulletin is the guidance the NAIC "should have produced," said Birny Birnbaum, executive director of the Center for Economic Justice.
The NY draft bulletin "actually acknowledges structural racism and unintentional bias resulting from racially-biased outcomes baked into historical data – and directs insurers to test for it," he explained. "This is the logical implementation of insurer statements regarding systemic racism following the murder of George Floyd and the principles for artificial intelligence adopted by the NAIC."
InsuranceNewsNet Senior Editor John Hilton covered business and other beats in more than 20 years of daily journalism. John may be reached at [email protected]. Follow him on Twitter @INNJohnH.