Colorado Aims to Prevent AI-Driven Discrimination in Insurance [Government Technology]
Apr. 19—Colorado officials want to ensure that residents' social media habits, credit scores and other less traditional customer data cannot be used against them when shopping for insurance.
Officials fear that if insurers use such nontraditional data, or feed it into their algorithms and predictive models, they may come up with underwriting, pricing, marketing and claims adjudication decisions that unintentionally — and unfairly — discriminate.
"The use of nontraditional factors in extremely complicated predictive models that carriers may be using could lead to unintended consequences, namely unfairly discriminatory practices regarding protected classes,"
A set of new rules aims to stop that.
A law passed in 2021 prohibits insurers — and any big data systems they use — from unfairly discriminating based on "race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression." To show compliance, insurers must demonstrate to the division that the external data and models they rely on do not produce unfairly discriminatory results.
Now, two years later, the division is following up with more specific guidance. In February, it issued a draft version of its proposed Algorithm and Predictive Model Governance Regulation and has been gathering feedback.
"In terms of insurance, we're really the first state in the nation to build this far into it," said
The draft rule explains oversight and risk management requirements that insurers would need to comply with, including obligations around documentation, accountability and transparency. The division has been consulting with and gathering feedback from stakeholders in different insurance categories, and it will follow up with more specific guidance on how companies can test their algorithms for potential unfair discriminatory outcomes.
The division's draft rulemaking has focused first on life insurance, followed by private passenger auto insurance, with other insurance types to come.
HOW COMMON IS THIS PRACTICE?
Lapham said it's unclear how deeply nontraditional data is being used by insurers, but that officials want to learn about such practices and get ahead of them. The division surveyed 10 carriers about the kinds of governance models they have for "external consumer data and information sources" (ECDIS) as well as predictive models and algorithms. Survey results showed a range of maturity.
"Some carriers have fairly little to no governance around use of this information or around use of these AI tools. And then some are fairly well developed," Lapham said.
But officials have seen reasons to worry in general. Lapham noted that AI tools have led to discriminatory outcomes in areas like hiring and lending. Vincent Plymell, the division's assistant commissioner for communications, meanwhile pointed to examples where auto insurers' consideration of nontraditional data resulted in pricing disparities based on factors unrelated to customers' driving practices. Those have included details about credit scores, educational attainment and job titles.
Life insurers can, for example, consider details like family history, occupational information, medical information, disability and behavioral information, provided there is a "direct relationship" to the customer's "mortality, morbidity or longevity risk." They can also consider financial details that reflect "insurable interest, suitability or eligibility for coverage," per the draft rules.
The draft rule, though, is aimed at less traditional kinds of consumer data. Those include:
* Credit scores
* Social media habits
* Purchasing habits
* Homeownership
* Education
* Licensures
* Civil judgments
* Court records
* Occupation
Plymell said that with the current lack of rules, insurance carriers might also be considering other data when making decisions, including factors like voter information, motor vehicle records, biometric data, data from wearable devices, property or casualty data from affiliated carriers, third-party vendor risk scores and more.
"We recognize that insurance carriers have the ability to account for differences in risk," Lapham said. "But we also need to be sure that customers are being treated fairly, in that similarly situated people are not paying more for the coverage, are not being treated differently when it comes to claim practices and so forth."
The division has also been weighing public commentary on the draft. Its approach, meanwhile, is informed by outside guidance, including NIST's AI Risk Management Framework.
OVERSIGHT RULES
Insurers would have to take steps such as documenting their use of the algorithms and predictive models as well as establishing written policies and processes for the "design, development, testing, use, deployment and ongoing monitoring" of the data and tools. They'd need to include a variety of details, such as documenting "all decisions" made throughout the systems' life cycles, including who made each choice, why, and on what evidence.
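The draft does not prescribe a format for that documentation. As an illustrative sketch only, a lifecycle decision log of the kind it describes might be kept as a simple record structure; the field names and the example entry below are hypothetical, not drawn from the regulation.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelDecisionRecord:
    """One entry in a hypothetical lifecycle decision log for a predictive model.

    The schema is illustrative: Colorado's draft describes what must be documented
    (the decision, who made it, why, and on what evidence) but not how to store it.
    """
    model_name: str          # e.g., "life-underwriting-risk-score-v3" (made-up name)
    lifecycle_stage: str     # design, development, testing, deployment or monitoring
    decision: str            # what was decided
    decided_by: str          # the accountable person or committee
    rationale: str           # why the decision was made
    evidence: list[str] = field(default_factory=list)    # supporting reports, analyses
    decided_on: date = field(default_factory=date.today)

# Hypothetical example: recording why an external data source was dropped after testing.
decision_log = [
    ModelDecisionRecord(
        model_name="life-underwriting-risk-score-v3",
        lifecycle_stage="testing",
        decision="Excluded a third-party 'lifestyle' score from the model inputs",
        decided_by="Model governance committee",
        rationale="Outcome testing showed pricing disparities for a protected class",
        evidence=["Q3 outcome-testing report"],
    )
]
```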
Accountability measures include making senior management and boards of directors responsible for their companies' "overall strategy" on ECDIS and predictive tools using such data. Companies would also need to clearly establish who's responsible for various stages of the tools' creation and use. They'd also have to respond to consumers' questions and complaints with enough information to let them "take meaningful action" in response to an ECDIS-influenced decision that negatively affects them.
No preparations are perfect, and insurers would also need to plan for "unintended consequences."
Finally, insurers would have to periodically report to the division about their compliance with the rules and their use of the data and tools. Noncompliant insurers would face penalties, which could include sanctions, cease-and-desist orders, license suspensions or revocations and civil penalties, per the rule.
WIDER PICTURE: AI, BIG DATA AND ACCURACY
As various industries look to tap into big data and use decision-making algorithms, key questions emerge.
One of the risks of using algorithms fed by digital data gleaned from consumers is that the conclusions, and the information they're based on, may be inaccurate.
Issues could stem from the data itself, King noted in a conversation with GovTech.
Accuracy issues also could emerge from how people might change their behaviors to "game" algorithms. That might affect a tool's conclusions without actually reflecting a real change in the consumer outcomes the tool is trying to assess, Gilbert said.
For example, credit score algorithms attempt to understand consumers' future abilities to meet financial commitments. They can't actually measure the future, of course, so developers design the tools to make predictions about consumers based on their current and past circumstances and behaviors. Consumers learning how the algorithms work might then try to change some of the measured behaviors in superficial ways, to get better scores without actually changing their own financial circumstances.
"They might save money differently, spend it differently, spend it at certain times of the month — basically, for lack of a better word, 'gaming' the algorithm, so that it artificially inflates their score, depending on what their own goals are," Gilbert told GovTech.
Algorithms in the insurance market might run into similar issues, where people keep changing the proxy behaviors measured by the system, once they're aware they're being monitored.
"It does affect people's behavior [in ways] that, over the long run, may very well end up defeating the purpose of those algorithms in the first place," Gilbert said.
This can also create other concerns, beyond accuracy ones: Should algorithms pull from the vast amount of data available about consumers online, consumers may feel compelled to shift much of how they conduct their lives to get artificially better scores.
"People really live online now," Gilbert said, and so insurance algorithms examining all available digital data about a person would be examining their "entire life," not just their activities related to the specific insurance in question.
"As a result of that, we're sort of A/B testing on people's lives in ways that, I think, causes a lot of anxiety," he said.
SAFEGUARDS: TRANSPARENCY AND VETTING
Obligating companies to be transparent and detailed not only about how and when they collect online data but also how they use it to impact consumers would be one important safeguard, Gilbert said.
King also said it's important to understand how companies treat the absence of certain data — such as whether consumers would be dinged if they lack social media accounts: "What are the insurance companies trying to collect and weigh, but then, I would say, on the other hand, how are you potentially penalized if you don't have this data readily available about you?"
To ensure proper oversight, AI systems and their data use need to be explainable to government and academics — not just professional auditors in the private sector, King said. She noted that such goals often conflict with companies' desires to keep algorithms proprietary and raise questions about what kind of expertise is needed to conduct the audits.
Thoroughly vetting algorithms for fairness is challenging: "It's a major unanswered question," Gilbert said.
Examiners might need comparison information to be able to tell how — or whether — particular algorithm design choices caused unfair outcomes. That could mean comparing decisions about customers that were made with the algorithm's help against decisions made without using the tool, Gilbert said.
Plus, examiners cannot just focus on individual decisions to understand the algorithm's impacts. Instead, they should examine the tools' performance over a long period of time, to check for patterns causing disparate impacts.
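As an illustrative sketch, and not a method the division or the researchers have endorsed, that kind of aggregate review could tally algorithm-assisted decisions over a period and compare favorable-outcome rates across groups. The grouping, the outcome definition and the sample data below are hypothetical.

```python
from collections import defaultdict

def favorable_rates(decisions: list[dict]) -> dict[str, float]:
    """Rate of favorable outcomes (e.g., coverage offered at a standard price) per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        favorable[d["group"]] += d["favorable"]  # True counts as 1, False as 0
    return {g: favorable[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions: list[dict], reference_group: str) -> dict[str, float]:
    """Each group's favorable-outcome rate relative to a reference group.

    A ratio well below 1.0, sustained over time, is the kind of pattern an examiner
    might flag for a closer look. Any threshold for "too low" is a policy choice.
    """
    rates = favorable_rates(decisions)
    return {group: rate / rates[reference_group] for group, rate in rates.items()}

# Hypothetical log of algorithm-assisted decisions accumulated over a review period.
decisions = [
    {"group": "A", "favorable": True}, {"group": "A", "favorable": True},
    {"group": "A", "favorable": True}, {"group": "A", "favorable": False},
    {"group": "B", "favorable": True}, {"group": "B", "favorable": False},
    {"group": "B", "favorable": False}, {"group": "B", "favorable": False},
]
print(adverse_impact_ratios(decisions, reference_group="A"))
# {'A': 1.0, 'B': 0.333...} -- group B gets favorable outcomes at a third the reference rate
```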
Vetting algorithms isn't just a technical challenge — the topic also raises moral questions, because policymakers must decide how to define what counts as a "fair" algorithm.
There are two common ways to look at this, Gilbert said. Procedural fairness involves checking whether an algorithm's predictions prove accurate or cause disparate harms. Substantive fairness involves deciding whether the ways the tools are used — and the kinds of responses their outcomes trigger — match a community's ideals about how society should function and treat people.
For example, a policing algorithm intended to identify crime hot spots would be procedurally fair if following its recommendations about where to direct police resulted in a measurable drop in crime rates.
Considering substantive fairness, meanwhile, would require questioning whether it's desirable to send more police to the areas identified by the algorithm, especially if doing so results in some communities being more heavily patrolled than others, Gilbert said. Substantive fairness considerations would mean questioning whether that kind of policing disparity is acceptable or if responding differently to the algorithms' outputs would better suit goals for how the city should work.
Debates over AI provide a new opportunity to examine such questions, and precisely define how we want society to look, he said.
"What gives me hope, is that we are now able to build systems of such scale and sophistication that we are able to ask questions in a very rigorously defined way about what is the good life ... [or] what is a good housing market?," Gilbert said. "These are questions that were not nearly as practical 30 years ago as they are now. Because we now have tools that can enable us to precisely specify how we might want those dynamics to work, how we might want those parts of our lives, those domains to operate ... . what's happening with AI is, it's really an opportunity for us to deliberate much more deeply and seriously and collectively, about what our values are, in ways we previously were not able to."
___
(c)2023 Government Technology
Visit Government Technology at www.govtech.com
Distributed by Tribune Content Agency, LLC.