Senate Banking Subcommittee Issues Testimony From National Fair Housing Alliance President Rice (Part 1 of 2) - Insurance News | InsuranceNewsNet

February 24, 2024 Newswires
Targeted News Service

WASHINGTON, Feb. 23 -- The Senate Banking, Housing and Urban Affairs Subcommittee on Housing, Transportation and Community Development issued the following testimony by Lisa Rice, president and CEO of the National Fair Housing Alliance, involving a hearing on Jan. 31, 2024, entitled "Artificial Intelligence and Housing: Exploring Promise and Peril":

* * *

Chairwoman Smith, Ranking Member Lummis, and other distinguished members of the Senate Subcommittee on Housing, Transportation, and Community Development, thank you for the opportunity to testify during the Subcommittee's hearing on Artificial Intelligence and Housing: Exploring Promise and Peril. Artificial Intelligence holds great promise for improving systems, democratizing opportunities, lowering costs, and increasing productivity. Yet it also holds great dangers for perpetuating bias, spreading misinformation, excluding people from necessary services, and generating other harms. It is critical that Congress understand these dichotomies and establish sound rules and guardrails that can help ensure the U.S. remains the world leader in innovation and technological advancement and that the nation protects its residents against the perils AI can present. NFHA welcomes the Subcommittee's commitment to advancing on both these fronts.

The National Fair Housing Alliance(R) (NFHA(TM)) is the country's only national civil rights organization dedicated solely to eliminating all forms of housing and lending discrimination and ensuring equal opportunities for all people. As the trade association for over 170 fair housing and justice-centered organizations throughout the U.S. and its territories, NFHA works to dismantle longstanding barriers to equity and build resilient, inclusive, well-resourced communities where everyone can thrive.

* * *

NFHA's Evolution in Addressing Algorithmic and AI Bias & Description of NFHA's Responsible AI Program

NFHA has addressed harms associated with AI and automated systems since its inception in 1988. We first concentrated our efforts on prohibiting or restricting the use of discriminatory automated systems, such as credit and insurance scoring, underwriting, and pricing models, in housing and financial services. Early settlements with entities like Prudential, State Farm, Nationwide, and Allstate addressed these discriminatory systems. Several years ago, while litigating a major case against then-Facebook, it became even clearer that technology, including AI, was the new civil and human rights frontier and that, as a civil rights organization, we had to be a leader in this sector. Thus, we established our Responsible AI division with an initial focus on Tech Equity. The division comprises researchers and engineers committed to civil and human rights principles and is headed by one of the world's premier AI research scientists, Dr. Michael Akinwumi. NFHA's Responsible AI division has five workstreams founded on the following technical and policy research pillars:

* Tech Equity: We focus on developing and advocating for methodologies that ensure automated systems offer equitable access to housing opportunities.

* Data Privacy: We strive to test and promote technologies that balance consumer privacy with the need for data access to eliminate bias in automated systems.

* Explainability: We advocate for consumers' right to explanations for automated decisions and work to test and promote methodologies that clarify the reasoning or design behind automated systems.

* Reliability: We focus on testing and advancing techniques to ensure only safe and valid automated systems are used in housing applications.

* Human-Centered Alternative Systems: We work on advancing technical and policy solutions to determine when human-centered alternatives should take precedence over automated systems in housing decisions, particularly when data quality is poor, infrastructure is inadequate, or there is a lack of social awareness about harms of automated systems.

Since launching our Responsible AI work, NFHA has contributed to, advocated for, and created technical and policy solutions that advance the responsible use of technologies in housing, including risk management frameworks and a state-of-the-art framework for auditing algorithmic systems.1 Our work has also informed policies such as the White House's AI Bill of Rights2 and the White House Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.3

The Genesis of Artificial Intelligence

The term "artificial intelligence" was first used by Professor John McCarthy, a mathematics professor at Dartmouth, who convened a summer research workshop held at Dartmouth College in 1956.4 Professor McCarthy and three other colleagues penned "A Proposal For The Dartmouth Summer Research Project On Artificial Intelligence," which stated that the purpose of the convening was to "proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."5 Just two years after this pivotal conference, Fair, Isaac and Company developed its Credit Application Scoring Algorithm, a rule-based decision management system that simulated human functions and intelligence.6 Since then, developers have created many different complex rule-based, statistical, and computational models, including expert systems, in an attempt to abstract and replicate human intelligence, with the objectives of streamlining and standardizing decisioning, improving efficiencies, and saving costs for a host of transactions and activities in the housing and financial services sector.

* * *

1 Michael Akinwumi, Lisa Rice, and Snigdha Sharma, Purpose, Process, and Monitoring: A New Framework for Auditing Algorithmic Bias in Housing and Lending, National Fair Housing Alliance (2022), https://nationalfairhousing.org/wp-content/uploads/2022/02/PPM_Framework_02_17_2022.pdf.

2 See White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (Oct. 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

3 See White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

4 See Dartmouth College, Artificial Intelligence Coined at Dartmouth, https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth.

5 J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon, A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, Dartmouth College (Aug. 31, 1955), http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf.

6 See Fair, Isaac and Company Incorporated, FICO Investor Relations, Annual Report (Dec. 1998), https://investors.fico.com/static-files/461a5d42-97be-45d8-a837-38cc25a0c267.

* * *

The Definition of AI

AI focuses on creating machines capable of intelligent behavior. It involves the computational understanding and creation of artifacts that exhibit intelligent behavior.7 The scope of AI is broad, encompassing various aspects such as machine intelligence, cognitive functions, or intelligent agents. It also includes systems that mimic human behavior. To guarantee existing automated systems in the housing and lending sectors are not overlooked in the pursuit of implementing AI that is secure, reliable, and free from discrimination, NFHA characterizes AI as a computerized mechanism capable of performing one or more of the following functions:

* Discerning patterns in data;

* Conducting exploratory, predictive, prescriptive, or diagnostic analyses based on data, logical reasoning, or established rules; or

* Generating patterns utilizing data, logic, or rules.

NFHA views a system as an amalgamation of algorithm, model, tech infrastructure, and human elements. Fundamentally, for NFHA, AI represents a sociotechnical system, integrating technical capabilities with societal and human components. This practical definition is consistent with how federal agencies characterized AI in a 2023 joint statement on enforcement efforts against algorithmic bias and discrimination.8

* * *

Use of AI in Housing and Financial Services and Potential Challenges

Tenant Screening Selection Systems

To evaluate applicants, landlords sometimes purchase tenant screening services and tools from companies that offer products that "adjudicate" or "score" a rental applicant based on AI or other automated tools. Tenant screening selection systems have been a focal point of federal regulators and housing advocates for several years. The Consumer Financial Protection Bureau's (CFPB) November 2022 Tenant Background Checks Market Report and Consumer Snapshot: Tenant Background Checks effectively laid out the landscape of the tenant screening market.9 In May 2023, NFHA submitted extensive comments in response to the Federal Trade Commission's (FTC) and CFPB's Request for Information on Tenant Screening.10 In those comments, NFHA cited research supporting its assertion that tenant screening practices that screen for criminal history, credit history, and eviction history have a clear and outsized negative impact on historically underserved populations, including people of color, immigrants, public housing voucher recipients, and renters with disabilities. Additionally, we noted that landlords are increasingly relying on algorithmic models that may perpetuate bias and make it harder for consumers to understand why they are being denied a housing opportunity.11

* * *

7 S. Shapiro, Artificial intelligence (AI) (Jan. 2003), https://dl.acm.org/doi/abs/10.5555/1074100.1074138.

8 Consumer Financial Protection Bureau, U.S. Department of Justice, Equal Employment Opportunity Commission, and Federal Trade Commission, Joint Statement on Enforcement Efforts against Discrimination and Bias in Automated Systems (2023) https://www.eeoc.gov/joint-statement-enforcement-efforts-against-discrimination-and-bias-automated-systems.

9 CFPB, Tenant Background Checks Market Report and Consumer Snapshot: Tenant Background Checks (Nov. 2022), https://files.consumerfinance.gov/f/documents/cfpb_tenant-background-checks-market_report_2022-11.pdf.

10 NFHA, Comment to the FTC regarding the Request for Information on Tenant Screening (May 30, 2023), https://www.regulations.gov/comment/FTC-2023-0024-0585.

* * *

As it relates to tenant screening systems, fair housing concerns are magnified because of potentially erroneous data that can be used to develop the models and because of the lack of transparency regarding tenant screening algorithms' predictiveness, design, development, testing process, and data inputs. There are also significant concerns about the quality of risk management efforts pertaining to these systems. These concerns must be addressed, as the systems will likely impact, in one form or another, nearly all of the 100 million+ renters in the U.S. The tenant screening industry receives billions of dollars in revenue each year despite being plagued by numerous complaints of privacy violations and bias related to these systems.

* * *

Dynamic Rental Pricing Systems

In the rental housing market, dynamic pricing systems based on AI have become a widespread feature. Rent prices can change weekly, daily, or more frequently, based on any number of factors that are hidden from consumers, making it profoundly difficult for prospective tenants to understand why they are being charged a certain rate. Generally, the factors that go into establishing rent pricing are kept from public view. However, ProPublica has reported that the company that produces the leading rental pricing software (RealPage) uses its clients' leasing data in pricing formation, which is believed to effectively cause rents to increase.12 Following this revelation, the U.S. Government Accountability Office reported that rental rates had increased 24% in the last three years,13 and the U.S. Department of Justice (DOJ) filed a Statement of Interest in the antitrust litigation against RealPage.14 Moreover, dynamic rental pricing powered by AI can be a barrier to housing for rental voucher recipients, as rent prices can fluctuate above HUD fair market rent amounts. This can negatively impact low-wealth groups, who are disproportionately single, female-headed families with children and people with disabilities,15 as well as people who live in rural communities.16 We need to fully understand the role dynamic pricing systems play in rental housing affordability, particularly when they affect members of protected classes under the Fair Housing Act.
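The voucher barrier described above can be sketched in a few lines. This is an illustration only: the FMR figure, the payment-standard multiplier (housing authorities typically set a standard within a band around HUD's Fair Market Rent), and the daily rents are all invented for the example.

```python
# Hedged illustration: a Housing Choice Voucher generally cannot cover rents
# above the payment standard a housing authority sets around HUD's Fair
# Market Rent (FMR). When a dynamic pricing system pushes the asking rent
# above that line on a given day, the unit is effectively off-limits to
# voucher holders that day. All numbers below are hypothetical.

HUD_FMR = 1_800          # hypothetical FMR for a two-bedroom in this metro
PAYMENT_STANDARD = 1.10  # assumed multiplier within the typical band

def voucher_accessible(asking_rent: float) -> bool:
    """True if the rent falls at or under the assumed payment standard."""
    return asking_rent <= HUD_FMR * PAYMENT_STANDARD

# A week of algorithmically set asking rents for the same unit.
daily_dynamic_rents = [1_850, 1_990, 2_040, 1_920, 2_110]
accessible_days = sum(voucher_accessible(r) for r in daily_dynamic_rents)
print(accessible_days, "of", len(daily_dynamic_rents), "days accessible")
```

The point of the sketch is that a unit can move in and out of voucher eligibility day to day, even though the consumer never sees why.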

* * *

11 Valerie Schneider, Locked Out by Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice, 52 Colum. Hum. Rts. L. Rev. 251, 279 (2020), https://hrlr.law.columbia.edu/hrlr/locked-out-by-big-data-how-big-data-algorithms-and-machine-learning-may-undermine-housing-justice/.

12 Heather Vogell, Rent Going Up? One Company's Algorithm Could Be Why, ProPublica (Oct. 15, 2022), https://www.propublica.org/article/yieldstar-rent-increase-realpage-rent.

13 U.S. Government Accountability Office, The Affordable Housing Crisis Grows While Efforts to Increase Supply Fall Short (Oct. 2023), https://www.gao.gov/blog/affordable-housing-crisis-grows-while-efforts-increase-supply-fall-short.

14 In re RealPage, Inc., Rental Software Antitrust Litigation (No. II), Case No. 3:23-MD-3071, DOJ Statement of Interest (Nov. 15, 2023), https://www.justice.gov/d9/2023-11/418053.pdf.

15 See Claudia D. Solari et al., Housing Insecurity in the District of Columbia, Urban Institute (Nov. 16, 2023), https://www.urban.org/research/publication/housing-insecurity-district-columbia; Sharon Cornelissen, The Pandemic Aggravated Racial Inequalities in Housing Insecurities: What Can It Teach Us about Housing Amidst Crisis?, Harvard Joint Center for Housing Studies (July 12, 2023), https://www.jchs.harvard.edu/blog/pandemic-aggravated-racial-inequalities-housing-insecurity-what-can-it-teach-us-about-housing.

16 See Irina Ivanova, Inflation is Hurting Rural Americans More Than City Folks - Here's Why, CBS MoneyWatch (Dec. 2, 2021), https://www.cbsnews.com/news/inflation-rural-households-non-college-grads-hardest/.

* * *

Credit Scoring Systems

Credit scoring systems are algorithmic models that attempt to predict a borrower's risk and how well that person is likely to repay their debt obligations. These systems typically generate a numerical score used to help players in the financial services system determine the creditworthiness of a consumer. They are often used as part of a lender's decisions on underwriting and pricing.

Concerns about the potential for credit scoring models to perpetuate bias against certain groups have been raised over the decades. For the most part, the data used to build credit scoring models is generated from data housed at the three main credit reporting agencies (CRAs): Equifax, TransUnion, and Experian. Yet the information reported to the CRAs, particularly historical data, can often reflect bias within the credit and housing markets. In other words, when a person is discriminated against in their efforts to access the credit markets, the negative information generated from those discriminatory transactions will be reflected in the data reported to the CRAs, and that tainted data can harm consumers. As Federal Reserve Vice Chair of Supervision Michael Barr stated, "Artificial Intelligence...relies on the data that is out there in the world and the data...is flawed. Some of it is just wrong. Some of it is deeply biased...Information we have on the Internet is imperfect...if you train a Machine Learning device, if you train a Large Language Model on imperfect data, you're going to get imperfect results."17 Moreover, because of the U.S. dual credit market,18 mainstream lenders and banks are hyper-concentrated in predominantly White communities. Conversely, non-traditional lenders, like subprime lenders, payday lenders, and check cashers, are concentrated in communities of color. This dynamic is not a function of economics. In fact, banks are closing their branches at a higher rate in affluent, high-income Black neighborhoods than they are in low-income non-Black communities.19 This means borrowers of color, who disproportionately access credit with non-traditional lenders that often do not report positive financial data to the CRAs, will be negatively impacted because the information that demonstrates their ability to repay their financial obligations is not included in the dataset used to build credit scoring models.
It also means that people of color are disproportionately credit invisible or unscoreable. Roughly one-third of Black and Latino people do not have credit scores because they disproportionately access credit outside of the financial mainstream.20

* * *

17 See Federal Reserve Board of Governors Vice Chair Michael Barr, Setting the Foundation for Effective Governance and Oversight: A Conversation with U.S. Regulators, Responsible AI Symposium (January 19, 2024), https://www.youtube.com/watch?v=HbM_zD0esDo

18 For more information about the U.S. dual credit market, see the National Fair Housing Alliance's webpage on Access to Credit, available at https://nationalfairhousing.org/issue/access-to-credit/.

19 Zach Fox, Zain Tariq, Liz Thomas, Ciaralou Palicpic, Bank Branch Closures Take Greatest Toll on Majority-Black Areas, Standard and Poor Global (July 25, 2019), https://www.spglobal.com/marketintelligence/en/news-insights/latest-news-headlines/bank-branch-closures-take-greatest-toll-on-majority-black-areas-52872925#:~:text=Since%202010%2C%20the%20branch%20footprint,disparity%20in%20net%20closure%20rates.

20 CFPB, Who Are The Credit Invisibles?, (Dec. 2016), https://files.consumerfinance.gov/f/documents/201612_cfpb_credit_invisible_policy_report.pdf.

* * *

According to the Federal Reserve, credit scoring models often include the following considerations:

* Number and type of credit accounts

* Timely payment of bills

* Amount/percentage of available credit

* Collection actions

* Amount of outstanding debt

* Age of accounts

Each of the above components presents systemic challenges for underserved consumers, like those living in urban and rural areas, as those communities can often be credit deserts.21 Additionally, consumers impacted by historical discrimination are negatively impacted by these factors because they have less access to mainstream credit. For example, people of color were disproportionately steered to predatory subprime loans that carried abusive terms and conditions. As a result, these consumers disproportionately experienced collection actions, not because they were not creditworthy, but because they were targeted by discriminatory practices.22 There are racial barriers to economic opportunities, and these barriers are more prominent for people of color, particularly Black people, Native Americans, and Latinos.23 These barriers are further magnified by the U.S.'s discriminatory and broken credit scoring system,24 which is based on noisy data that underrepresents people of color.25
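To make the mechanics concrete, the sketch below scores two consumers on the factor categories the Federal Reserve lists. The weights, point values, and 300-850 scale are invented for illustration; real scoring models are proprietary statistical systems, not simple rules like this. The sketch only shows how the listed factors interact: a thin file and a single collection action (for instance, one stemming from a predatory loan) depress the score even when every payment was made on time.

```python
# Illustrative only: a toy rule-based score over the Fed's factor categories
# (account mix, payment history, utilization, collections, outstanding debt,
# account age). All weights are hypothetical inventions for this sketch.

def toy_credit_score(accounts: int, on_time_rate: float, utilization: float,
                     collections: int, debt: float, avg_account_age_yrs: float) -> int:
    score = 300  # floor of the common 300-850 range
    score += min(accounts, 10) * 10              # thin files earn few points here
    score += int(on_time_rate * 250)             # payment history dominates
    score += int((1.0 - min(utilization, 1.0)) * 100)
    score -= collections * 60                    # collection actions penalized heavily
    score -= int(min(debt / 10_000, 5) * 10)
    score += int(min(avg_account_age_yrs, 10) * 4)
    return max(300, min(850, score))

# Two consumers with identical perfect payment behavior and identical debt:
# one has a thin, young file plus one collection; the other a thick, old file.
thin_file = toy_credit_score(2, 1.0, 0.3, 1, 4_000, 2)
thick_file = toy_credit_score(8, 1.0, 0.3, 0, 4_000, 10)
print(thin_file, thick_file)  # the thin file scores well below the thick one
```

Under these invented weights the thin-file consumer lands roughly 150 points lower than the thick-file consumer, despite the same repayment behavior, which is the structural pattern the paragraph describes.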

* * *

Insurance Scoring, Underwriting, Rating, and Claims Systems

Insurance scoring systems are algorithmic models built using actuarial data designed to predict the likelihood of a consumer experiencing a risk-related event such as filing a homeowners insurance claim or filing an auto claim. Insurance scoring systems can be built using data from the CRAs; however, model developers also rely on proprietary company data and information purchased from data providers that reflect consumers' claims experiences, weather-related data, and other information. Unlike credit scoring systems, insurance scoring models are not typically designed to determine whether a consumer will pay their insurance premium, but rather, the consumer's risk profile related to an event that would present a monetary exposure for the insurance company. The score can be used to help underwrite business and/or determine the price a consumer should pay for insurance.

* * *

21 Trulia and NFHA, 50 Years After the Fair Housing Act - Inequality Lingers (April 19, 2018), https://www.trulia.com/research/50-years-fair-housing/.

22 Lisa Rice and Deidre Swesnik, Discriminatory Effects of Credit Scoring on Communities of Color, Suffolk University Press (June 2012), https://nationalfairhousing.org/wp-content/uploads/2021/12/NFHA-credit-scoring-paper-for-Suffolk-NCLC-symposium-submitted-to-Suffolk-Law.pdf.

23 Will Douglas Heaven, Bias Isn't the Only Problem with Credit Scores--and No, AI Can't Help (June 17, 2021), https://www.technologyreview.com/2021/06/17/1026519/racial-bias-noisy-data-credit-scores-mortgage-loans-fairness-machine-learning/?truid=&utm_source=weekend_reads&utm_medium=email&utm_campaign=weekend_reads.unpaid.engagement&utm_content=06.26.21.subs&mc_cid=aad1663503&mc_eid=eead1c58a0

24 Megan Leonhardt, Democrats and Republicans in Congress Agree: The System That Determines Credit Scores Is "Broken" (Feb. 27, 2019) https://www.cnbc.com/2019/02/27/american-consumer-credit-rating-system-is-broken.html.

25 Laura Blattner and Scott Nelson, How Costly Is Noise? Data and Disparities in Consumer Credit (May 5, 2021) https://arxiv.org/pdf/2105.07554.pdf.

* * *

Insurance scoring systems, like other AI systems, can perpetuate bias in myriad ways.26 NFHA has challenged discriminatory provisions in insurance scoring systems, most specifically in a matter brought against the Prudential Insurance Company.27 NFHA's investigation, research, and expert witness analysis revealed that the company's insurance scoring system presented a disproportionate discriminatory impact on Black consumers in the price they paid for homeowners insurance. The differences in premiums between Black and White insureds were not explainable, based on NFHA's analysis, by appropriate risk factors. Instead, we alleged the model the company utilized was contributing to disparate outcomes. Moreover, our expert was able to devise a way to yield a less discriminatory outcome than the one generated by the model used by Prudential.
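The "less discriminatory alternative" idea can be sketched in code. This is not NFHA's expert methodology, which the testimony does not detail; it is a minimal illustration of the general technique: compare candidate models on both accuracy and a disparity metric, then prefer the candidate that narrows the disparity without materially losing predictive performance. The candidate numbers, the 2-point accuracy tolerance, and the use of an adverse-impact-style ratio are all assumptions of the sketch.

```python
# Hypothetical less-discriminatory-alternative search. Each candidate model is
# summarized as (name, accuracy, favorable-outcome rate for the advantaged
# group, favorable-outcome rate for the disadvantaged group). All figures
# are invented for illustration.

candidates = [
    ("baseline_model", 0.91, 0.72, 0.48),
    ("alternative_1",  0.90, 0.70, 0.61),
    ("alternative_2",  0.86, 0.69, 0.66),
]

def adverse_impact_ratio(rate_disadvantaged: float, rate_advantaged: float) -> float:
    """Ratio of the disadvantaged group's favorable rate to the advantaged group's.
    Values near 1.0 mean parity; lower values mean greater disparity."""
    return rate_disadvantaged / rate_advantaged

# Keep candidates within an assumed 2 accuracy points of the baseline,
# then pick the one with the least disparity.
baseline_accuracy = candidates[0][1]
viable = [c for c in candidates if c[1] >= baseline_accuracy - 0.02]
best = max(viable, key=lambda c: adverse_impact_ratio(c[3], c[2]))
print(best[0], round(adverse_impact_ratio(best[3], best[2]), 2))
```

In this invented example, alternative_1 gives up one point of accuracy but moves the disparity ratio from roughly 0.67 to roughly 0.87, which is the kind of trade-off a less discriminatory alternative analysis surfaces.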

* * *

Additionally, the Casualty Actuarial Society (CAS) acknowledged that algorithmic bias can manifest in systems used in the insurance sector, including underwriting, pricing, and claims models.28 The CAS issued reports highlighting examples of discrimination in the insurance market over the decades and various analyses of whether certain variables used to assess risk or affix insurance rates presented unfair discrimination. Factors such as zip code, address, educational level, credit scores, and occupation have raised serious concerns about their propensity for manifesting discrimination. CAS' research pointed out that "fully or semi-automated systems...are...inherently capable of introducing unfairness into the process and thus have direct consequences to individuals affected by these models. Bias is all around us, and it can creep into the decision-making paradigm in subtle ways, whether it is the subjectivity of human judgement, prejudice, historical inequities baked into the data, or faulty algorithms."29

Automated Underwriting Systems and Risk-Based Pricing Systems

Automated underwriting systems and risk-based pricing systems manifest and perpetuate bias as well. These systems rely on and are built using data contained in the CRAs. The data captured by CRAs is under-representative as it is missing critical information, like rental housing payment data, that can accurately reflect a borrower's willingness and ability to pay financial obligations. Data captured by the CRAs also includes information tainted by bias against underserved groups. Unfortunately, redlining and housing discrimination are still everyday occurrences30 and when consumers experience discrimination, that bias is reflected in the data captured by the CRAs.

* * *

26 AI systems can manifest discrimination by using biased or non-representative data sets, and by other means. For a detailed discussion about the many ways AI can perpetuate discrimination, see Testimony of Lisa Rice, Hearing on Equitable Algorithms: How Human-Centered AI Can Address Systemic Racism and Racial Justice in Housing and Financial Services before the U.S. House Financial Services Task Force on Artificial Intelligence (May 7, 2021), https://nationalfairhousing.org/wp-content/uploads/2022/01/Lisa-Rice-House-Testimony-on-AI-5-7-21.pdf

27 See National Fair Housing Alli. v. Prudential Ins. Co., 208 F. Supp. 2d 46 (D.D.C. 2002), https://casetext.com/case/national-fair-housing-alli-v-prudential-ins-co.

28 Ronda Lee, AI Can Perpetuate Racial Bias in Insurance Underwriting, Yahoo!Money (Nov. 1, 2022).

29 Roosevelt Mosley, FCAS, and Radost Wenman, FCAS, CAS Research Paper Series on Race and Insurance Pricing: Methods for Quantifying Discriminatory Effects on Protected Classes in Insurance, Casualty Actuarial Society (2022), https://www.casact.org/sites/default/files/2022-03/Research-Paper_Methods-for-Quantifying-Discriminatory-Effects.pdf?utm_source=Landing&utm_medium=Website&utm_campaign=RIP+Series.

* * *

Data is not blind, nor is it harmless. It can be dangerous and toxic, particularly when it manifests the discrimination inherent in our systems. For example, researchers at the University of California, Berkeley found that fintech lenders that rely on algorithms to generate decisions on loan pricing discriminate against borrowers of color because their systems "have not removed discrimination but may have shifted the mode."31 The study revealed that Black and Latino borrowers were overcharged by $765 million per year. That is, Black and Latino borrowers were disproportionately charged rates higher than commensurate with their level of risk because of biased risk-based pricing systems.

* * *

Automated Valuation Models

The use of Automated Valuation Models (AVMs) for some portion of the home valuation process has been proliferating, especially as a potential remedy to multiple complaints of consumers of color having to "whitewash" their home and remove all indications of their race and ethnicity to receive a fair value.32 By statute, an AVM is defined as a "computerized model used by mortgage originators and secondary market issuers to determine the collateral worth of a mortgage secured by a consumer's principal dwelling."33 An AVM may use AI or some other automated system to serve various purposes. In some cases, a secondary market issuer or lender may use the AVM in place of a human appraiser; in some cases, the human appraiser may develop the opinion of value using an AVM; and in still other cases, a secondary market issuer or lender may use an AVM as a check on the human appraiser. The AVM may be faster, cheaper, more consistent, and less prone to bias, but AVMs pose at least two problems.

First, the AVM still relies on the traditional sales comparison approach in which the home valuation is based on recent sales of comparable homes in a comparable neighborhood. Because of America's long history of redlining and segregation, the AVM is likely using historical data that undervalues homes in formerly redlined areas.34 For example, research has shown that in 2021, homes in White neighborhoods were appraised at values nearly 250% higher than similar homes in similar Black neighborhoods and nearly 278% higher than similar homes in similar Latino neighborhoods.35 These disparities then generate data points used by the AVM, which can perpetuate the disparity in future transactions.
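The feedback loop described above is visible even in a minimal version of the sales-comparison logic an AVM automates: value a subject home from the median price per square foot of recent comparable sales nearby. The function and all sale prices are invented for illustration; production AVMs are far more elaborate, but they consume the same comparable-sales data, so historically depressed comps propagate the same way.

```python
# Minimal sketch of a sales-comparison AVM: estimate the subject home's value
# from the median price per square foot of nearby comparable sales.
# Data is invented; the point is how comp prices drive the estimate.

from statistics import median

def avm_estimate(subject_sqft: float, comps: list[tuple[float, float]]) -> float:
    """comps: (sale_price, sqft) pairs for recent nearby sales."""
    price_per_sqft = median(price / sqft for price, sqft in comps)
    return round(subject_sqft * price_per_sqft, 2)

# The same physical 1,500 sq ft house, valued against comps from two
# neighborhoods whose sales histories differ only in price level.
comps_a = [(450_000, 1500), (480_000, 1600), (465_000, 1550)]  # never redlined
comps_b = [(180_000, 1500), (192_000, 1600), (186_000, 1550)]  # formerly redlined
print(avm_estimate(1500, comps_a), avm_estimate(1500, comps_b))
```

Each new estimate then becomes a data point for future comparisons, which is the mechanism by which a historical disparity perpetuates itself.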

* * *

30 See, Recent Accomplishments of the Housing and Civil Enforcement Section, U.S. Department of Justice (Oct. 5, 2023), https://www.justice.gov/crt/recent-accomplishments-housing-and-civil-enforcement-section; What Modern-Day Housing Discrimination Looks Like: A Conversation with the National Fair Housing Alliance (Feb. 4, 2019), https://www.zillow.com/research/modern-housing-discrimination-22898/; Fair Housing Solutions: Overcoming Real Estate Sales Discrimination, National Fair Housing Alliance (Dec. 2019).

31 Robert P. Bartlett, Adair Morse Richard H. Stanton, and Nancy E. Wallace, Consumer Lending Discrimination in the FinTech Era, UC Berkeley Public Law Research Paper (Sept. 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3063448.

32 See, e.g., Julian Glover and Mark Nichols, Our America: Lowballed, ABC (2022), https://abc7news.com/feature/our-america-lowball-home-appraisal-racial-bias-discrimination/12325606/.

33 12 U.S.C. Sec. 3354(d).

34 See, NFHA and National Consumer Law Center, Comment to FHFA, CFPB, Federal Reserve, FDIC, OCC, and NCUA regarding Quality Control Standards for Automated Valuation Models (Aug. 21, 2023), https://www.fdic.gov/resources/regulations/federal-register-publications/2023/2023-quality-control-standards-for-automated-valuation-models-3064-ae68-c-010.pdf.

* * *

Second, using AVMs may create a bifurcated valuation system. AVMs tend to work best in neighborhoods with similar homes that generate multiple data points, which means the models excel in suburban subdivisions, which tend to be majority-White. AVMs do not work as well in neighborhoods of color, which tend to have older properties with varied repairs and upgrades. Also, consumers of color tend to stay in their homes longer, which means fewer sales data points for the AVM. Fannie Mae and Freddie Mac have begun accepting the AVM value and waiving the traditional appraisal in certain situations.36 The risk is that a bifurcated valuation system will develop in which consumers of color are more likely to be burdened with the cost of a traditional appraisal, which in turn tends to undervalue their home.

* * *

Marketing Systems and Digital Redlining

The use of AI and other algorithms has changed the face of the billion-dollar U.S. advertising industry, from a market where ad content was placed predominantly in general media to the hyper-targeted, individualized ad placement that we currently navigate in the new tech frontier. This transition has brought new challenges for bias in marketing practices, ad delivery, and services.

Fair housing groups challenged Facebook's digital advertising systems, operated by algorithms, that manifested discrimination against protected groups. As a result of the litigation, Facebook changed its ad platform to restrict the ways that housing, employment, and credit ads can be targeted to potential consumers.37 Then, the U.S. Department of Justice entered into a consent decree with Meta (formerly Facebook), whereby the digital ad platform agreed to implement a variance reduction auditing system intended to reduce racial bias in the delivery of ad campaigns to Meta users.38 Algorithms may also impact how consumers can access housing products and services. Recently, NFHA pursued a federal lawsuit and entered into a settlement with Redfin Corporation for alleged "digital redlining," which is essentially the discriminatory offering of its services in non-White areas.39 In NFHA's investigation, fair housing groups found that Redfin offered "No Service" for homes in non-White areas at a greater rate than for homes in White areas. Under a settlement agreement, Redfin agreed to alter its minimum home price policy to be more inclusive in its servicing.40 As AI continues to shape changes in the advertising markets and in how consumers broadly access housing and lending services, it is imperative that players across the tech field consider the potential for discriminatory outcomes of their systems and implement ongoing monitoring audits of these platforms.

* * *

35 Junia Howell and Elizabeth Korver-Glenn, Appraised: The Persistent Evaluation of White Neighborhoods as More Valuable Than Communities of Color, Eruka (2022), https://static1.squarespace.com/static/62e84d924d2d8e5dff96ae2f/t/6364707034ee737d19dc76da/1667526772835/Howell+and+Korver-Glenn+Appraised_11_03_22.pdf.

36 See, e.g., Fannie Mae, Delivering Effective, Efficient, and Impartial Home Valuations Across America, https://singlefamily.fanniemae.com/valuation-modernization.

37 Tracy Jan and Elizabeth Dwoskin, Facebook Agrees to Overhaul Targeted Advertising System for Job, Housing, and Loan Ads After Discrimination Complaints, The Washington Post (March 19, 2019), https://www.washingtonpost.com/business/economy/facebook-agrees-to-dismantle-targeted-advertising-system-for-job-housing-and-loan-ads-after-discrimination-complaints/2019/03/19/7dc9b5fa-4983-11e9-b79a-961983b7e0cd_story.html.

38 See, DOJ Press Release, Justice Department and Meta Platforms, Inc. Reach Key Agreement as They Implement Groundbreaking Resolution to Address Discriminatory Delivery of Housing Advertisements (Jan. 9, 2023), https://www.justice.gov/opa/pr/justice-department-and-meta-platforms-inc-reach-key-agreement-they-implement-groundbreaking.

* * *

Use of Large Language Models in Housing and Financial Services and Potential Challenges

Large Language Models (LLMs) are a significant advancement in natural language processing and AI. These models are characterized by their ability to process, understand, and generate human-like text. Some of their capabilities are:

1. Capabilities in Language Generation: LLMs offer unparalleled capabilities in language generation, which opens exciting opportunities for interaction design. They are highly context-dependent and can adapt to a wide range of linguistic styles and nuances.41

2. Importance in Psycholinguistics: While LLMs are not precise models of human linguistic processing, their success in language modeling makes them significant in the field of psycholinguistics. They serve as practical tools for exploring language and thought relationships.42

3. Enhancing Creativity: AI LLMs have contributed to creative writing, including newspaper articles, novels, and poetry. These models can generate creative and original text, demonstrating their potential in diverse creative applications.43

In summary, large language models represent a significant leap in AI's ability to interact with and understand human language. They are crucial for a variety of applications, from enhancing creative writing to contributing to the understanding of human language processing, despite facing challenges in certain aspects of language comprehension. However, due to a lack of transparency and the fact that LLM applications are yet to mature, it is not immediately clear how lenders are using LLM applications in their product and service lines.

* * *

39 See, NFHA, et al. v. Redfin investigation summary, available at https://nationalfairhousing.org/redfin-investigation/.

40 See, NFHA Press Release, National Fair Housing Alliance and Redfin Agree to Settlement that Greatly Expands Access to Real Estate Services in Communities of Color (April 29, 2022), https://nationalfairhousing.org/national-fair-housing-alliance-and-redfin-agree-to-settlement-which-greatly-expands-access-to-real-estate-services-in-communities-of-color-%EF%BF%BC/.

41 Mina Lee, Percy Liang, and Qian Yang, CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities (2022), https://doi.org/10.1145/3491102.3502030.

42 Conor J. Houghton, N. Kazanina, and Priyanka Sukumaran, Beyond the Limitations of Any Imaginable Mechanism: Large Language Models and Psycholinguistics (2023), https://doi.org/10.48550/arXiv.2303.00077.

43 See id.

* * *

That said, LLMs can significantly impact the housing and financial services sectors, offering innovative solutions while presenting unique challenges. Here are some key use cases and potential challenges:

1. Use Cases in Housing and Financial Services:

a. Enhanced Customer Service: LLMs may improve customer service in housing and financial services by providing more accurate and efficient responses to customer inquiries.

b. Automated Document Analysis: These models can analyze legal documents, loan applications, and other paperwork, speeding up processes and potentially reducing human error.

c. Predictive Analytics: AI-driven predictive analytics can help in understanding market trends, predicting housing prices, and identifying investment opportunities in the financial sector.

d. Personalized Financial Advice: LLMs can offer personalized financial advice based on individual customer data, improving the customer experience in financial services.

e. Risk Assessment and Management: AI can play a crucial role in assessing risk in loan approvals, insurance underwriting, and investment decisions.

* * *

2. Potential Challenges:

a. Data Privacy and Security: Ensuring the confidentiality and security of sensitive personal and financial data is a major challenge, especially considering the vast amount of data processed by AI systems.

b. Bias and Fairness: There is a risk of AI systems inheriting biases from tainted training data, which can lead to unfair practices in loan approvals, housing allocations, financial advisories, and other instances.

c. Regulatory Compliance: AI systems must comply with existing financial regulations and housing laws, which can be complex and vary across states and regions.

d. Integration with Existing Systems: Integrating AI technologies with legacy systems in housing and financial services can be challenging due to compatibility issues.

e. Reliability and Accuracy: Ensuring the reliability and accuracy of AI-driven decisions and outputs, especially in critical housing and financial situations, is paramount.

f. Public Trust and Acceptance: Building public trust in AI systems, particularly in sensitive areas like financial advice and housing, is crucial for widespread adoption.

g. Challenges in Word Understanding: Despite their advanced capabilities, LLMs sometimes struggle with understanding words based on their dictionary definitions, indicating areas for improvement in word comprehension.44

While LLMs offer promising opportunities for innovation in housing and financial services, addressing challenges related to data security, bias, regulatory compliance, and public trust will be key to their successful implementation and acceptance.

* * *

44 Lutfi Kerem Senel and Hinrich Schutze, Does She Wink or Does She Nod? A Challenging Benchmark for Evaluating Word Understanding of Language Models (2021), https://doi.org/10.18653/v1/2021.eacl-main.42.

* * *

Using AI to Promote Fairer Outcomes

Debiasing AI Systems Using Fairness Techniques

Algorithmic fairness techniques play a crucial role in mitigating biases and ensuring equitable outcomes in AI systems. These techniques can be categorized into pre-processing, in-processing, and post-processing methods, each addressing biases at different stages of the AI model lifecycle.

1. Pre-processing Techniques: These techniques focus on the initial stages of AI development, where the primary goal is to rectify biases present in the training data before it is fed into the model. By identifying and modifying biased data, pre-processing methods aim to prevent the AI system from learning and perpetuating these biases.45

2. In-processing Techniques: These are applied during the model training phase and involve modifying the learning algorithm to incorporate fairness. This might include adjusting the model's objective function to balance accuracy with fairness criteria. One approach is the integration of fairness constraints directly into the training process, as discussed by some researchers in their exploration of unified data and algorithm fairness through adversarial data augmentation and adaptive model fine-tuning.46

3. Post-processing Techniques: These methods are applied after the model has been trained. They adjust the output of the model to ensure fairness, often through recalibration of decision thresholds for different groups. This approach is particularly useful in scenarios where modifying the training process is not feasible or when dealing with legacy systems. The work of Berkel et al. on the effect of information presentation on fairness perceptions of machine learning predictors provides insights into how post-processing interventions can alter people's perceptions of AI fairness.46

In summary, algorithmic fairness techniques are integral to developing AI systems that are equitable and unbiased. These methods, spanning from the initial data preparation to the final model output, help in creating AI solutions that are not only effective but also fair and just.
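As a simplified illustration of the post-processing approach described above, the sketch below applies group-specific decision thresholds to a trained model's scores. The scores and thresholds are hypothetical, and per-group thresholds are only one post-processing technique among several:

```python
def recalibrated_decisions(scores_by_group, thresholds):
    """Post-processing fairness sketch: apply group-specific decision
    thresholds to an already-trained model's scores, adjusting outcomes
    without retraining the model itself."""
    return {
        group: [score >= thresholds[group] for score in scores]
        for group, scores in scores_by_group.items()
    }

# Hypothetical model scores for two groups. A single global cutoff of
# 0.6 would approve 2 of 3 applicants in group A but only 1 of 3 in
# group B; recalibrating group B's threshold narrows that gap.
scores = {"A": [0.9, 0.7, 0.5], "B": [0.65, 0.55, 0.4]}
print(recalibrated_decisions(scores, {"A": 0.6, "B": 0.5}))
```

In practice, any such recalibration would itself need legal review and documentation, since thresholds keyed to protected-class membership raise their own compliance questions.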

* * *

Using AI to Detect the Risk of Discriminatory Practices and Policies

AI can significantly assist in the identification and mitigation of the risk of discriminatory practices and policies, particularly in housing and employment. This potential is explored through various stages and tools, including dataset analysis, internet or website crawling, and other enforcement-based AI tools. The following are some AI-driven approaches that support the identification and mitigation of discriminatory practices and policies, providing more equitable outcomes in housing, employment, and other critical areas.

* * *

Identifying Datasets that are Non-representative/Under-representative

AI can be used to identify datasets that are non-representative or under-representative of certain populations, which is crucial for uncovering discriminatory practices in housing and other areas. The work by Kamikubo et al. on the representativeness of accessibility datasets highlights the importance of ensuring that datasets are representative and inclusive of various demographic groups to mitigate bias in AI applications.47
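A minimal sketch of this kind of representativeness check compares a dataset's group shares against an external benchmark, such as census shares for the service area. The group names, counts, and tolerance below are illustrative only:

```python
def representation_gaps(dataset_counts, benchmark_shares, tol=0.05):
    """Flag groups whose share of a dataset falls materially below an
    external benchmark (e.g., census shares for the service area)."""
    total = sum(dataset_counts.values())
    flags = {}
    for group, expected in benchmark_shares.items():
        observed = dataset_counts.get(group, 0) / total
        if expected - observed > tol:
            flags[group] = round(expected - observed, 3)
    return flags

# Hypothetical training-set composition vs. illustrative benchmark shares.
counts = {"group_w": 880, "group_b": 60, "group_h": 60}
benchmark = {"group_w": 0.60, "group_b": 0.20, "group_h": 0.20}
print(representation_gaps(counts, benchmark))
# Both under-represented groups are flagged with a 14-point gap.
```

A real audit would use properly sourced demographic benchmarks and statistical tests rather than a fixed tolerance, but the structure of the check is the same.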

* * *

45 Nengfeng Zhou, Zach Zhang, Vijay Nair, Harsh Singhal, and Jie Chen, Bias, Fairness and Accountability with Artificial Intelligence and Machine Learning Algorithms (2022), https://doi.org/10.1111/insr.12492.

46 N. V. Berkel, Jorge Goncalves, D. Russo, S. Hosio, and M. Skov, Effect of Information Presentation on Fairness Perceptions of Machine Learning Predictors (2021), https://doi.org/10.1145/3411764.3445365.

* * *

Internet or Website Crawling Tools

Internet or website crawling tools are instrumental in gathering online data, including housing listings and employment advertisements, which can then be analyzed for patterns of potential discrimination. Internet crawling can extract large amounts of data, enabling AI systems to analyze trends and detect potentially discriminatory language or practices. These tools can allow users to search for discriminatory phrases and comments that can impede access to fair housing and lending opportunities.
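A highly simplified sketch of the analysis step follows. The phrase list is illustrative, drawn from well-known fair-housing advertising guidance on familial status and disability; a production tool would crawl real listings, use a vetted and much larger lexicon, and route every match to human review:

```python
import re

# Illustrative flagged phrases (familial status and disability).
FLAGGED_PHRASES = ["no children", "adults only", "able-bodied"]

def scan_listing(text):
    """Return any flagged phrases found in a crawled listing's text."""
    lowered = text.lower()
    return [p for p in FLAGGED_PHRASES if re.search(re.escape(p), lowered)]

listing = "Cozy 2BR walk-up, adults only, no children please."
print(scan_listing(listing))  # ['no children', 'adults only']
```

Keyword matching is only a first-pass filter; as the LLM discussion above notes, more capable language models can catch discriminatory phrasing that no fixed list anticipates.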

* * *

Other Enforcement-based AI Tools

AI tools, particularly those leveraging natural language processing (NLP) and large language models, can analyze textual data to identify discriminatory language or patterns in documents, policies, and communications. For example, Lobo's work on bias in hate speech and toxicity detection illustrates how AI can identify discriminatory or biased language in large datasets, which can be applied to analyze housing and employment-related texts.48

* * *

Expanding Inclusive and Equitable Housing and Financial Services Opportunities

Using AI to Deepen Research

AI can significantly enhance the depth and effectiveness of housing and lending policy, enforcement, and legal research. This enhancement can be achieved through several applications:

1. Dataset Analysis for Policy and Enforcement: AI is instrumental in analyzing vast datasets to identify trends and patterns that might indicate discriminatory practices in housing and lending. The analysis can focus on various aspects, such as loan approval rates, interest rates charged, and geographical distribution of loans. These insights are critical for shaping fair housing policies and for enforcing anti-discrimination laws. The work by Wyly and Holloway on the disappearance of race in mortgage lending highlights the importance of data analysis in understanding and addressing issues in housing markets.49

2. Enforcement-based AI Tools Using NLP: Natural Language Processing (NLP) and large language models can be leveraged to analyze legal documents, policy papers, and communication records for housing and lending practices. These AI tools can identify discriminatory language patterns and assist in the legal interpretation of housing policies and practices. The research by Ntoutsi et al. on bias in data-driven AI systems provides an overview of the challenges and solutions in detecting bias, which is applicable to housing and lending legal research.50

* * *

47 Rie Kamikubo, Lining Wang, Crystal Marte, Amnah Mahmood and Hernisa Kacorri, Data Representativeness in Accessibility Datasets: A Meta-Analysis (2022), https://doi.org/10.1145/3517428.3544826.

48 Paula Reyero Lobo, Bias in Hate Speech and Toxicity Detection (2022), https://doi.org/10.1145/3514094.3539519.

49 Elvin K. Wyly and S. Holloway, The Disappearance of Race in Mortgage Lending, Economic Geography (2002), https://doi.org/10.1111/j.1944-8287.2002.tb00181.x.

* * *

Leveraging AI to Design Systems with Equity

AI technologies can analyze alternative data sources, such as cash-flow underwriting data and rental housing payment histories, which are particularly beneficial for individuals with limited credit histories. AI and big data can capture weak signals in creditworthiness assessments, thereby improving financial inclusion and access to credit for traditionally underserved borrowers.51 This approach helps expand the scope of data considered in lending decisions, making them more inclusive and better aligned with the traditional "5 Cs of credit" long used to assess lending decisions: character, cash flow, capital, conditions, and collateral.
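A minimal sketch of how such alternative-data signals might be derived from raw records follows. All field names, values, and features are hypothetical and do not reflect any lender's actual model:

```python
def cash_flow_features(monthly_inflows, monthly_outflows, rent_payments):
    """Derive simple alternative-data underwriting signals: average
    monthly net cash flow and on-time rent payment rate."""
    net = [i - o for i, o in zip(monthly_inflows, monthly_outflows)]
    avg_net = sum(net) / len(net)
    on_time_rate = sum(rent_payments) / len(rent_payments)
    return {"avg_net_cash_flow": avg_net, "on_time_rent_rate": on_time_rate}

# Twelve months of hypothetical bank and rent history for a thin-file
# applicant who would look invisible to a traditional credit score.
features = cash_flow_features(
    monthly_inflows=[4_000] * 12,
    monthly_outflows=[3_400] * 12,
    rent_payments=[1] * 11 + [0],  # 1 = paid on time, 0 = late/missed
)
print(features)
```

Signals like these map naturally onto the "cash flow" leg of the 5 Cs, which is why they can widen access without abandoning traditional underwriting logic.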

* * *

Using AI to Fix the Non-representative/Under-representative Data Problem

AI can be used to identify and rectify biases in non-representative or under-representative datasets, which is crucial for ensuring equitable housing and lending practices. The work of Jain and Verma highlights the importance of AI in making the credit underwriting process more accurate.52 By refining data to be more representative, AI can help lenders make more fair and equitable decisions.
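One common rebalancing technique, sketched below under illustrative assumptions, is to reweight records inversely to their group's frequency so that under-represented groups are not drowned out during training. This is one of many pre-processing options, not a complete remedy:

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Assign each record a weight inversely proportional to its group's
    frequency, so every group contributes equal total weight to training."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    # Each group's summed weight works out to total / n_groups.
    return [total / (n_groups * counts[g]) for g in group_labels]

# Hypothetical labels: group B is under-represented 2-to-1.
labels = ["A", "A", "B"]
print(inverse_frequency_weights(labels))  # [0.75, 0.75, 1.5]
```

Reweighting changes how the model learns from existing records; it cannot manufacture the missing data itself, which is why representativeness audits remain necessary upstream.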

* * *

Optimizing Privacy Protections Via AI Techniques

Privacy-enhancing techniques are essential in AI to ensure the protection of consumer data while utilizing it for housing and lending purposes.53 The work of Chen et al. shares lessons learned during the development of frameworks that aid in the correct use of privacy-enhancing technologies like homomorphic encryption and secure multi-party computation.54 These technologies are vital for developing AI systems that are both effective and respectful of consumer privacy.
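The intuition behind secure multi-party computation can be shown with a toy additive secret-sharing scheme. This is a deliberately minimal sketch of the idea, not a production protocol:

```python
import secrets

PRIME = 2**61 - 1  # modulus for the additive shares

def share(value, n_parties=3):
    """Split a value into n additive shares; any n-1 shares alone
    reveal nothing about the original value."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Two lenders jointly compute a sum (e.g., aggregate applicant counts)
# without either party ever seeing the other's raw number: each shares
# its value, the parties add shares pointwise, then reconstruct.
a_shares, b_shares = share(120), share(80)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 200
```

Real deployments layer authenticated channels, malicious-party protections, and careful protocol design on top of this arithmetic core, which is exactly the kind of framework guidance the Chen et al. work addresses.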

* * *

50 Eirini Ntoutsi et al., Bias in Data-Driven AI Systems: An Introductory Survey (Jan. 14, 2020), https://arxiv.org/abs/2001.09762.

51 Hicham Sadok, Fadi Sakka and Mohammed El Hadi El Maknouzi, Artificial Intelligence and Bank Credit Analysis: A Review, Cogent Economics & Finance (2022), https://doi.org/10.1080/23322039.2021.2023262.

52 Aastha Jain and Deval Verma, Making Credit Underwriting Process More Accurate using ML (2022), https://doi.org/10.1109/ICACCM56405.2022.10009117.

53 Hannah Holloway, Snigdha Sharma, Samantha Gordon, and Dr. Michael Akinwumi, Privacy, Technology, and Fair Housing - A Case for Corporate and Regulatory Action (Aug. 22, 2023), https://nationalfairhousing.org/privacy-technology-and-fair-housing-a-case-for-corporate-and-regulatory-action/.

54 Huili Chen, S. Hussain, Fabian Boemer, Emmanuel Stapf, A. Sadeghi, F. Koushanfar and Rosario Cammarota, Developing Privacy-preserving AI Systems: The Lessons Learned, 1-4 (2020), https://doi.org/10.1109/DAC18072.2020.9218662.

* * *

Using AI to Fix Zoning Challenges

Exclusive and restrictive zoning policies have been used over the decades to generate residential segregation, reduce affordable housing options, and thwart fair housing efforts.55 These policies contribute to the U.S. racial wealth and homeownership gaps as well as to the affordable housing crisis.56 It is estimated that the U.S. has a shortage of between 4 million and 7 million affordable housing units.57 The lack of fair and affordable housing opportunities is critical since housing is the number one driver of inflation.58 Innovators are using AI and other technologies to address zoning challenges. For example, AI can help analyze zoning regulations to identify barriers and allow developers to plan projects more efficiently. Some scholars and urban planners are examining the idea of using AI to automate the process of developing zoning codes.59 Using AI to develop and analyze zoning codes may make it easier and quicker to identify provisions in the ordinances that present barriers to fair housing and affordable housing. This would, in turn, enable legislators to address potential discrimination risks. Using AI in this way can also help jurisdictions meet their obligation to Affirmatively Further Fair Housing.60

Applying Fair Housing and Fair Lending Laws to AI

There are two key laws that prohibit discrimination in lending and housing: the Fair Housing Act and the Equal Credit Opportunity Act ("ECOA"). The Fair Housing Act prohibits any entity from discriminating in housing and mortgage lending on the basis of race, color, religion, national origin, sex (including sexual orientation), disability, and familial status (also known as "protected characteristics" or "protected classes" or "prohibited bases").61

* * *

55 Fair Share Housing Center, Dismantling Exclusionary Zoning: New Jersey's Blueprint for Overcoming Segregation (April 2023), https://www.fairsharehousing.org/wp-content/uploads/2023/04/Dismantling-Exclusionary-Zoning_New-Jerseys-Blueprint-for-Overcoming-Segregation.pdf.

56 Allison Hanley, Rethinking Zoning to Increase Affordable Housing, Journal of Housing & Community Development, National Association of Housing and Redevelopment Officials (Dec. 22, 2023), https://www.nahro.org/journal_article/rethinking-zoning-to-increase-affordable-housing/#:~:text=In%202021%2C%20home%20prices%20experienced,20%20percent%2C%20and%20rents%20surged.&text=Restrictive%20zoning%20practices%20significantly%20contribute,cities%20restricting%20or%20banning%20apartments.

57 Sam Khater, One of the Most Important Challenges our Industry will Face: The Significant Shortage of Starter Homes, Freddie Mac (April 15, 2021), https://www.freddiemac.com/perspectives/sam-khater/20210415-single-family-shortage; National Association of Realtors(R), Once-In-A-Generation Response Needed to Address Housing Supply Crisis (June 16, 2021), https://www.nar.realtor/newsroom/once-in-a-generation-response-needed-to-address-housing-supply-crisis; National Low Income Housing Coalition, The Gap: A Shortage of Affordable Rental Homes (March 2023), https://nlihc.org/gap

58 Katy O'Donnell, The Main Driver of Inflation Isn't What You Think It Is, Politico (March 18, 2022), https://www.politico.com/news/2022/03/18/housing-costs-inflation-00015808

59 Norman Wright, AICP, Using Generative AI to Draft Zoning Codes, American Planning Association (Oct. 2023), https://www.planning.org/publications/document/9277441/

60 Jurisdictions receiving federal funds for a housing or community development purpose must ensure all their laws, programs, and services are implemented in compliance with the Federal Fair Housing Act and in a manner that Affirmatively Furthers Fair Housing. This means zoning policies must not perpetuate discrimination, segregation, or other anti-fair housing principles. See National Fair Housing Alliance, Affirmatively Furthering Fair Housing, https://nationalfairhousing.org/issue/affirmatively-furthering-fair-housing/

* * *

The Fair Housing Act also requires entities receiving federal funds for a housing or community development purpose to disseminate those funds, as well as implement their programs and services, in a way that Affirmatively Furthers Fair Housing. The ECOA prohibits "creditors" from discriminating in lending on the basis of race, color, religion, national origin, sex (including sexual orientation), marital status, age, and source of income.62 Generally, there are two methods of proving discrimination under either the Fair Housing Act or ECOA: "disparate treatment" or "disparate impact."63 Disparate treatment occurs when an entity explicitly, overtly, or intentionally treats people differently based on prohibited characteristics, such as race, national origin, or sex. Disparate treatment can be proven through direct evidence or indirect (or "circumstantial") evidence, for example, comparator evidence, statistical evidence, or a pretextual explanation. Although disparate treatment is known as "intentional discrimination," the law does not actually require showing prejudice, animus, or even an intent to treat someone worse because of a protected class; the differential treatment is enough to establish a violation of law.64 An example of potential disparate treatment discrimination would be an AI model that explicitly included a protected class (such as race) as a model variable, or that resulted in different, adverse outcomes on a prohibited basis (such as race) for similarly-situated individuals.

Disparate impact discrimination occurs when a (1) facially neutral policy or practice disproportionately harms members of a protected class, and either (2) the policy or practice does not advance a legitimate interest, or (3) a less discriminatory alternative to serve the legitimate interest exists. Disparities alone are not sufficient to impose disparate impact liability, and entities are not required to sacrifice legitimate business needs or ignore relevant business considerations. Disparate impact only requires entities to avoid considerations that disproportionately harm members of protected classes unnecessarily. An example of potential disparate impact discrimination would be an AI model that considers a mortgage applicant's arrest record, which is not predictive of default risk.
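One widely used screening heuristic for quantifying such disparities is the "four-fifths rule" from EEOC employment guidance. The sketch below is illustrative only: the outcomes are hypothetical, and no fixed ratio is itself a legal threshold under the Fair Housing Act or ECOA.

```python
def adverse_impact_ratio(outcomes_by_group, reference_group):
    """Compute each group's approval rate relative to a reference group
    (typically the highest-rate group). Ratios below 0.8 trigger the
    EEOC's 'four-fifths' screen -- a flag for closer review, not a
    legal finding of disparate impact."""
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    ref = rates[reference_group]
    return {g: round(r / ref, 3) for g, r in rates.items()}

# Hypothetical approval outcomes (1 = approved) for two applicant groups.
outcomes = {"group_1": [1, 1, 1, 1, 0], "group_2": [1, 1, 0, 0, 0]}
ratios = adverse_impact_ratio(outcomes, reference_group="group_1")
print(ratios)  # group_2 at 0.5 falls well below the 0.8 screen
```

A flagged ratio is only the first element of the disparate impact analysis above; the legitimate-interest and less-discriminatory-alternative inquiries still follow.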

Institutions should be aware that they need adequate Compliance Management Systems (CMS) to monitor and test AI models for potential discrimination, search for less discriminatory alternatives, and implement appropriate controls for fair lending or fair housing risk.65 Courts and agencies have for decades recognized that disparate impact liability exists under laws like the Fair Housing Act, ECOA, and Title VII (prohibiting employment discrimination).66 Disparate impact liability has been a part of Regulation B, which implements ECOA, since the 1970s and was included in the federal agencies' 1994 Policy Statement on Discrimination in Lending.67 It has also been sanctioned by the U.S. Supreme Court and 11 appellate courts.68 Moreover, federal agencies have long provided guidance informing institutions of their obligations for third party oversight.69 Institutions should be aware that using a third-party AI model does not insulate them from liability.

* * *

61 The Fair Housing Act: 42 U.S.C. Sec. 3601, et seq.; HUD's implementing regulation: 24 CFR Part 100.

62 ECOA: 15 U.S.C. Sec. 1691(a); CFPB's Regulation B: 12 CFR Part 1002.

63 Based on legal precedent, the federal financial regulators have also based fair lending risk assessments on these theories of discrimination. See FFIEC, Interagency Fair Lending Examination Procedures (2009), https://www.ffiec.gov/pdf/fairlend.pdf.

64 See 12 C.F.R. Part 1002, 4(a)-1: "Disparate treatment on a prohibited basis is illegal whether or not it results from a conscious intent to discriminate."

65 See, e.g., Relman, Colfax, Initial Report of the Independent Monitor, Fair Lending Monitorship of Upstart Network's Lending Model (April 14, 2021), https://www.relmanlaw.com/media/cases/1086_Upstart%20Initial%20Report%20-%20Final.pdf.

66 See, e.g., 12 C.F.R. part 1002, Supp. I, 6(a)-2 (ECOA articulation); 24 C.F.R. Sec. 100.500(c)(1) (FHA articulation); 42 U.S.C. Sec. 2000e-2(k) (Title VII articulation).

* * *

Policy Recommendations

The United States Must Enact Comprehensive Legislation to Advance Responsible AI

While there are significant risks of bias and discrimination in AI systems and their impact on housing and financial services, the risks are not insurmountable. The U.S. must play a leadership role in advancing Responsible AI principles and tech equity. Leading the world on these issues includes passing comprehensive legislation that forms the basis for sound policies, systems, practices, and frameworks for Responsible AI. While much of the world's technological innovations are developed in the U.S., other nations are significantly stepping up their efforts by building the infrastructure needed to spur AI innovations. This includes supporting education and training in the field and implementing rules to govern AI. The U.S. is behind the curve, and in some cases playing catch-up to other nations. The U.S. must lead the world in ensuring technologies are fair and beneficial; do not harm people and communities; and promote ideals of freedom, including ensuring robust privacy protections, equality, and equity. For these reasons, Congress should pass new legislation that mitigates the risk of algorithmic bias and ensures fairer structures by:

Ensuring Strong Civil and Human Rights Protections

Congress should develop AI legislation that reflects civil and human rights principles that are foundational to America's ideals of freedom and equality. These laws also must create equity. Civil rights, human rights, and consumer protection organizations lack the resources to ensure technologies are beneficial and not harmful, which means Congress must increase federal funding and create new programs to support effective oversight and accountability. For example, a research partnership could be formed between the National Institute of Standards and Technology (NIST), civil rights organizations, consumer protection groups, non-profit trusted research agencies, and financial institutions that rely on AI to evaluate how AI or machine learning models affect fair housing and lending. Congress could also encourage the National Science Foundation to ensure that a portion of the considerable allocations for research on AI focus on the implications of using AI in housing and financial services.

* * *

67 DOJ, HUD, FTC, Federal Housing Finance Board, Federal Reserve, FDIC, OCC, Office of Thrift Supervision, and NCUA, Interagency Policy Statement on Fair Lending (1994) https://www.govinfo.gov/content/pkg/FR-1994-04-15/html/94-9214.htm.

68 See Texas Dep't of Hous. & Cmty. Affairs v. Inclusive Cmtys. Project, 135 S. Ct. 2507 (2015).

69 The agencies recently replaced each of their separate policies dating as early as 2008 for joint guidance. See Federal Reserve, FDIC, OCC, Interagency Guidance on Third Party Relationships: Risk Management, 88 Fed. Reg. 37920 (June 9, 2023), https://www.govinfo.gov/content/pkg/FR-2023-06-09/pdf/2023-12340.pdf.

* * *

Ensuring Compliance with Existing Civil Rights and Consumer Protection Laws

Congress should ensure that the federal agencies issue robust policies that remind institutions of their legal obligations to test AI models for potential disparate treatment or disparate impact discrimination.

First, all of the federal agencies with responsibility for supervision and/or enforcement of the Fair Housing Act and/or ECOA (collectively, the Agencies)70 should emphasize that discrimination in AI models is illegal, including AI models developed or deployed by third parties. In 2023, the DOJ, FTC, CFPB, and EEOC issued a joint statement regarding enforcement efforts to protect the public from bias in AI and automated systems.71 The remaining federal agencies should immediately issue a similar announcement.

Second, consistent with the Uniform Interagency Consumer Compliance Rating System72 and the Model Risk Management Guidance,73 the Agencies should ensure that financial institutions have appropriate Compliance Management Systems that effectively identify and control risks related to AI models, including the risk of discriminatory outcomes for consumers. Where a financial institution's use of AI indicates weaknesses in their Compliance Management System or violations of law, the Agencies should use all of the tools in their toolbelt to quickly address and prevent consumer harm, including issuing supervisory Matters Requiring Attention; entering into a non-public enforcement action, such as a Memorandum of Understanding; referring a pattern or practice of discrimination to the DOJ; or entering into a public enforcement action. The Agencies have already provided clear guidance (e.g., the Uniform Consumer Compliance Rating System) that financial institutions must appropriately identify, monitor, and address compliance risks, and the Agencies should not hesitate to act within the scope of their authority. When possible, the Agencies should explain to the public the risks that they have observed and the actions taken in order to bolster the public's trust in appropriate oversight, and provide clear examples to guide the industry.

* * *

(Continues with Part 2 of 2)

* * *

Original text here: https://www.banking.senate.gov/imo/media/doc/rice_testimony_1-31-24.pdf


Find out how you can submit content for publishing on our website.
View Guidelines

Topics

  • Advisor News
  • Annuity Index
  • Annuity News
  • Companies
  • Earnings
  • Fiduciary
  • From the Field: Expert Insights
  • Health/Employee Benefits
  • Insurance & Financial Fraud
  • INN Magazine
  • Insiders Only
  • Life Insurance News
  • Newswires
  • Property and Casualty
  • Regulation News
  • Sponsored Articles
  • Washington Wire
  • Videos
  • ———
  • About
  • Meet our Editorial Staff
  • Advertise
  • Contact
  • Newsletters

Top Sections

  • AdvisorNews
  • Annuity News
  • Health/Employee Benefits News
  • InsuranceNewsNet Magazine
  • Life Insurance News
  • Property and Casualty News
  • Washington Wire

Our Company

  • About
  • Advertise
  • Contact
  • Meet our Editorial Staff
  • Magazine Subscription
  • Write for INN

Sign up for our FREE e-Newsletter!

Get breaking news, exclusive stories, and money- making insights straight into your inbox.

select Newsletter Options
Facebook Linkedin Twitter
© 2026 InsuranceNewsNet.com, Inc. All rights reserved.
  • Terms & Conditions
  • Privacy Policy
  • InsuranceNewsNet Magazine

Sign in with your Insider Pro Account

Not registered? Become an Insider Pro.
Insurance News | InsuranceNewsNet