Senate Banking Subcommittee Issues Testimony From National Fair Housing Alliance President Rice (Part 1 of 2)
* * *
Chairwoman Smith, Ranking Member Lummis, and other distinguished members of the Senate Subcommittee on Housing, Transportation, and Community Development:
* * *
NFHA's Evolution in Addressing Algorithmic and AI Bias & Description of NFHA's Responsible AI Program
NFHA has addressed harms associated with AI and automated systems since its inception in 1988. We first concentrated our efforts on prohibiting or restricting the use of discriminatory automated systems, such as credit and insurance scoring, underwriting, and pricing models, in housing and financial services. Early settlements with entities like Prudential,
* Tech Equity: We focus on developing and advocating for methodologies that ensure automated systems offer equitable access to housing opportunities.
* Data Privacy: We strive to test and promote technologies that balance consumer privacy with the need for data access to eliminate bias in automated systems.
* Explainability: We advocate for consumers' right to explanations for automated decisions and work to test and promote methodologies that clarify the reasoning or design behind automated systems.
* Reliability: We focus on testing and advancing techniques to ensure only safe and valid automated systems are used in housing applications.
* Human-Centered Alternative Systems: We work on advancing technical and policy solutions to determine when human-centered alternatives should take precedence over automated systems in housing decisions, particularly when data quality is poor, infrastructure is inadequate, or there is a lack of social awareness about harms of automated systems.
Since launching our Responsible AI work, NFHA has contributed to, advocated for, and created technical and policy solutions that advance the responsible use of technologies in housing, including risk management frameworks, a state-of-the-art framework for auditing algorithmic systems,1 and other policies.
The term "artificial intelligence" was first used by Professor
* * *
1
2
3 See White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
4
5
6 See Fair,
* * *
The Definition of AI
AI focuses on creating machines capable of intelligent behavior. It involves the computational understanding and creation of artifacts that exhibit intelligent behavior.7 The scope of AI is broad, encompassing various aspects such as machine intelligence, cognitive functions, and intelligent agents. It also includes systems that mimic human behavior. To ensure that existing automated systems in the housing and lending sectors are not overlooked in the pursuit of implementing AI that is secure, reliable, and free from discrimination, NFHA characterizes AI as a computerized mechanism capable of performing one or more of the following functions:
* Discerning patterns in data;
* Conducting exploratory, predictive, prescriptive, or diagnostic analyses based on data, logical reasoning, or established rules; or
* Generating patterns utilizing data, logic, or rules.
NFHA views a system as an amalgamation of algorithm, model, tech infrastructure, and human elements. Fundamentally, for NFHA, AI represents a sociotechnical system, integrating technical capabilities with societal and human components. This practical definition is consistent with how federal agencies characterized AI in a 2023 joint statement on enforcement efforts against algorithmic bias and discrimination.8
* * *
Use of AI in Housing and Financial Services and Potential Challenges
Tenant Screening Selection Systems
To evaluate applicants, landlords sometimes purchase tenant screening services and tools that "adjudicate" or "score" a rental applicant using AI or other automated tools. Tenant screening selection systems have been a focal point of federal regulators and housing advocates for several years.
* * *
7
8
9
10 NFHA, Comment to the
* * *
As it relates to tenant screening systems, fair housing concerns are magnified because potentially erroneous data can be used to develop the models and because of the lack of transparency regarding tenant screening algorithms' predictiveness, design, development, testing process, and data inputs. There are also significant concerns about the quality of risk management efforts pertaining to these systems. Concerns about these systems must be addressed as they will likely impact, in one form or another, nearly all of the 100 million+ renters in the United States.
* * *
Dynamic Rental Pricing Systems
In the rental housing market, dynamic pricing systems based on AI have become a widespread feature. Rent prices can change weekly, daily, or more frequently, based on any number of factors that are hidden from consumers, making it profoundly difficult for prospective tenants to understand why they are being charged a certain rate. Generally, the factors that go into establishing rent pricing are kept from public view. However, ProPublica has reported that the company that produces the leading rental pricing software (
* * *
11
12
13
14 In re: RealPage, Inc., Rental Software Antitrust Litigation (No. II), Case No. 3:23-MD-3071,
15 See
16 See
* * *
Credit Scoring Systems
Credit scoring systems are algorithmic models that attempt to predict a borrower's risk and how likely that person is to repay their debt obligations. These systems typically generate a numerical score used to help participants in the financial services system determine the creditworthiness of a consumer. They are often used as part of a lender's decisions on underwriting and pricing.
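To make the mechanics concrete, the following minimal sketch (in Python) shows one common way a scorecard-style model turns an estimated default probability into a familiar numeric score. The feature names, scaling constants, and data are invented for illustration and do not reflect any vendor's actual model.

```python
# Minimal sketch of a scorecard-style credit model: a logistic regression
# estimates the probability of default, and log-odds are rescaled into a
# numeric score. Features, data, and scaling constants are illustrative
# assumptions, not any vendor's actual values.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy applicant features: [utilization, on-time payment rate, account age (yrs)]
X = np.array([[0.90, 0.80, 1.0],
              [0.30, 0.98, 9.0],
              [0.55, 0.92, 4.0]])
y = np.array([1, 0, 0])  # 1 = defaulted in the training window

model = LogisticRegression().fit(X, y)

def to_score(p_default, base=650.0, base_odds=20.0, pdo=40.0):
    """Rescale default probability into score points (higher = lower risk)."""
    odds = (1.0 - p_default) / p_default   # odds of repayment
    factor = pdo / np.log(2.0)             # points needed to double the odds
    offset = base - factor * np.log(base_odds)
    return offset + factor * np.log(odds)

p = model.predict_proba(X)[:, 1]
print(np.round(to_score(p)))
```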
Concerns about the potential for credit scoring models to perpetuate bias against certain groups have been raised over the decades. For the most part, the data used to build credit scoring models is generated from data housed at the three main credit reporting agencies (CRAs): Equifax, Experian, and TransUnion.
* * *
17 See Federal Reserve
18 For more information about the
19
20
* * *
According to the
* Number and type of credit accounts
* Timely payment of bills
* Amount/percentage of available credit
* Collection actions
* Amount of outstanding debt
* Age of accounts
Each of the above components presents systemic challenges for underserved consumers, such as those living in urban and rural communities that can often be credit deserts.21 Additionally, consumers impacted by historical discrimination are negatively affected by these factors because they have less access to mainstream credit. For example, people of color were disproportionately steered to predatory subprime loans that carried abusive terms and conditions. As a result, these consumers disproportionately experienced collection actions not because they were uncreditworthy, but because they were targeted by discriminatory practices.22 There are racial barriers to economic opportunities, and these barriers are more prominent for people of color, particularly Black people,
* * *
Insurance Scoring, Underwriting, Rating, and Claims Systems
Insurance scoring systems are algorithmic models built using actuarial data designed to predict the likelihood of a consumer experiencing a risk-related event such as filing a homeowners insurance claim or filing an auto claim. Insurance scoring systems can be built using data from the CRAs; however, model developers also rely on proprietary company data and information purchased from data providers that reflect consumers' claims experiences, weather-related data, and other information. Unlike credit scoring systems, insurance scoring models are not typically designed to determine whether a consumer will pay their insurance premium, but rather, the consumer's risk profile related to an event that would present a monetary exposure for the insurance company. The score can be used to help underwrite business and/or determine the price a consumer should pay for insurance.
* * *
21 Trulia and NFHA, 50 Years After the Fair Housing Act - Inequality Lingers (
22
23
24
25
* * *
Insurance scoring systems, like other AI systems, can perpetuate bias in myriad ways.26 NFHA has challenged discriminatory provisions in insurance scoring systems, most notably in a matter brought against the
* * *
Additionally, the
Automated underwriting systems and risk-based pricing systems manifest and perpetuate bias as well. These systems rely on and are built using data contained in the CRAs. The data captured by CRAs is under-representative as it is missing critical information, like rental housing payment data, that can accurately reflect a borrower's willingness and ability to pay financial obligations. Data captured by the CRAs also includes information tainted by bias against underserved groups. Unfortunately, redlining and housing discrimination are still everyday occurrences30 and when consumers experience discrimination, that bias is reflected in the data captured by the CRAs.
* * *
26 AI systems can manifest discrimination by using biased or non-representative data sets, and by other means. For a detailed discussion about the many ways AI can perpetuate discrimination, see Testimony of
27 See National Fair Housing Alli. v. Prudential Ins. Co., 208 F. Supp. 2d 46 (D.D.C. 2002), https://casetext.com/case/national-fair-housing-alli-v-prudential-ins-co
28 Ronda Lee, AI Can Perpetuate Racial Bias in Insurance Underwriting, Yahoo!Money (
29
* * *
Data is not blind, nor is it harmless. It can be dangerous and toxic, particularly when it manifests the discrimination inherent in our systems. For example, researchers at
* * *
Automated Valuation Models
The use of Automated Valuation Models (AVMs) for some portion of the home valuation process has been proliferating, especially as a potential remedy to multiple complaints of consumers of color having to "whitewash" their home and remove all indications of their race and ethnicity to receive a fair value.32 By statute, an AVM is defined as a "computerized model used by mortgage originators and secondary market issuers to determine the collateral worth of a mortgage secured by a consumer's principal dwelling."33 An AVM may use AI or some other automated system to serve various purposes. In some cases, a secondary market issuer or lender may use the AVM in place of a human appraiser; in some cases, the human appraiser may develop the opinion of value using an AVM; and in still other cases, a secondary market issuer or lender may use an AVM as a check on the human appraiser. The AVM may be faster, cheaper, more consistent, and less prone to bias, but AVMs pose at least two problems.
First, the AVM still relies on the traditional sales comparison approach in which the home valuation is based on recent sales of comparable homes in a comparable neighborhood. Because of America's long history of redlining and segregation, the AVM is likely using historical data that undervalues homes in formerly redlined areas.34 For example, research has shown that in 2021, homes in White neighborhoods were appraised at values nearly 250% higher than similar homes in similar Black neighborhoods and nearly 278% higher than similar homes in similar Latino neighborhoods.35 These disparities then generate data points used by the AVM, which can perpetuate the disparity in future transactions.
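The sales comparison logic, and the way historical undervaluation can flow through it, can be illustrated with a deliberately stripped-down sketch. All sale records here are invented; real AVMs use far richer models, but the dependency on historical comparable sales is the same.

```python
# Stripped-down illustration of the sales comparison approach an AVM automates:
# value the subject property at the median price per square foot of recent
# comparable sales, scaled to its size. If historical sales in a formerly
# redlined area were depressed, that depression flows directly into every
# new estimate the model produces.
from statistics import median

comparable_sales = [  # (sale_price, square_feet) of recent nearby sales
    (210_000, 1_400),
    (195_000, 1_350),
    (225_000, 1_500),
]

subject_sqft = 1_450
price_per_sqft = median(price / sqft for price, sqft in comparable_sales)
estimate = price_per_sqft * subject_sqft
print(f"AVM-style estimate: ${estimate:,.0f}")
```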
* * *
30 See, Recent Accomplishments of the Housing and Civil Enforcement Section,
31
32 See, e.g.,
33 12 U.S.C. Sec. 3354(d).
34 See, NFHA and
* * *
Second, using AVMs may create a bifurcated valuation system. AVMs tend to work best in neighborhoods with similar homes that generate multiple data points, which means the models excel in suburban subdivisions, which tend to be majority-White. AVMs do not work as well in neighborhoods of color, which tend to have older properties with varied repairs and upgrades. Also, consumers of color tend to stay in their homes longer, which means fewer sales data points for the AVM.
* * *
Marketing Systems and Digital Redlining
The use of AI and other algorithms has changed the face of the billion-dollar
Fair housing groups challenged
* * *
35
36 See, e.g.,
37
38 See,
* * *
Use of Large Language Models in Housing and Financial Services and Potential Challenges
Large Language Models (LLMs) are a significant advancement in natural language processing and AI. These models are characterized by their ability to process, understand, and generate human-like text. Some of their capabilities are:
1. Capabilities in Language Generation: LLMs offer unparalleled capabilities in language generation, which opens exciting opportunities for interaction design. They are highly context-dependent and can adapt to a wide range of linguistic styles and nuances.41
2. Importance in Psycholinguistics: While LLMs are not precise models of human linguistic processing, their success in language modeling makes them significant in the field of psycholinguistics. They serve as practical tools for exploring language and thought relationships.42
3. Enhancing Creativity: AI LLMs have contributed to creative writing, including newspaper articles, novels, and poetry. These models can generate creative and original text, demonstrating their potential in diverse creative applications.43
In summary, large language models represent a significant leap in AI's ability to interact with and understand human language. They are crucial for a variety of applications, from enhancing creative writing to contributing to the understanding of human language processing, despite facing challenges in certain aspects of language comprehension. However, due to a lack of transparency and the fact that LLM applications are yet to mature, it is not immediately clear how lenders are using LLM applications in their product and service lines.
* * *
39 See, NFHA, et al. v. Redfin investigation summary, available at https://nationalfairhousing.org/redfin-investigation/.
40 See, NFHA Press Release,
41
42
43 See id.
* * *
That said, LLMs can significantly impact the housing and financial services sectors, offering innovative solutions while presenting unique challenges. Here are some key use cases and potential challenges:
1. Use Cases in Housing and Financial Services:
a. Enhanced Customer Service: LLMs may improve customer service in housing and financial services by providing more accurate and efficient responses to customer inquiries.
b. Automated Document Analysis: These models can analyze legal documents, loan applications, and other paperwork, speeding up processes and potentially reducing human error (see the sketch following this list).
c. Predictive Analytics: AI-driven predictive analytics can help in understanding market trends, predicting housing prices, and identifying investment opportunities in the financial sector.
d. Personalized Financial Advice: LLMs can offer personalized financial advice based on individual customer data, improving the customer experience in financial services.
e. Risk Assessment and Management: AI can play a crucial role in assessing risk in loan approvals, insurance underwriting, and investment decisions.
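As referenced in item (b) above, the following hedged sketch shows what automated document analysis might look like using one example interface, the OpenAI Python client. The model name, field list, and application text are assumptions; any production use would require validation and human review.

```python
# Sketch of the document-analysis use case: asking a hosted LLM to pull
# structured fields from loan-application text. The OpenAI client is one
# example interface; the model name and field list are assumptions, and a
# production system would need human review, not just raw model output.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

application_text = """Applicant: J. Doe. Requested amount: $240,000.
Stated annual income: $85,000. Property: single-family, owner-occupied."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever is available
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "Extract loan_amount, annual_income, and occupancy as JSON."},
        {"role": "user", "content": application_text},
    ],
)
fields = json.loads(response.choices[0].message.content)
print(fields)
```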
* * *
2. Potential Challenges:
a. Data Privacy and Security: Ensuring the confidentiality and security of sensitive personal and financial data is a major challenge, especially considering the vast amount of data processed by AI systems.
b. Bias and Fairness: There is a risk of AI systems inheriting biases from tainted training data, which can lead to unfair practices in loan approvals, housing allocations, financial advisories, and other instances.
c. Regulatory Compliance: AI systems must comply with existing financial regulations and housing laws, which can be complex and vary across states and regions.
d. Integration with Existing Systems: Integrating AI technologies with legacy systems in housing and financial services can be challenging due to compatibility issues.
e. Reliability and Accuracy: Ensuring the reliability and accuracy of AI-driven decisions and outputs, especially in critical housing and financial situations, is paramount.
f.
g. Challenges in Word Understanding: Despite their advanced capabilities, LLMs sometimes struggle with understanding words based on their dictionary definitions, indicating areas for improvement in word comprehension.44
While LLMs offer promising opportunities for innovation in housing and financial services, addressing challenges related to data security, bias, regulatory compliance, and public trust will be key to their successful implementation and acceptance.
* * *
44
* * *
Using AI to Promote Fairer Outcomes
Debiasing AI Systems Using Fairness Techniques
Algorithmic fairness techniques play a crucial role in mitigating biases and ensuring equitable outcomes in AI systems. These techniques can be categorized into pre-processing, in-processing, and post-processing methods, each addressing biases at different stages of the AI model lifecycle.
1. Pre-processing Techniques: These techniques focus on the initial stages of AI development, where the primary goal is to rectify biases present in the training data before it is fed into the model. By identifying and modifying biased data, pre-processing methods aim to prevent the AI system from learning and perpetuating these biases.45
2. In-processing Techniques: These are applied during the model training phase and involve modifying the learning algorithm to incorporate fairness. This might include adjusting the model's objective function to balance accuracy with fairness criteria. One approach is the integration of fairness constraints directly into the training process, as discussed by some researchers in their exploration of unified data and algorithm fairness through adversarial data augmentation and adaptive model fine-tuning.46
3. Post-processing Techniques: These methods are applied after the model has been trained. They adjust the output of the model to ensure fairness, often through recalibration of decision thresholds for different groups (see the sketch following this list). This approach is particularly useful in scenarios where modifying the training process is not feasible or when dealing with legacy systems. The work of Berkel et al. on the effect of information presentation on fairness perceptions of machine learning predictors provides insights into how post-processing interventions can alter people's perceptions of AI fairness.46
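As referenced in item 3 above, the following minimal sketch illustrates the post-processing idea: after training, a separate decision threshold is chosen per group so that approval rates match a common target. The scores, groups, and 50% target are invented, and real deployments must weigh competing fairness definitions, not just demographic parity.

```python
# Minimal sketch of post-processing for fairness: after a model is trained,
# pick a separate decision threshold per group so that approval rates match
# a common target. Scores and the target rate are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=200)              # model scores in [0, 1]
groups = rng.choice(["A", "B"], size=200)   # protected-class proxy
scores[groups == "B"] *= 0.8                # simulate a model biased against B

target_rate = 0.5
thresholds = {
    g: np.quantile(scores[groups == g], 1 - target_rate)
    for g in ("A", "B")
}
approved = scores >= np.vectorize(thresholds.get)(groups)

for g in ("A", "B"):
    print(g, f"approval rate: {approved[groups == g].mean():.2f}")
```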
In summary, algorithmic fairness techniques are integral to developing AI systems that are equitable and unbiased. These methods, spanning from the initial data preparation to the final model output, help in creating AI solutions that are not only effective but also fair and just.
* * *
Using AI to Detect the Risk of Discriminatory Practices and Policies
AI can significantly assist in the identification and mitigation of the risk of discriminatory practices and policies, particularly in housing and employment. This potential is explored through various stages and tools, including dataset analysis, internet or website crawling, and other enforcement-based AI tools. The following are some AI-driven approaches that support the identification and mitigation of discriminatory practices and policies, providing more equitable outcomes in housing, employment, and other critical areas.
* * *
Identifying Datasets that are Non-representative/Under-representative
AI can be used to identify datasets that are non-representative or under-representative of certain populations, which is crucial for uncovering discriminatory practices in housing and other areas. The work by Kamikubo et al. on the representativeness of accessibility datasets highlights the importance of ensuring that datasets are representative and inclusive of various demographic groups to mitigate bias in AI applications.47
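A representativeness audit of this kind can start very simply: compare the dataset's group shares against a benchmark distribution and flag shortfalls. The shares below are invented; a real audit would draw its benchmarks from census or other authoritative data.

```python
# Sketch of a representativeness check: compare a dataset's group shares to a
# benchmark (e.g., census) distribution and flag under-represented groups.
# All shares are invented for illustration.
dataset_share = {"Group A": 0.72, "Group B": 0.10, "Group C": 0.18}
benchmark_share = {"Group A": 0.60, "Group B": 0.19, "Group C": 0.21}

for group, bench in benchmark_share.items():
    ratio = dataset_share.get(group, 0.0) / bench
    if ratio < 0.8:  # flag groups at under 80% of expected representation
        print(f"{group}: under-represented (ratio {ratio:.2f})")
```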
* * *
45 Nengfeng Zhou,
46
* * *
Internet or Website Crawling Tools
Internet or website crawling tools are instrumental in gathering online data, including housing listings and employment advertisements, which can then be analyzed for patterns of potential discrimination. Crawling can extract large amounts of data, enabling AI systems to analyze trends and detect potentially discriminatory language or practices. These tools allow users to search for discriminatory phrases and comments that can impede access to fair housing and lending opportunities.
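A minimal crawl-and-scan workflow might look like the following sketch. The URL and phrase list are hypothetical, and any real tool would use a much richer pattern library and respect robots.txt and site terms of service.

```python
# Sketch of a crawl-and-scan workflow: fetch rental listing pages and flag
# phrases that fair housing testers commonly treat as red flags. The URL and
# phrase list are hypothetical illustrations only.
import requests
from bs4 import BeautifulSoup

RED_FLAG_PHRASES = [
    "no section 8", "adults only", "no kids",
    "english speakers only", "ideal for professionals",
]

def scan_listing(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ").lower()
    return [phrase for phrase in RED_FLAG_PHRASES if phrase in text]

# Hypothetical listing URL used purely for illustration:
hits = scan_listing("https://example.com/listings/123")
print("Potentially discriminatory phrases:", hits)
```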
* * *
Other Enforcement-based AI Tools
AI tools, particularly those leveraging natural language processing (NLP) and large language models, can analyze textual data to identify discriminatory language or patterns in documents, policies, and communications. For example, Lobo's work on bias in hate speech and toxicity detection illustrates how AI can identify discriminatory or biased language in large datasets, which can be applied to analyze housing and employment-related texts.48
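One hedged illustration of this approach uses an off-the-shelf text classifier. The model named below (unitary/toxic-bert, a publicly available toxicity model) is an assumed stand-in; purpose-built enforcement tooling would use models trained on fair housing language and keep a human reviewer in the loop.

```python
# Sketch of NLP-based screening: run listing or policy text through an
# off-the-shelf text classifier. The model is a public toxicity model used
# here as a stand-in; it is not a housing-specific discrimination detector.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

texts = [
    "Spacious two-bedroom near transit, all applications welcome.",
    "We don't rent to people like you.",
]
for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```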
* * *
Expanding Inclusive and
Using AI to
AI can significantly enhance the depth and effectiveness of housing and lending policy, enforcement, and legal research. This enhancement can be achieved through several applications:
1. Dataset Analysis for Policy and Enforcement: AI is instrumental in analyzing vast datasets to identify trends and patterns that might indicate discriminatory practices in housing and lending. The analysis can focus on various aspects, such as loan approval rates, interest rates charged, and geographical distribution of loans (see the sketch following this list). These insights are critical for shaping fair housing policies and for enforcing anti-discrimination laws. The work by Wyly and Holloway on the disappearance of race in mortgage lending highlights the importance of data analysis in understanding and addressing issues in housing markets.49
2. Enforcement-based AI Tools Using NLP: Natural Language Processing (NLP) and large language models can be leveraged to analyze legal documents, policy papers, and communication records for housing and lending practices. These AI tools can identify discriminatory language patterns and assist in the legal interpretation of housing policies and practices. The research by Ntoutsi et al. on bias in data-driven AI systems provides an overview of the challenges and solutions in detecting bias, which is applicable to housing and lending legal research.50
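As referenced in item 1 above, a first-pass dataset analysis can be as simple as grouping loan-level records by a demographic field and comparing outcomes. The records below are invented stand-ins for HMDA-style data.

```python
# Sketch of the dataset analysis described in item 1: group loan outcomes by
# a demographic field and compare approval rates and priced interest rates.
# The records are invented stand-ins for HMDA-style loan-level data.
import pandas as pd

loans = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
    "rate_pct": [6.1, 6.3, None, 7.2, None, None],
})

summary = loans.groupby("group").agg(
    approval_rate=("approved", "mean"),
    avg_rate_pct=("rate_pct", "mean"),
)
print(summary)
```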
* * *
47
48
49
* * *
Leveraging AI to
AI technologies can analyze alternative data sources, such as cash-flow underwriting data and rental housing payment histories, which are particularly beneficial for individuals with limited credit histories. AI and big data can capture weak signals in creditworthiness assessments, thereby improving financial inclusion and access to credit for traditionally underserved borrowers.51 This approach expands the scope of data considered for lending decisions, making them more inclusive and better aligned with the traditional way lending decisions were assessed using the "5 Cs of credit": character, cash flow, capital, conditions, and collateral.
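The following sketch illustrates the kind of cash-flow signal such underwriting might compute from transaction history. The transactions and the residual-income metric are invented for illustration.

```python
# Sketch of a cash-flow underwriting signal: summarize inflows, outflows, and
# rent payment regularity from transaction history, the kind of alternative
# data that can stand in for a thin credit file. Transactions are invented.
transactions = [  # (description, amount): positive = inflow, negative = outflow
    ("payroll deposit", 2_600), ("rent payment", -1_200),
    ("utilities", -180), ("payroll deposit", 2_600),
    ("rent payment", -1_200), ("groceries", -430),
]

inflows = sum(a for _, a in transactions if a > 0)
outflows = -sum(a for _, a in transactions if a < 0)
rent_payments = sum(1 for d, a in transactions if "rent" in d and a < 0)

residual_income = inflows - outflows
print(f"Residual income over period: ${residual_income:,}")
print(f"On-time rent payments observed: {rent_payments}")
```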
* * *
Using AI to Fix the Non-representative/Under-representative Data Problem
AI can be used to identify and rectify biases in non-representative or under-representative datasets, which is crucial for ensuring equitable housing and lending practices. The work of Jain and Verma highlights the importance of AI in making the credit underwriting process more accurate.52 By refining data to be more representative, AI can help lenders make more fair and equitable decisions.
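One well-known pre-processing repair, sketched below with invented data, is "reweighing": each record is weighted by P(group) x P(label) / P(group, label) so that group membership and outcome become statistically independent in the training set.

```python
# Sketch of a pre-processing repair for under-representative data: the
# classic "reweighing" idea, where each (group, label) cell gets weight
# P(group) * P(label) / P(group, label). Records are invented.
from collections import Counter

records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
n = len(records)
group_counts = Counter(g for g, _ in records)
label_counts = Counter(y for _, y in records)
joint_counts = Counter(records)

weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
    for (g, y) in joint_counts
}
for key, w in sorted(weights.items()):
    print(key, round(w, 3))
```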
* * *
Optimizing Privacy Protections Via AI Techniques
Privacy-enhancing techniques are essential in AI to ensure the protection of consumer data while utilizing it for housing and lending purposes.53 The work of Chen et al. shares lessons learned during the development of frameworks that aid in the correct use of privacy-enhancing technologies like homomorphic encryption and secure multi-party computation.54 These technologies are vital for developing AI systems that are both effective and respectful of consumer privacy.
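As a toy illustration of one such technique, the sketch below implements additive secret sharing, a building block of secure multi-party computation: three parties learn the sum of their private values without revealing any individual input. This is an educational toy, not production cryptography.

```python
# Toy additive secret sharing (a secure multi-party computation primitive):
# each private value is split into random shares that sum to it modulo MOD,
# so parties can jointly compute a sum without seeing one another's inputs.
import secrets

MOD = 2**61 - 1

def share(value: int, n_parties: int) -> list[int]:
    """Split a value into n additive shares that sum to it modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

private_inputs = [42_000, 58_500, 61_250]   # e.g., incomes held by 3 parties
all_shares = [share(v, 3) for v in private_inputs]

# Each party i sums the i-th share of every input, revealing only that sum.
partial_sums = [sum(s[i] for s in all_shares) % MOD for i in range(3)]
total = sum(partial_sums) % MOD
print("Joint sum:", total)   # 161750, with no individual input revealed
```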
* * *
50 Eirini Ntoutsi et al., Bias in Data-Driven AI Systems: An Introductory Survey.
51
52
53
54
* * *
Using AI to Fix Zoning Challenges
Exclusive and restrictive zoning policies have been used over the decades to generate residential segregation, reduce affordable housing options, and thwart fair housing efforts.55 These policies contribute to the
There are two key laws that prohibit discrimination in lending and housing: the Fair Housing Act and the Equal Credit Opportunity Act ("ECOA"). The Fair Housing Act prohibits any entity from discriminating in housing and mortgage lending on the basis of race, color, religion, national origin, sex (including sexual orientation), disability, and familial status (also known as "protected characteristics" or "protected classes" or "prohibited bases").61
* * *
55 Fair Share Housing Center, Dismantling Exclusionary Zoning:
56
57
58
59
60 Jurisdictions receiving federal funds for a housing or community development purpose must ensure all their laws, programs, and services are implemented in compliance with the Federal Fair Housing Act and in a manner that
* * *
The Fair Housing Act also requires entities receiving federal funds for a housing or community development purpose to disseminate those funds, as well as implement their programs and services in a way that
Disparate impact discrimination occurs when (1) a facially neutral policy or practice disproportionately harms members of a protected class, and either (2) the policy or practice does not advance a legitimate interest, or (3) a less discriminatory alternative exists that would serve the legitimate interest. Disparities alone are not sufficient to impose disparate impact liability, and entities are not required to sacrifice legitimate business needs or ignore relevant business considerations. Disparate impact doctrine only requires entities to avoid unnecessary considerations that disproportionately harm members of protected classes. An example of potential disparate impact discrimination would be an AI model that considers a mortgage applicant's arrest record, which is not predictive of default risk.
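A common first-pass screen for such disparities, sketched below with invented numbers, is the "four-fifths rule" borrowed from employment guidelines: a group's selection rate below 80% of the most-favored group's rate is a flag for further review, not by itself proof of liability.

```python
# Worked sketch of a first-pass disparate impact screen: compare each group's
# approval rate to the most-favored group's rate. A ratio below 0.8 (the
# "four-fifths rule") is a common flag for further review. Numbers invented.
approvals = {"Group A": (80, 100), "Group B": (58, 100)}  # (approved, applied)

rates = {g: ok / total for g, (ok, total) in approvals.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```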
Institutions should be aware that they need adequate Compliance Management Systems (CMS) to monitor and test AI models for potential discrimination, search for less discriminatory alternatives, and implement appropriate controls for fair lending or fair housing risk.65 Courts and agencies have for decades recognized that disparate impact liability exists under laws like the Fair Housing Act, ECOA, and Title VII (prohibiting employment discrimination).66 Disparate impact liability has been a part of Regulation B, which implements ECOA, since the 1970s and was included in the federal agencies' 1994 Policy Statement on Discrimination in Lending.67 It has also been sanctioned by the
* * *
61 The Fair Housing Act: 42 U.S.C. Sec. 3601, et seq.; HUD's implementing regulation: 24 CFR Part 100.
62 ECOA: 15 U.S.C. Sec. 1691(a);
63 Based on legal precedent, the federal financial regulators have also based fair lending risk assessments on these theories of discrimination. See
64 See 12 C.F.R. Part 1002, 4(a)-1: "Disparate treatment on a prohibited basis is illegal whether or not it results from a conscious intent to discriminate."
65 See, e.g., Relman,
66 See, e.g., 12 C.F.R. part 1002, Supp. I, 6(a)-2 (ECOA articulation); 24 C.F.R. Sec. 100.500(c)(1) (FHA articulation); 42 U.S.C. Sec. 2000e-2(k) (Title VII articulation).
* * *
Policy Recommendations
The United States Must Enact Comprehensive Legislation to Advance Responsible AI
While there are significant risks of bias and discrimination in AI systems and their impact on housing and financial services, the risks are not insurmountable. The
Ensuring Strong Civil and Human Rights Protections
* * *
67
68 See Texas Dep't of Hous. & Cmty. Affairs v. Inclusive Cmtys. Project, 135 S. Ct. 2507 (2015).
69 The agencies recently replaced each of their separate policies dating as early as 2008 for joint guidance. See
* * *
Ensuring Compliance with Existing Civil Rights and Consumer Protection Laws
First, all of the federal agencies with responsibility for supervision and/or enforcement of the Fair Housing Act and/or ECOA (collectively, the Agencies)70 should emphasize that discrimination in AI models is illegal, including AI models developed or deployed by third parties. In 2023, the
Second, consistent with the Uniform Interagency Consumer Compliance Rating System72 and the Model Risk Management Guidance,73 the Agencies should ensure that financial institutions have appropriate Compliance Management Systems that effectively identify and control risks related to AI models, including the risk of discriminatory outcomes for consumers. Where a financial institution's use of AI indicates weaknesses in its Compliance Management System or violations of law, the Agencies should use all of the tools in their toolbelt to quickly address and prevent consumer harm, including issuing supervisory Matters Requiring Attention; entering into a non-public enforcement action, such as a Memorandum of Understanding; and referring a pattern or practice of discrimination to the Department of Justice.
* * *
(Continues with Part 2 of 2)
* * *
Original text here: https://www.banking.senate.gov/imo/media/doc/rice_testimony_1-31-24.pdf


