Senate Judiciary Subcommittee Issues Testimony From Lawyers' Committee for Civil Rights Under Law President Hewitt
* * *
I. Introduction
Chair Klobuchar, Ranking Member Lee, and Members of the Subcommittee, thank you for the opportunity to testify today about the impact of algorithms on consumer and civil rights. My name is Damon Hewitt, and I am the President and Executive Director of the Lawyers' Committee for Civil Rights Under Law.
The Lawyers' Committee is a nonpartisan, nonprofit organization, formed in 1963 at the request of President John F. Kennedy to mobilize the nation's leading lawyers as agents for change in the Civil Rights Movement.
Equal opportunity and civil rights are intertwined with technological advancement. In the consumer context, algorithms are used to make decisions about all aspects of people's lives, determining who can rent a house, who can get a loan, who can get a deal, and, consequentially, who cannot. One of the greatest civil rights challenges of our generation is to ensure that our new data-driven economy does not replicate or amplify existing discrimination, and that technology serves all of us. But achieving the full measure of freedom in a data-driven economy also requires freedom from discrimination, which is increasingly amplified online through algorithmic bias, digital redlining, and pervasive surveillance.
Although algorithmic systems are widely used, they pose a high risk of discrimination, disproportionately harming Black communities and other communities of color. Because these algorithmic technologies are typically built using societal data that reflects generations of discriminatory practices such as redlining and segregation, they often replicate and reinforce past patterns of discrimination. The tools of the future lock us into the mistakes of the past.
Commercial surveillance and algorithmic decision-making reinforce a separate and unequal society, and correspondingly an unequal market. Each click, habit, and individual characteristic is collected and cataloged to discern not just preferences, but sensitive data about individuals' race, religion, gender, and other traits--or proxies for them. Algorithmic systems use this information to determine what products consumers see, what price or interest rate they are quoted, and what opportunities they are eligible for.
Algorithmic collusion and discrimination present a stark market barrier between what is and what could be. The harms of algorithmic discrimination are already denying millions of Americans equal opportunity in our economy. Instead of aiding consumers, AI tools too often create distortions in the marketplace, reflecting exclusion rather than fairness for consumers and creating closed doors in the virtual world that have discriminatory effects in real life. Left unchecked, these harmful impacts will continue to grow as AI becomes ingrained in every aspect of our lives.
Just as the struggles of the civil rights movement culminated in milestone civil rights laws, so too does this struggle need to culminate in new protections. The time has come for
First, AI regulation should protect Americans' civil rights. Legislation should establish anti-discrimination protections to close gaps in existing law created by novel online and algorithmic contexts. It should also mandate pre- and post-deployment testing to evaluate algorithmic systems for discrimination and bias. All too often, consumers--especially Black and Brown consumers--end up as unwilling test subjects for unsafe algorithms and consumer technologies. Algorithmic harm needs to be identified, prevented, and mitigated as a fundamental part of the development process.
Second, AI legislation should establish baseline consumer protections. These should include a duty of care to require that products are safe and effective, data privacy to protect consumers' personal information, and transparency and explainability requirements so that consumers know when, how, and why AI is being used. Consumers should be empowered to make fully informed decisions about how they interact with algorithmic systems, and companies should take adequate steps to make sure that their products work as intended.
Third, AI regulation should establish robust oversight and enforcement.
Almost sixty years ago, we decided as a nation that our polity is stronger when everyone has a fair chance.
II. Segregation and Redlining Produced Inequities that Persist Today and Affect the Data Flowing In and Out of Algorithmic Systems
It should be no surprise that when you draw data from a society with a bedrock history of systemic inequity, the data will be steered by that history. Generations of institutionalized oppression of Black Americans--through slavery, segregation, redlining, and disenfranchisement--are an inescapable part of American history whose present-day effects are embedded in the foundation of our society.
Contemporary commercial surveillance practices that undergird modern algorithmic tools originated in the era of separate-but-equal segregation, which denied equal opportunity to millions of Black people. The analog version of a discriminatory algorithm was redlining, which deprived Black people of intergenerational wealth and health. This one historic algorithmic system--using racial geography as part of a formula for determining government subsidies for homeownership, built on top of segregated housing--has caused a century of devastating downstream effects with no end in sight.
The consequences of residential and educational segregation are still with us.1 Disparities in employment and credit opportunities, and resulting disparities in intergenerational wealth generation, are still endemic.2 Access to healthcare and clean environments is unequal.3 The ongoing consequences of segregation are legion:
1 See, e.g.,
2 See, e.g.,
3 See, e.g.,
* * *
..."investment in construction; urban blight; real estate sales; household loans; small business lending; public school quality; access to transportation; access to banking; access to fresh food; life expectancy; asthma rates; lead paint exposure rates; diabetes rates; heart disease rates; and the list goes on."4
The effects of discrimination are literally shortening the lives and health-spans of Black Americans, manifesting as disproportionate incidences of inflammatory diseases.5 As the
These effects now manifest in data about Black communities and other communities of color--data that will be collected by technology companies, fed into algorithms, and used to make decisions affecting the lives of the people in those communities. Too often this data is used to deny equal opportunities and freedoms.
III. Inheriting the Mistakes of the Past: How Algorithmic Systems Disproportionately Harm and Discriminate Against Black Communities and Other Communities of Color
In a society scaffolded on top of the consequences of institutionalized oppression, algorithmic systems often reproduce discrimination. At the root of algorithmic bias is the reckless, if not knowing or intentional, application of machine learning techniques to massive troves of data drawn from a society blighted by systemic inequity--and the lazy presumption that what came before is what will be. The through-lines for the data are often race, gender, and other immutable traits. When an algorithm executes its mission of creating efficiency by finding hidden correlations, it will often mistake the long-term consequences of discrimination and inequality for an individual's preferences and traits.7 These mistaken shortcuts fail to account for the fact that while a person may be in part a product of their...
4 Leaders of a Beautiful Struggle v. Baltimore Police Dep't, 2 F.4th 330, 349 (4th Cir. 2021) (en banc) (Gregory, C.J., concurring).
5 See, e.g.,
6 Jones v.
7 See generally WHITE HOUSE OFF. OF SCI. & TECH. POL'Y, Blueprint for an AI Bill of Rights 24-25 (2022) [hereinafter Blueprint], https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf;
* * *
...circumstances, that does not mean they necessarily are or should be limited by those circumstances. Expediency is no excuse for segregation.
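The mechanics of this mistake can be made concrete. The following minimal sketch uses entirely synthetic, hypothetical data and invented variable names to show how a model that is never shown race can still reproduce racially disparate outcomes, because a facially neutral input shaped by redlining (here, a neighborhood income figure) carries the correlation forward:

```python
import numpy as np

# Synthetic, hypothetical data for illustration only. By construction, true
# creditworthiness ("ability") is identically distributed across both groups;
# only neighborhood income differs, reflecting the legacy of redlining.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # 0 or 1; never shown to the model
zip_income = rng.normal(70 - 20 * group, 10, n)  # depressed incomes for group 1
ability = rng.normal(50, 10, n)                  # identical across groups

# Historical approvals keyed partly on neighborhood, not just on ability.
approved = (0.6 * ability + 0.4 * zip_income + rng.normal(0, 5, n)) > 55

# A "race-blind" model fit only on (ability, zip_income) learns the proxy anyway.
X = np.column_stack([np.ones(n), ability, zip_income])
w, *_ = np.linalg.lstsq(X, approved.astype(float), rcond=None)
pred = X @ w > 0.5

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# Approval rates diverge sharply even though ability is identical by design:
# the model mistakes the consequences of redlining for individual merit.
```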
The bottom line is that if existing civil rights laws and agency resources were sufficient to address algorithmic discrimination, then the problem would not be pervasive in the first place. As a result of gaps in federal law, individuals currently have limited recourse against discriminatory algorithms and AI models used in commercial settings that reinforce the structural racism and systemic bias that pervade our society. Technology companies can misuse personal data, intentionally or unintentionally, to harm marginalized communities through deception, discrimination, exploitation, and perpetuation of redlining. Absent updated and robust anti-discrimination protections, online businesses may be able to deny service on the basis of race or ethnicity, provide subpar products based on gender or sexual orientation, charge higher rates based on religion, or ignore the accessibility needs of persons with disabilities.
"Just as neighborhoods can serve as a proxy for racial and ethnic identity, there are new worries that big data technologies could be used to 'digitally redline' unwanted groups, either as customers, employees, tenants, or recipients of credit." 8 This dynamic is deeply contrary to cornerstone principles and promises of equal access and a level playing field for everyone. Without strong privacy and online civil rights protections, discrimination will continue to infect the digital marketplace. Not surprisingly, extensive documentation demonstrates that consumers of color continue to receive worse treatment and experience unequal access to goods and services due to discriminatory algorithms and exploitative data practices. These harms are widespread across sectors, including housing, employment, credit, insurance, healthcare, education, retail, and public accommodations (see Appendix I).
In advertising, for example,
8 EXEC. OFF. OF THE PRESIDENT, Big Data: Seizing Opportunities, Preserving Values 53 (2014), https://obamawhitehouse.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf; see also
9
10
* * *
...advertisers to select which zip codes to include or exclude from receiving an ad, and draw a red line on a map showing the excluded neighborhoods.11 Academic research suggests that Meta also used ad delivery algorithms that reproduce discrimination even when the advertiser did not intend to discriminate, once again in the housing, credit services, and employment contexts.12 Similar practices have been the target of investigations, including at Twitter and Google.13
Retail websites have been found to charge different prices based on the demographics of the user.14 For example, an online shopper's distance from a physical store, as well as distance from the store's competitors, has been used in algorithms setting online prices, resulting in price discrimination. Because of historical redlining and segregation, and the resulting lack of retail options in many low-income neighborhoods, this practice has meant that low-income communities of color pay higher prices online than wealthier, whiter neighborhoods.
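A minimal, hypothetical sketch can illustrate the dynamic; all neighborhood names, distances, and the three-percent-per-mile markup below are invented. A pricing rule keyed only to distance from the nearest competitor silently translates residential segregation into disparate prices:

```python
# Hypothetical sketch: a facially neutral pricing rule that charges more
# where the shopper has fewer nearby alternatives. Because of historical
# segregation, competitors cluster outside formerly redlined neighborhoods.
neighborhoods = [
    # (name, miles to nearest competitor, historically redlined?)
    ("Northside", 1.2, False),
    ("Lakeview", 0.8, False),
    ("Westgate", 6.5, True),
    ("Old Ward 9", 7.1, True),
]

def quote_price(base: float, miles_to_competitor: float) -> float:
    """Invented rule: add 3% to the base price per mile from the competition."""
    return round(base * (1 + 0.03 * miles_to_competitor), 2)

base = 20.00
for name, miles, redlined in neighborhoods:
    label = "redlined" if redlined else "non-redlined"
    print(f"{name:10s} ({label:12s}): ${quote_price(base, miles):6.2f}")
# Race never appears in the rule, yet the formerly redlined neighborhoods
# are quoted roughly 15 to 20 percent more for the same goods.
```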
In housing, algorithmic tools that are used to identify prospective home loan applicants or tenants can cement and reflect centuries of discrimination. For instance, a review of over two million mortgage applications found that Black applicants were 80 percent more likely to be rejected by mortgage approval algorithms when compared with similar white applicants.15 In fact, Black applicants with less debt than white applicants are still more likely to be rejected for a mortgage. Similarly, in 2023, reporters discovered that the algorithmic scoring system used by
...https://www.washingtonpost.com/business/2019/03/28/hud-charges-facebook-with-housing-discrimination/.
11 "[
12
13 Id.
14
15
* * *
...the
Discrimination in the insurance market is also common. Scoring algorithms used by auto insurers judge applicants "less on driving habits and increasingly on socioeconomic factors,"17 resulting in higher rates and fewer options for residents of majority Black neighborhoods.18 These disparities are so significant that, in some instances, insurers charge rates more than 30 percent higher in Black and Brown neighborhoods, regardless of neighborhood affluence.19 In another case, Allstate attempted to use a personalized pricing algorithm in
Similarly, algorithmic tools used in consumer financial markets often determine who can access a loan or credit based on a consumer's identity.22 In 2020, at a time of historically low interest rates and an opportunity to lock in the ability to build long-term home equity, Wells Fargo's algorithms racially discriminated in mortgage refinancing, rejecting over half of Black applicants, while approving over...
16
17 CONSUMER REPS.,
18 See
19
20
21 See
22 See
* * *
...70 percent of white applicants.23 Even when consumers of color are able to access financial services, they are often charged higher rates as a result of "algorithmic strategic pricing." One study found that the use of such tools resulted in Black and Latino borrowers paying higher interest rates on home purchase and refinance loans when compared to similar white borrowers. The difference alone costs Black and Latino customers
These harms occur primarily in three ways. First, a company holding personal data uses it to directly discriminate against people of color or other marginalized groups; second, a company holding personal data makes it available to other actors who use it to discriminate; or third, a company designs its data processing practices in a manner that negligently, recklessly, or knowingly causes discriminatory or otherwise harmful results--e.g., algorithmic bias or promotion of disinformation. But the bottom line is that if these companies and data brokers were not collecting, aggregating, and using vast quantities of personal data in privacy-invasive ways in the first place, many of these harms would not happen or would be far more difficult to execute.25
The common denominator in these examples is the sloppy or abusive use of personal data and algorithmic tools. By prohibiting algorithmic discrimination, mandating data minimization and other privacy protections, and requiring companies to test and prove that their algorithms are safe and effective, many harms can be prevented or mitigated.
IV. Consumers Cannot Reasonably Avoid the Harms from Opaque, Discriminatory Algorithms; The Act of Avoidance Is Itself Harmful.
23
24
25 See
* * *
Consumers are unable to avoid the risk of substantial injury posed by algorithmic systems because those systems' operations are opaque. Although algorithmic discrimination is widespread, many consumers who are harmed are often unaware that they have been impacted by an algorithm in the first place. Even when consumers are aware that they may have been affected by an algorithm, there is often little transparency about how an algorithm made a decision about a given opportunity or service. Low-income consumers, in particular, may lack the resources or opportunities to fight or avoid exploitative practices or products. It is unrealistic and unfair to expect consumers to avoid algorithmic systems when they do not know they have been subjected to one or how its decision affected them.
Due to the "black box" nature of many algorithmic systems, consumers cannot reasonably avoid the harms of discrimination. As
Discrimination in this context is not a product feature touted on a box and weighed in the aisle of a marketplace. In the context of algorithmic systems, consumers typically have no way of knowing what factors a firm uses to make decisions about the opportunities, products, or services offered to the consumer and no way to discern which firms are discriminating and which are not. Moreover, there often are intermediary firms, service providers, or other third parties in between the consumer and the opportunity--such as advertising networks or assessment tools for prospective employees--and those intermediaries may engage in discrimination.
26 See F.T.C., Joint Statement of Chair
27 See Boynton,
* * *
But even if a consumer knows that an algorithm discriminates against them, they may be unable to avoid using it. For example, someone seeking housing, employment, insurance, or credit may have no choice but to submit to an automated decision-making tool even if they know it is unfair.28 Or due to market concentration, a consumer may have little or no access to online services without subjecting themselves to discrimination. The
When a firm imposes a greater burden on some people to access opportunities because of their protected characteristics, the additional time, money, effort, or humiliation to overcome that hurdle is an injury.30 The "imposition of a barrier" creates "the inability to compete on equal footing."31 Thus, even if alternative services are available, and even if they are equal, it is inherently unjust and unfair to place on consumers the burden of avoiding the harm. An individual cannot reasonably avoid discrimination because the very act of avoidance itself is a form of segregation that causes a substantial injury.
This is why the
28 See supra Sec. III.
29 F.T.C., A Look At What ISPs Know About You: Examining the Privacy Practices of Six Major Internet Service Providers iii (
30 See, e.g., Heckler, 465 U.S. at 740.
31 Ne. Fla. Chapter of Ass'n Gen. Contractors of Am. v.
32 Blueprint.
33 Exec. Order No. 14091, 88 Fed. Reg. 10825 (
34 Exec. Order No. 14110, 88 Fed. Reg. 75191 (
35 OFF. OF MGMT. & BUDGET, EXEC. OFF. OF THE PRESIDENT, Proposed Memorandum, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (
* * *
V. Solutions
The persistence and proliferation of such discriminatory conduct highlights the need for further action. To address these harms,
First, AI regulation must seriously address algorithmic bias and discrimination through bright-line rules and effective examination of the technology deployed. Legislation should include specific anti-discrimination provisions to prohibit algorithmic discrimination, including disparate impact. These should include narrow but reasonable exceptions for self-testing to prevent or mitigate bias and for diversifying a consumer pool through outreach to underserved communities. The anti-discrimination provision from the bipartisan American Data Privacy and Protection Act, which passed out of the
Second, AI should be evaluated and assessed, both before and after deployment, for discrimination, bias, and other harms. Legislation should require developers and deployers to engage an independent auditor to evaluate the algorithm's design, how it makes or contributes to decisions about significant life opportunities, how it might produce harm, and how that harm can be mitigated. Deployers should then annually assess the algorithm as it is used, detailing any changes in its use or any harms it produces, including measuring disparate impacts. Developers should review these assessments to determine whether the algorithm needs modifications, and the evaluations, assessments, and reviews should be publicly shared and reported to a federal regulator. Sunlight is the best disinfectant.37 Reviewing algorithms in both the design and deployment phases proactively detects and prevents harm and promotes responsible innovation.
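As one concrete example of what such an assessment could measure, the short sketch below (with entirely hypothetical audit numbers) computes each group's selection rate relative to the most-favored group's rate; ratios below 0.8 echo the long-standing "four-fifths" guideline from employment testing as a screening signal of potential disparate impact:

```python
def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the most-selected group's rate.

    `outcomes` maps a group label to (number selected, number evaluated).
    A ratio below 0.8, the classic "four-fifths" guideline, is a common
    screening signal of potential disparate impact.
    """
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Entirely hypothetical audit numbers for a tenant-screening algorithm.
audit = {"white": (420, 600), "Black": (180, 400), "Latino": (150, 300)}
for group, ratio in adverse_impact_ratios(audit).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group:7s}: impact ratio {ratio:.2f} [{flag}]")
```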
Third, developers and deployers of AI should have a duty of care requiring that the products they offer are safe and effective, and they should be liable if those products are not. An algorithm is safe if it is evaluated by a pre-deployment assessment, reasonable steps are taken to prevent it from causing harm, and its use is not unfair or deceptive. An algorithm is effective if it functions as expected, intended, and publicly advertised. Legislation should also prohibit a developer or deployer from engaging in deceptive marketing, off-label uses, and abnormally dangerous activities. Establishing a duty...
36 See American Data Privacy and Protection Act ("ADPPA"), H.R. 8152, 117th Cong. Sec. 207(a) (2022), https://www.congress.gov/bill/117th-congress/house-bill/8152/text/rh.
37 See
* * *
...of care ensures that companies must take adequate steps to protect consumers and make sure that their products work as intended.
Fourth, AI regulation should include transparency and explainability requirements so that consumers know when, how, and why a company is using AI and how it affects them. Companies must provide individuals with easy-to-understand notices about whether and how an algorithmic system affects their rights. A regulator should be empowered to write rules for when and how a company needs to provide individualized explanations and rights to appeal decisions informed by AI. Companies also need to publish annual reports about their impact assessments and data practices. Without public transparency, individuals cannot make informed decisions about how they interact with algorithmic systems and are unable to seek redress when harm occurs.
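To make the idea of an individualized explanation concrete, the toy sketch below (invented feature names and weights, and a deliberately simple linear scoring model) surfaces the factors that most moved a single applicant's score, the kind of plain-language "why" such a notice could carry:

```python
# Toy sketch of an individualized "why" notice for a linear scoring model.
# Feature names and weights are invented for illustration.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_at_address": 0.2}

def explain(applicant: dict[str, float], top_n: int = 2) -> list[str]:
    """List the factors that moved this applicant's score the most."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [
        f"{name} {'raised' if value > 0 else 'lowered'} your score by {abs(value):.1f}"
        for name, value in ranked[:top_n]
    ]

print(explain({"income": 3.0, "debt_ratio": 4.0, "years_at_address": 1.0}))
# ['debt_ratio lowered your score by 3.2', 'income raised your score by 1.5']
```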
Fifth, there should be data protection requirements, so that AI is not trained on the data of people who have not consented and so that consumers' privacy is safeguarded. Training and testing AI to make it fair and unbiased requires not just a lot of personal data, but a lot of highly sensitive personal data, like race and sex information. For consumers to be willing to share that information, they need to be able to trust that it will not be misused for secondary purposes and that it will be kept secure. Developers and deployers should be required to collect and use only as much personal data as is reasonably necessary and proportionate to provide the services that consumers expect, and to safeguard that data. Developers should have an additional requirement to get affirmative express consent to use personal data to train algorithms. Individuals also should be able to access, correct, and delete their personal data. These data protection requirements are necessary to enable individuals to safely share their sensitive personal information without fear.
Sixth, legislation should establish robust oversight and multiple levels of enforcement. This must include a private right of action. Individuals need to be able to vindicate their own rights in court because, historically, Black people and other people of color could not rely on government agencies to defend their rights. There should also be enforcement by state attorneys general and a federal agency. The federal agency also needs regulatory authority to effectively mandate compliance with technical aspects of AI regulation, like auditing and transparency. Purveyors of algorithmic systems that infringe people's rights should not be immune from liability.
VI. Conclusion
While the threats of AI are often described as matters of futuristic science fiction, algorithmic tools are already harming Black people and other communities of color every day. As Vice President
38 SPEECHES AND REMARKS,
* * *
Original text here: https://www.judiciary.senate.gov/imo/media/doc/2023-12-13_pm_-_testimony_-_hewitt.pdf