Senate Judiciary Subcommittee Issues Testimony From Lawyers' Committee for Civil Rights Under Law President Hewitt

Targeted News Service

WASHINGTON, Dec. 31 -- The Senate Judiciary Subcommittee on Competition Policy, Antitrust, and Consumer Rights issued the following testimony by Damon T. Hewitt, president and executive director of the Lawyers' Committee for Civil Rights Under Law, involving a hearing on Dec. 13, 2023, entitled "The New Invisible Hand? The Impact of Algorithms on Competition and Consumer Rights":

* * *

I. Introduction

Chair Klobuchar, Ranking Member Lee, and Members of the Subcommittee, thank you for the opportunity to testify today about the impact of algorithms on consumer and civil rights. My name is Damon T. Hewitt, and I am the President and Executive Director of the Lawyers' Committee for Civil Rights Under Law ("Lawyers' Committee").

The Lawyers' Committee is a nonpartisan, nonprofit organization, formed in 1963 at the request of President John F. Kennedy to mobilize the nation's leading lawyers as agents for change in the Civil Rights Movement. Today, we use legal advocacy to achieve racial justice, fighting inside and outside the courts to ensure that Black people and other people of color have voice, opportunity, and power to make the promises of our democracy real. The Lawyers' Committee works at the intersection of racial justice, privacy, and technology to address predatory data practices, discriminatory algorithms, and other online harms that disproportionately affect people of color.

Equal opportunity and civil rights are intertwined with technological advancement. In the consumer context, algorithms are used to make decisions about all aspects of people's lives, determining who can rent a house, who can get a loan, who can get a deal, and consequentially--who cannot. One of the greatest civil rights challenges of our generation is to ensure that our new data-driven economy does not replicate or amplify existing discrimination, and that technology serves all of us. But achieving the full measure of freedom in a data-driven economy also requires freedom from discrimination, which is increasingly amplified online through algorithmic bias, digital redlining, and pervasive surveillance.

Although algorithmic systems are widely used, they pose a high risk of discrimination, disproportionately harming Black communities and other communities of color. Because these algorithmic technologies are typically built using societal data that reflects generations of discriminatory practices such as redlining and segregation, they often replicate and reinforce past patterns of discrimination. The tools of the future lock us into the mistakes of the past.

Commercial surveillance and algorithmic decision-making reinforce a separate and unequal society, and correspondingly an unequal market. Each click, habit, and individual characteristic is collected and cataloged to discern not just preferences, but sensitive data about individuals' race, religion, gender, and other traits--or proxies for them. Algorithmic systems use this information to determine what products consumers see, what price or interest rate they are quoted, and what they are eligible for.

Algorithmic collusion and discrimination present a stark market barrier between the promise of what is and what could be. The harms of algorithmic discrimination are already denying millions of Americans equal opportunity in our economy. Instead of aiding consumers, AI tools too often create distortions in the marketplace, reflecting exclusion rather than fairness for consumers and creating closed doors in the virtual world that have discriminatory effects in real life. Left unchecked, these harmful impacts will continue to grow as AI becomes ingrained in every aspect of our lives.

Just as the struggles of the civil rights movement culminated in milestone civil rights laws, so too does this struggle need to culminate in new protections. The time has come for Congress to enact legislation ensuring that algorithmic systems are safe, effective, and fair. Legislation needs to center civil rights, establish baseline protections for consumers and their privacy, and empower effective oversight.

First, AI regulation should protect Americans' civil rights. Legislation should establish anti-discrimination protections to close gaps in existing law created by novel online and algorithmic contexts. It should also establish pre- and post-deployment testing requirements to evaluate algorithmic systems for discrimination and bias. All too often, consumers--especially Black and Brown consumers--end up as unwilling test subjects for unsafe algorithms and consumer technologies. Algorithmic harm needs to be identified, prevented, and mitigated as a fundamental part of the development process.

Second, AI legislation should establish baseline consumer protections. These should include a duty of care to require that products are safe and effective, data privacy to protect consumers' personal information, and transparency and explainability requirements so that consumers know when, how, and why AI is being used. Consumers should be empowered to make fully informed decisions about how they interact with algorithmic systems, and companies should take adequate steps to make sure that their products work as intended.

Third, AI regulation should establish robust oversight and enforcement. Congress should empower a federal regulator with adequate authority and resources and provide a private right of action to remedy algorithmic harms. Black people and other people of color historically have not been able to rely upon the government to protect their interests, so individuals need to be able to vindicate their own rights.

Almost sixty years ago, we decided as a nation that our polity is stronger when everyone has a fair chance. Congress passed the Civil Rights Act of 1964 to prohibit segregation in interstate commerce, alongside other legislation to address discrimination in employment, housing, and other critical aspects of Americans' lived experiences in the marketplace. Today, the mass generation and collection of personal data through the internet and the use of algorithmic technologies to determine economic opportunities create new challenges to the prevention and elimination of discrimination. It is time to build upon our civil rights infrastructure to ensure that everyone has equal opportunity in the new digital marketplace and fair access to the information, goods, and services it enables.

II. Segregation and Redlining Produced Inequities that Persist Today and Affect the Data Flowing In and Out of Algorithmic Systems

It should be no surprise that when you draw data from a society with a bedrock history of systemic inequity, the data will be steered by that history. Generations of institutionalized oppression of Black Americans--through slavery, segregation, redlining, and disenfranchisement--are an inescapable part of American history whose present-day effects are embedded in the foundation of our society.

Contemporary commercial surveillance practices that undergird modern algorithmic tools originated in this separate-but-equal segregation, which denied equal opportunity to millions of Black people. The analog version of a discriminatory algorithm was redlining, which deprived Black people of intergenerational wealth and health. This one historic algorithmic system--using racial geography as part of a formula for determining government subsidies for homeownership, built on top of segregated housing--has caused a century of devastating downstream effects with no end in sight.

The consequences of residential and educational segregation are still with us.1 Disparities in employment and credit opportunities, and resulting disparities in intergenerational wealth generation, are still endemic.2 Access to healthcare and clean environments is unequal.3 The ongoing consequences of segregation are legion: "investment in construction; urban blight; real estate sales; household loans; small business lending; public school quality; access to transportation; access to banking; access to fresh food; life expectancy; asthma rates; lead paint exposure rates; diabetes rates; heart disease rates; and the list goes on."4

1 See, e.g., Patrick Sharkey, Stuck in Place: Urban Neighborhoods and the End of Progress Toward Racial Equality 9-10, 91-116 (Univ. of Chicago Press 2013); Roslyn Arlin Mickelson, Exploring the School-Housing Nexus: A Synthesis of Social Science Evidence, in Finding Common Ground: Coordinating Housing and Education Policy to Promote Integration 5 & n.1 (Philip Tegeler ed., Poverty & Race Rsch. Action Council & Nat'l Coal. on Sch. Diversity 2011), http://www.prrac.org/pdf/HousingEducationReport-October2011.pdf.

2 See, e.g., Michael A. Stoll, Job Sprawl and the Spatial Mismatch between Blacks and Jobs, Brookings Inst. Metro. Pol'y Program 7 (Feb. 2005), https://www.brookings.edu/research/job-sprawl-and-the-spatial-mismatch-between-blacks-and-jobs/; Rashawn Ray et al., Homeownership, Racial Segregation, and Policy Solutions to Racial Wealth Equity, Brookings (Sept. 1, 2021), https://www.brookings.edu/articles/homeownership-racial-segregation-and-policies-for-racial-wealth-equity/.

3 See, e.g., Sherman A. James, John Henryism and the Health of African-Americans, 18 Culture, Med., & Psych. 163-82 (1994), https://doi.org/10.1007/BF01379448; Ctr. for Fam. Rsch., Skin-deep Resilience Research Digest, Univ. of Ga., https://cfr.uga.edu/for-researchers/research-digests/skin-deep-resilience/; Helen H. Kang, Pursuing Environmental Justice: Obstacles and Opportunities - Lessons from the Field, 31 Wash. U. J. L. & Pol'y 121, 126-67 (2009); U.S. ENV'T. PROT. AGENCY, Environmental Equity: Reducing Risk for All Communities 15 (1992), https://www.epa.gov/sites/default/files/2015-02/documents/reducing_risk_com_vol1.pdf.

The effects of discrimination are literally shortening the lives and health-spans of Black Americans, manifesting as disproportionate incidences of inflammatory diseases.5 As the Supreme Court has held, destroying the badges and incidents of slavery "at the very least" necessitates "the freedom to buy whatever a white man can buy, the right to live wherever a white man can live."6

These effects now manifest in data about Black communities and other communities of color--data that will be collected by technology companies, fed into algorithms, and used to make decisions affecting the lives of the people in those communities. Too often this data is used to deny equal opportunities and freedoms.

III. Inheriting the Mistakes of the Past: How Algorithmic Systems Disproportionately Harm and Discriminate Against Black Communities and Other Communities of Color

In a society scaffolded on top of the consequences of institutionalized oppression, algorithmic systems often reproduce discrimination. At the root of algorithmic bias is the reckless, if not knowing or intentional, application of machine learning techniques to massive troves of data drawn from a society blighted by systemic inequity--and the lazy presumption that what came before is what will be. The through-lines for the data are often race, gender, and other immutable traits. When an algorithm executes its mission of creating efficiency by finding hidden correlations, it will often mistake the long-term consequences of discrimination and inequality for an individual's preferences and traits.7 These mistaken shortcuts fail to account for the fact that while a person may be in part a product of their circumstances, that does not mean they necessarily are or should be limited by those circumstances. Expediency is no excuse for segregation. (A brief illustrative sketch of this proxy dynamic appears after the notes below.)

4 Leaders of a Beautiful Struggle v. Baltimore Police Dep't, 2 F.4th 330, 349 (4th Cir. 2021) (en banc) (Gregory, C.J., concurring).

5 See, e.g., Sherman A. James, John Henryism and the Health of African-Americans, 18 Culture, Med., & Psych. 163-82 (1994), https://doi.org/10.1007/BF01379448; CTR. FOR FAM. RSCH., Skin-deep Resilience Research Digest, Univ. of Ga., https://cfr.uga.edu/for-researchers/research-digests/skin-deep-resilience/.

6 Jones v. Alfred H. Mayer Co., 392 U.S. 409, 443 (1968) (cleaned up).

7 See generally WHITE HOUSE OFF. OF SCI. & TECH. POL'Y, Blueprint for an AI Bill of Rights 24-25 (2022) [hereinafter Blueprint], https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf; Jane Chung, Racism In, Racism Out: A Primer on Algorithmic Racism, PUBLIC CITIZEN (2022), https://www.citizen.org/article/algorithmic-racism/; Yeshimabeit Milner & Amy Traub, Data Capitalism and Algorithmic Racism, Data for Black Lives and Demos (2021), https://www.demos.org/sites/default/files/2021-05/Demos_%20D4BL_Data_Capitalism_Algorithmic_Racism.pdf.
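To make the proxy dynamic concrete, here is a minimal sketch. Everything in it is hypothetical and invented for illustration (the groups, the synthetic numbers, and the scoring weights are assumptions, not any real lender's model); it shows how a score that never sees race can still disadvantage one group when it leans on a feature shaped by redlining.

```python
# Minimal illustrative sketch (hypothetical synthetic data, not a real system):
# a lending score that never sees race can still disadvantage one group when
# it relies on a proxy feature shaped by historical redlining.
import random

random.seed(0)

def make_applicant(group):
    # Assumption for illustration: historically redlined areas ("A") have
    # lower zip-code income scores than other areas ("B"), while individual
    # ability to repay is identically distributed in both groups.
    zip_income = random.gauss(40 if group == "A" else 70, 10)
    ability_to_repay = random.gauss(60, 15)
    return {"group": group, "zip_income": zip_income, "repay": ability_to_repay}

applicants = [make_applicant(g) for g in ("A", "B") for _ in range(5000)]

def score(applicant):
    # An "efficiency-seeking" score that leans on the zip-code proxy because
    # it correlated with repayment in historical training data.
    return 0.5 * applicant["zip_income"] + 0.5 * applicant["repay"]

THRESHOLD = 55.0
for g in ("A", "B"):
    pool = [a for a in applicants if a["group"] == g]
    rate = sum(score(a) >= THRESHOLD for a in pool) / len(pool)
    print(f"group {g}: approval rate {rate:.1%}")

# Despite identical ability to repay, group A is approved far less often:
# the score mistakes the consequences of redlining for individual risk.
```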

The bottom line is that if existing civil rights laws and agency resources were sufficient to address algorithmic discrimination, then the problem would not be pervasive in the first place. As a result of gaps in federal law, individuals currently have limited recourse against discriminatory algorithms and AI models used in commercial settings that reinforce the structural racism and systemic bias that pervade our society. Technology companies can misuse personal data, intentionally or unintentionally, to harm marginalized communities through deception, discrimination, exploitation, and perpetuation of redlining. Absent updated and robust anti-discrimination protections, online businesses may be able to deny service on the basis of race or ethnicity, provide subpar products based on gender or sexual orientation, charge higher rates based on religion, or ignore the accessibility needs of persons with disabilities.

"Just as neighborhoods can serve as a proxy for racial and ethnic identity, there are new worries that big data technologies could be used to 'digitally redline' unwanted groups, either as customers, employees, tenants, or recipients of credit." 8 This dynamic is deeply contrary to cornerstone principles and promises of equal access and a level playing field for everyone. Without strong privacy and online civil rights protections, discrimination will continue to infect the digital marketplace. Not surprisingly, extensive documentation demonstrates that consumers of color continue to receive worse treatment and experience unequal access to goods and services due to discriminatory algorithms and exploitative data practices. These harms are widespread across sectors, including housing, employment, credit, insurance, healthcare, education, retail, and public accommodations (see Appendix I).

In advertising, for example, Facebook (now known as Meta) allowed discrimination in the targeting and delivery of advertising for housing, credit services, and job openings based on race, sex, and age. The company was eventually forced to change its advertising targeting system as part of a legal settlement,9 but was still charged with engaging in race discrimination by the Department of Housing and Urban Development.10 In fact, Meta literally engaged in redlining--it allowed advertisers to select which zip codes to include or exclude from receiving an ad, and to draw a red line on a map showing the excluded neighborhoods.11 Academic research suggests that Meta also used ad delivery algorithms that reproduce discrimination even when the advertiser did not intend to discriminate, including again in the housing, credit services, and employment contexts.12 Similar practices have been the target of investigations, including at Twitter and Google.13

Retail websites have been found to charge different prices based on the demographics of the user.14 For example, an online shopper's distance from a physical store, as well as distance from the store's competitors, has been used in algorithms setting online prices, resulting in price discrimination. Because of historical redlining and segregation, and the lack of retail options in many low-income neighborhoods, this has resulted in low-income communities of color paying higher prices than wealthier, whiter neighborhoods when they shop online. (A toy illustration of this pricing dynamic follows the notes below.)

In housing, algorithmic tools that are used to identify prospective home loan applicants or tenants can cement and reflect centuries of discrimination. For instance, a review of over two million mortgage applications found that Black applicants were 80 percent more likely to be rejected by mortgage approval algorithms when compared with similar white applicants.15 In fact, Black applicants with less debt than white applicants are still more likely to be rejected for a mortgage. Similarly, in 2023, reporters discovered that the algorithmic scoring system used by the Los Angeles Homeless Services Authority discriminated against Black and Latino people, giving white applicants higher priority in the agency's housing system.16

8 EXEC. OFF. OF THE PRESIDENT, Big Data: Seizing Opportunities, Preserving Values 53 (2014), https://obamawhitehouse.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf; see also FEDERAL TRADE COMMISSION (F.T.C.), Big Data: A Tool for Inclusion or Exclusion? (Jan. 2016), https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf.

9 Barbara Ortutay, Facebook to overhaul ad targeting to prevent discrimination, ASSOCIATED PRESS, March 19, 2019, https://www.apnews.com/38c0dbd8acb14e3fbc7911ea18fafd58.

10 Tracy Jan & Elizabeth Dwoskin, HUD is reviewing Twitter's and Google's ad practices as part of housing discrimination probe, WASH. POST (Mar. 28, 2019), https://www.washingtonpost.com/business/2019/03/28/hud-charges-facebook-with-housing-discrimination/.

11 "[Facebook] has provided a toggle button that enables advertisers to exclude men or women from seeing an ad, a search-box to exclude people who do not speak a specific language from seeing an ad, and a map tool to exclude people who live in a specified area from seeing an ad by drawing a red line around that area." Charge of Discrimination, U.S. Dept. of Hous. and Urban Dev. v. Facebook, Inc., FHEO No. 01-18-0323-8 at 4 (Mar. 28, 2019), https://www.hud.gov/sites/dfiles/Main/documents/HUD_v_Facebook.pdf; Brief of Amicus Curiae Lawyers' Committee for Civil Rights Under Law in Support of Plaintiff's Opposition to Facebook's Demurrer to First Amended Complaint, Liapes v. Facebook, No. 30-CIV-01712, at 10 (Calif. Super. Ct. 2021), https://lawyerscommittee.org/wp-content/uploads/2021/03/Leave-and-Amicus-Combined.pdf.

12 Louise Matsakis, Facebook's Ad System Might be Hard-Coded for Discrimination, WIRED, Apr. 6, 2019, https://www.wired.com/story/facebooks-ad-system-discrimination/.

13 Id.

14 Jennifer Valentino-DeVries et al., Websites Vary Prices, Deals Based on Users' Information, WALL ST. J., Dec. 24, 2012, https://www.wsj.com/articles/SB10001424127887323777204578189391813881534; Alex P. Miller & Kartik Hosanagar, How Targeted Ads and Dynamic Pricing Can Perpetuate Bias, HARV. BUS. REVIEW (Nov. 8, 2019), https://hbr.org/2019/11/how-targeted-ads-and-dynamic-pricing-can-perpetuate-bias.

15 Emmanuel Martinez & Lauren Kirchner, The Secret Bias Hidden in Mortgage-Approval Algorithms, THE MARKUP (Aug. 25, 2021), https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms.
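To see how the facially neutral distance inputs in the retail pricing example above can reproduce redlining's geography, consider this toy pricing rule. It is purely illustrative: the function, the discount threshold, and the neighborhood distances are assumptions invented for this sketch, not any retailer's actual algorithm.

```python
# Toy dynamic-pricing rule (hypothetical, for illustration only): discount
# shoppers whom a nearby competitor could poach. Distance to a competitor is
# facially neutral, but it tracks historical redlining and retail investment.
BASE_PRICE = 20.00

def online_price(miles_to_nearest_competitor: float) -> float:
    return BASE_PRICE * (0.90 if miles_to_nearest_competitor <= 5 else 1.00)

# Hypothetical neighborhoods: historically redlined areas tend to have fewer
# nearby retail options, so the nearest competitor is farther away.
neighborhoods = {
    "historically redlined neighborhood": 12.0,  # miles to nearest competitor
    "non-redlined suburb": 2.0,
}

for name, miles in neighborhoods.items():
    print(f"{name}: ${online_price(miles):.2f}")

# The redlined neighborhood pays $20.00 while the suburb pays $18.00, even
# though no protected trait ever enters the pricing function.
```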

Discrimination in the insurance market is also common. Scoring algorithms used by auto insurers judge applicants "less on driving habits and increasingly on socioeconomic factors,"17 resulting in higher rates and fewer options for residents of majority-Black neighborhoods.18 These disparities are so significant that, in some instances, insurers charge rates more than 30 percent higher in Black and Brown neighborhoods, regardless of neighborhood affluence.19 In another case, Allstate attempted to use a personalized pricing algorithm in Prince George's County, Maryland, which the state rejected as discriminatory. The algorithm would have charged consumers more if they were unlikely to switch to another car insurance company,20 contributing to the discriminatorily high premiums routinely paid by consumers of color, who often lack competitive options for insurance. Despite these concerns, the Allstate personalized pricing algorithm was still implemented in other states.21

Similarly, algorithmic tools used in consumer financial markets often determine who can access a loan or credit based on a consumer's identity.22 In 2020, at a time of historically low interest rates and an opportunity to lock in the ability to build long-term home equity, Wells Fargo's algorithms racially discriminated in mortgage refinancing, rejecting over half of Black applicants while approving over 70 percent of white applicants.23 Even when consumers of color are able to access financial services, they are often charged higher rates as a result of "algorithmic strategic pricing." One study found that the use of such tools resulted in Black and Latino borrowers paying higher interest rates on home purchase and refinance loans when compared to similar white borrowers. The difference alone costs Black and Latino customers $250 to $500 million every year.24

These harms occur primarily in three ways. First, a company holding personal data uses it to directly discriminate against people of color or other marginalized groups; second, a company holding personal data makes it available to other actors who use it to discriminate; or third, a company designs its data processing practices in a manner that negligently, recklessly, or knowingly causes discriminatory or otherwise harmful results--e.g., algorithmic bias or promotion of disinformation. But the bottom line is that if these companies and data brokers were not collecting, aggregating, and using vast quantities of personal data in privacy-invasive ways in the first place, many of these harms would not happen or would be far more difficult to execute.25

The common denominator in these examples is the sloppy or abusive use of personal data and algorithmic tools. By prohibiting algorithmic discrimination, mandating data minimization and other privacy protections, and requiring companies to test and prove that their algorithms are safe and effective, many harms can be prevented or mitigated.

16 Colin Lecher & Maddy Varner, Black and Latino Homeless People Rank Lower on L.A.'s Housing Priority List, L.A. TIMES (Feb. 28, 2023), https://www.latimes.com/california/story/2023-02-28/black-latino-homeless-people-housing-priority-list-los-angeles.

17 CONSUMER REPS., The Truth About Car Insurance (July 30, 2015), https://www.consumerreports.org/cro/car-insurance/auto-insurance-special-report/index.htm.

18 See Douglas Heller, Auto Insurance: A National Issue of Economic Justice, CONSUMER FED'N OF AM. (Jan. 2019), https://consumerfed.org/wp-content/uploads/2020/01/Summary-of-Auto-Insurance-Research.pdf; Kaveh Waddell, Why Your Education and Job Could Mean You're Paying Too Much for Car Insurance, CONSUMER REPS. (Jan. 28, 2021), https://www.consumerreports.org/car-insurance/why-your-education-and-job-could-mean-youre-paying-too-much-for-car-insurance-a3116553820/.

19 Julia Angwin et al., Minority Neighborhoods Pay Higher Car Insurance Premiums Than White Areas With the Same Risk, PROPUBLICA (Apr. 5, 2017), https://www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-white-areas-same-risk.

20 Maddy Varner & Aaron Sankin, Suckers List: How Allstate's Secret Auto Insurance Algorithm Squeezes Big Spenders, THE MARKUP (Feb. 25, 2020), https://themarkup.org/allstates-algorithm/2020/02/25/car-insurance-suckers-list.

21 See Aaron Sankin, Michigan Regulators Question Allstate's Car Insurance Pricing, THE MARKUP (Feb. 9, 2021), https://themarkup.org/allstates-algorithm/2021/02/09/michigan-regulators-question-allstates-car-insurance-pricing; Aaron Sankin, Newly Public Documents Allege Allstate Overcharged Loyal California Customers $1 billion, THE MARKUP (Feb. 1, 2022), https://themarkup.org/allstates-algorithm/2022/02/01/newly-public-documents-allege-allstate-overcharged-loyal-california-customers-1-billion.

22 See Bertrand K. Hassani, Societal Bias reinforcement through machine learning: a credit scoring perspective, 1 AI & Ethics 239 (2020), https://link.springer.com/article/10.1007/s43681-020-00026-z.

23 Shawn Donnan et al., Wells Fargo Rejected Half Its Black Applicants in Mortgage Refinancing Boom, BLOOMBERG (Mar. 11, 2022), https://www.bloomberg.com/graphics/2022-wells-fargo-black-home-loan-refinancing; see also Emily Flitter, A Black homeowner is suing Wells Fargo, claiming discrimination, N.Y. TIMES (Mar. 21, 2022), https://www.nytimes.com/2022/03/21/business/wells-fargo-mortgages-discrimination-suit.html.

24 Laura Counts, Minority homebuyers face widespread statistical lending discrimination, study finds, UNIV. OF CALIF. BERKELEY HAAS SCH. OF BUS. (Nov. 13, 2018), https://newsroom.haas.berkeley.edu/minority-homebuyers-face-widespread-statistical-lending-discrimination-study-finds/.

25 See Valerie Schneider, Locked Out by Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice, 52.1 Colum. Hum. Rts. L. Rev. 251, 254 (2020), https://blogs.law.columbia.edu/hrlr/files/2020/11/251_Schneider.pdf; James A. Allen, The Color of Algorithms: An Analysis and Proposed Research Agenda for Deterring Algorithmic Redlining, 46 Fordham Urb. L.J. 219, 229 (2019); Mathias Risse, Human Rights and Artificial Intelligence: An Urgently Needed Agenda, 41 Hum. Rts. Q. 1, 11 (2019); EXEC. OFF. OF THE PRESIDENT, Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights 1, 5 (2016), https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf.

IV. Consumers Cannot Reasonably Avoid the Harms from Opaque, Discriminatory Algorithms; the Act of Avoidance Is Itself Harmful

Consumers are unable to avoid the risk of substantial injury posed by algorithmic systems because their operations are opaque to consumers. Although algorithmic discrimination is widespread, many consumers who are harmed are unaware that they have been affected by an algorithm in the first place. Even when consumers are aware that they may have been affected by an algorithm, there is often little transparency about how the algorithm made a decision about a given opportunity or service. Low-income consumers, in particular, may lack the resources or opportunities to fight or avoid exploitative practices or products. It is unrealistic and unfair to expect consumers to avoid algorithmic systems when they do not know they have been subjected to them or how a decision affected them.

Due to the "black box" nature of many algorithmic systems, consumers cannot reasonably avoid the harms of discrimination. As FTC leaders have noted in recent enforcement actions, business practices that cause substantial injury to consumers on the basis of immutable characteristics such as race are not reasonably avoidable and are not outweighed by countervailing benefits.26 The FTC's actions to combat discrimination with consumer protection law are well-grounded in decades of civil rights case law. Unfair and deceptive practices statutes have a long history in the struggle for civil rights. For example, such a provision in the Interstate Commerce Act was used to desegregate bus terminals and railroads, including in the Supreme Court's landmark 1960 decision in Boynton v. Virginia that catalyzed the Freedom Rides.27

Discrimination in this context is not a product feature touted on a box and weighed in the aisle of a marketplace. In the context of algorithmic systems, consumers typically have no way of knowing what factors a firm uses to make decisions about the opportunities, products, or services offered to the consumer and no way to discern which firms are discriminating and which are not. Moreover, there often are intermediary firms, service providers, or other third parties in between the consumer and the opportunity--such as advertising networks or assessment tools for prospective employees--and those intermediaries may engage in discrimination.

26 See F.T.C., Joint Statement of Chair Lina M. Khan, Commissioner Rebecca Kelly Slaughter, and Commissioner Alvaro M. Bedoya In the Matter of Passport Auto. Grp., F.T.C. File No. 2023199, at 3 (Oct. 18, 2022) (auto sales), https://www.ftc.gov/system/files/ftc_gov/pdf/joint-statement-of-chair-lina-m.-khan-commissioner-rebecca-kelly-slaughter-and-commissioner-alvaro-m.-bedoya-in-the-matter-of-passport-auto-group.pdf. In the past, the F.T.C. has also addressed deceptive advertisements related to housing discrimination. See In re E.G. Reinsch, Inc., 75 F.T.C. 210 (1969); In re First Buckingham Cmty., Inc., 73 F.T.C. 938 (1968).

27 See Boynton, 364 U.S. 454 (1960) (bus terminal segregation); Henderson v. United States, 339 U.S. 816 (1950) (dining car segregation); Mitchell v. United States, 313 U.S. 80 (1941) (railcar segregation); Keys v. Carolina Coach Co., 64 M.C.C. 769 (Interstate Commerce Commission 1955) (bus segregation). See also Orloff v. F.C.C., 352 F.3d 415, 420 (D.C. Cir. 2003) (Communications Act of 1934 prohibits race and income discrimination as unjust and unreasonable practices).


But even if a consumer knows that an algorithm discriminates against them, they may be unable to avoid using it. For example, someone seeking housing, employment, insurance, or credit may have no choice but to submit to an automated decision-making tool even if they know it is unfair.28 Or due to market concentration, a consumer may have little or no access to online services without subjecting themselves to discrimination. The FTC recently analyzed the data practices of the six largest internet service providers and found that many "allo[w] advertisers to target consumers by their race, ethnicity, sexual orientation, economic status, political affiliations, or religious beliefs."29

When a firm imposes a greater burden on some people to access opportunities because of their protected characteristics, the additional time, money, effort, or humiliation to overcome that hurdle is an injury.30 The "imposition of a barrier" creates "the inability to compete on equal footing."31 Thus, even if alternative services are available--and they are equal--it is inherently unjust and unfair to require consumers to avoid the harm. An individual cannot reasonably avoid discrimination because the very act of avoidance itself is a form of segregation that causes a substantial injury.

This is why the Biden-Harris Administration has already taken a series of actions to mitigate the risks of AI, including by outlining key principles for advancing civil rights and equity in the Blueprint for an AI Bill of Rights,32 Executive Order 14091 ("Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government"),33 Executive Order 14110 ("Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"),34 and the Office of Management and Budget's proposed memorandum on agency use of AI.35 Together, these actions direct agencies across the federal government to use their existing authorities to prevent and remedy algorithmic discrimination. While these measures are crucial for protecting consumers from the harms of discriminatory algorithms, it is now time for Congress to take the next step and enact legislation.

28 See supra Sec. III.

29 F.T.C., A Look At What ISPs Know About You: Examining the Privacy Practices of Six Major Internet Service Providers iii (Oct. 21, 2021), https://www.ftc.gov/system/files/documents/reports/look-what-isps-know-about-you-examining-privacy-practices-six-major-internet-service-providers/p195402_isp_6b_staff_report.pdf.

30 See, e.g., Heckler v. Mathews, 465 U.S. 728, 740 (1984).

31 Ne. Fla. Chapter of Ass'n Gen. Contractors of Am. v. City of Jacksonville, Fla., 508 U.S. 656, 666 (1993).

32 Blueprint.

33 Exec. Order No. 14091, 88 Fed. Reg. 10825 (Feb. 16, 2023).

34 Exec. Order No. 14110, 88 Fed. Reg. 75191 (Oct. 30, 2023).

35 OFF. OF MGMT. & BUDGET, EXEC. OFF. OF THE PRESIDENT, Proposed Memorandum, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (Nov. 1, 2023), https://ai.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-Public-Comment.pdf.


V. Solutions

The persistence and proliferation of such discriminatory conduct highlights the need for further action. To address these harms, Congress should enact legislation regulating the use of algorithmic technologies that prioritizes civil rights and consumer protections. It should include the following six core principles for ensuring that algorithmic systems and related data practices are safe, effective, and fair.

First, AI regulation must seriously address algorithmic bias and discrimination through bright-line rules and effective examination of the technology deployed. Legislation should include specific anti-discrimination provisions to prohibit algorithmic discrimination, including disparate impact. These should include narrow but reasonable exceptions for self-testing to prevent or mitigate bias and for diversifying a consumer pool through outreach to underserved communities. The anti-discrimination provision from the bipartisan American Data Privacy and Protection Act, which passed out of the U.S. House Energy & Commerce Committee last year on a 53-2 vote, is a good model.36

Second, AI should be evaluated and assessed, both before and after deployment, for discrimination, bias, and other harms. Legislation should require developers and deployers to engage an independent auditor to evaluate the algorithm's design, how it makes or contributes to decisions about significant life opportunities, how the algorithm might produce harm, and how that harm can be mitigated. Deployers should then annually assess the algorithm as it is used, detailing any changes in its use or any harms it produces, including measuring disparate impacts. Developers should review these assessments to determine if the algorithm needs modifications, and the evaluations, assessments, and reviews should be publicly shared and reported to a federal regulator. Sunlight is the best disinfectant.37 Reviewing algorithms in both the design and deployment phases proactively detects and prevents harm and promotes responsible innovation.
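One concrete metric such pre- and post-deployment assessments could report is a disparate impact ratio of the kind long used in employment-selection analysis under the EEOC's "four-fifths" rule of thumb. The sketch below is a minimal illustration with hypothetical outcome counts; the 0.8 threshold is the traditional benchmark, not a statutory requirement for AI audits.

```python
# Minimal sketch of a disparate impact check using the four-fifths rule of
# thumb from the EEOC's Uniform Guidelines. All counts are hypothetical.
outcomes = {
    # group: (favorable decisions, total decisions)
    "group_a": (310, 1000),
    "group_b": (540, 1000),
}

rates = {g: fav / total for g, (fav, total) in outcomes.items()}
best = max(rates.values())  # selection rate of the most-favored group

for group, rate in rates.items():
    ratio = rate / best
    status = "possible disparate impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} ({status})")
```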

Third, developers and deployers of AI should have a duty of care requiring that the products they offer are safe and effective, and they should be liable if they are not. An algorithm is safe if it is evaluated by a pre-deployment assessment, reasonable steps are taken to prevent it from causing harm, and its use is not unfair or deceptive. An algorithm is effective if it functions as expected, intended, and publicly advertised. Legislation should also prohibit a developer or deployer from engaging in deceptive marketing, off-label uses, and abnormally dangerous activities. Establishing a duty of care ensures that companies must take adequate steps to protect consumers and make sure that their products work as intended.

36 See American Data Privacy and Protection Act ("ADPPA"), H.R. 8152, 117th Cong. Sec. 207(a) (2022), https://www.congress.gov/bill/117th-congress/house-bill/8152/text/rh.

37 See Louis D. Brandeis, What Publicity Can Do, Harper's Weekly (Dec. 20, 1913) ("Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.").

Fourth, AI regulation should include transparency and explainability requirements so that consumers know when, how, and why a company is using AI and how it affects them. Companies must provide individuals with easy-to-understand notices about whether and how an algorithmic system affects their rights. A regulator should be empowered to write rules for when and how a company needs to provide individualized explanations and rights to appeal decisions informed by AI. Companies also need to publish annual reports about their impact assessments and data practices. Without public transparency, individuals cannot make informed decisions about how they interact with algorithmic systems and are unable to seek redress when harm occurs.

Fifth, there should be data protection requirements, so that AI is not trained on the data of people who have not consented and so that consumers' privacy is safeguarded. Training and testing AI to make it fair and unbiased requires not just a lot of personal data, but a lot of highly sensitive personal data, like race and sex information. For consumers to be willing to share that information, they need to be able to trust that it will not be misused for secondary purposes and that it will be kept secure. Developers and deployers should be required to collect and use only as much personal data as is reasonably necessary and proportionate to provide the services that consumers expect, and to safeguard that data. Developers should have an additional requirement to get affirmative express consent to use personal data to train algorithms. Individuals also should be able to access, correct, and delete their personal data. These data protection requirements are necessary to enable individuals to safely share their sensitive personal information without fear.

Sixth, legislation should establish robust oversight and multiple levels of enforcement. This must include a private right of action. Individuals need to be able to vindicate their own rights in court because, historically, Black people and other people of color could not rely on government agencies to defend their rights. There should also be enforcement by state attorneys general and a federal agency. The federal agency needs regulatory authority as well to effectively regulate and mandate compliance with technical aspects of AI regulation, like auditing and transparency. Purveyors of algorithmic systems infringing people's rights should not be immune from liability.

VI. Conclusion

While the threats of AI are often described as matters of futuristic science fiction, algorithmic tools are already harming Black people and other communities of color every day. As Vice President Kamala Harris recently warned, amid growing concern that AI could pose an existential threat to humanity, "There are additional threats that also demand our action--threats that are currently causing harm and which, to many people, also feel existential."38

Congress must act because otherwise communities of color will keep bearing these burdens. By implementing bright-line rules and guardrails on the development and deployment of algorithmic systems, Congress can unlock this technology's potential to expand opportunities and level the playing field. We appreciate this Committee's attention to this important issue and look forward to working with the Committee to advance civil rights and consumer protections for AI.

38 SPEECHES AND REMARKS, WHITE HOUSE, Remarks by Vice President Harris on the Future of Artificial Intelligence | London, United Kingdom (Nov. 1, 2023), https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/11/01/remarks-by-vice-president-harris-on-the-future-of-artificial-intelligence-london-united-kingdom/.

* * *

Original text here: https://www.judiciary.senate.gov/imo/media/doc/2023-12-13_pm_-_testimony_-_hewitt.pdf
