Arifa Abrahim

Section 230: The Legal Shield Perpetuating Algorithmic Discrimination in Big Tech

Section 230 of the Communications Decency Act shields “providers of interactive computer services” from liability arising from content generated by third parties.[1] In an era marked by the prolific use of social media platforms built on user-generated content, the immunity conferred by Section 230 gives platforms like Meta and Twitter an implicit license to host discriminatorily targeted ads.[2]

This discriminatory practice, referred to as ‘digital redlining,’ enables social media platforms to algorithmically capitalize on personal data by targeting advertisements to consumers based on protected characteristics such as race and gender.[3] For example, to curate ad delivery, Facebook offers advertisers a map tool that can exclude users living within a particular geographic area from viewing an ad, simply by drawing a red line around a non-target region.[4] Because residential segregation remains largely unremedied in the United States, an ad-targeting feature that filters by ZIP code can easily serve as a proxy for race and perpetuate existing racial disparities in housing.[5]

Facebook’s advertising platform also enables more overt forms of digital redlining, allowing advertisers to exclude certain demographics by selecting from a drop-down menu featuring hundreds of thousands of attributes and interests, such as “Hispanic Culture” and “women in the workforce.”[6] To test the scope of Facebook’s ad-delivery system, one user purchased, within minutes, an advertisement aimed only at white house hunters, in direct contravention of the Fair Housing Act (“FHA”).[7] The ease with which third-party advertisers can disseminate discriminatorily targeted ads on social media represents a form of algorithmic oppression that Section 230 ultimately immunizes.[8]

While the enactment of Section 230 in 1996 was precipitated by an urgency to insulate minors from online pornography, it also served to promote free expression and empower tech companies with the discretionary authority to self-regulate their content.[9] The laissez-faire approach to online regulation that Congress adopted in 1996, however, is unsustainable in the age of social media.[10] With the influx of user-generated content and Big Tech’s advances in personal data collection, social media platforms have effectively been handed a blank check under Section 230 to profit from discriminatory advertising at the expense of historically marginalized groups.[11]

The immunity granted by Section 230 seals the virtual space into its own legal vacuum, effectively shielding Big Tech companies from established civil rights and consumer protection laws whenever the illegal content is generated by third parties.[12] After civil rights groups filed lawsuits alleging that Facebook hosted discriminatory advertisements in violation of the FHA, Meta reached a settlement with the U.S. Department of Justice in June 2022, agreeing to build a new algorithm specifically for housing ads.[13] Pursuant to the settlement, Meta developed the Variance Reduction System, an algorithm designed to narrow the gap between the eligible audience and the actual audience for a housing advertisement and, in operation, to eliminate ad stratification based on FHA-protected characteristics such as sex and race.[14] Under the agreement, Meta must also cease delivering housing ads to users who “look like” other users and must withhold these targeting options from advertisers, essentially insulating from third parties the data that would otherwise reveal a user’s connection to an FHA-protected class.[15]

Correcting one narrow algorithm, however, cannot displace the bias that continues to pervade the platform, particularly in age- and gender-targeted employment ads that conceal opportunities from protected groups.[16] Ultimately, Big Tech companies can preempt digital redlining at the lowest cost because they design the algorithms, so it logically follows that Section 230 should be interpreted more narrowly, disposing of absolute immunity and imposing accountability on social media conglomerates when their algorithms enable third parties to transmit discriminatory ads.[17]


[1] 47 U.S.C. § 230.
[2] Julia Angwin, It’s Time to Tear Up Big Tech’s Get-Out-of-Jail-Free Card, N.Y. Times (Feb. 20, 2023), https://www.nytimes.com/2023/02/20/opinion/facebook-section-230-supreme-court.html.
[3] Linda Morris & Olga Akselrod, Holding Facebook Accountable for Digital Redlining, ACLU (Jan. 2022), https://www.aclu.org/news/privacy-technology/holding-facebook-accountable-for-digital-redlining.
[4] HUD v. Facebook, Inc. (U.S. Dep’t of Hous. & Urb. Dev. Mar. 28, 2019).
[5] Morris & Akselrod, supra note 3.
[6] Angwin, supra note 2.
[7] Id.
[8] Id.
[9] Olivier Sylvain, Platform Realism, Informational Inequality, and Section 230 Reform, 131 Yale L.J.F. 475, 476 (2021).
[10] Id. at 478.
[11] Id. at 477.
[12] Id. at 501.
[13] Angwin, supra note 2.
[14] Press Release No. 23-18, U.S. Dep’t of Just., Justice Department and Meta Platforms Inc. Reach Key Agreement as They Implement Groundbreaking Resolution to Address Discriminatory Delivery of Housing Advertisements (Jan. 9, 2023), https://www.justice.gov/opa/pr/justice-department-and-meta-platforms-inc-reach-key-agreement-they-implement-groundbreaking.
[15] Id.
[16] Angwin, supra note 2.
[17] Id.
