Algorithmic Redlining: How AI Is Learning to Discriminate

From suppressed Black creators to biased ad targeting, today’s algorithms are learning—and scaling—the same racial discrimination we fought to dismantle offline.

In the 20th century, redlining was a physical practice—maps drawn with thick red borders that told banks where not to lend, cities where not to invest, and white families where to flee. Today, the borders are invisible, the maps are digital, and the gatekeepers are algorithms. The result is just as destructive—and it’s happening on platforms we use every day.

We’ve entered an era of algorithmic redlining—where artificial intelligence and platform algorithms decide what we see, who sees us, and whose voices carry. On paper, it’s all math and code. In reality, these systems are learning to replicate—and in some cases amplify—the very racial biases that civil rights laws were designed to dismantle.

Consider this: In 2023, a civil rights audit of Facebook found its ad delivery algorithms were disproportionately steering housing and job listings away from Black and Latino users, even when advertisers had not targeted by race. On YouTube, researchers found the recommendation engine more frequently pushed white nationalist content to users who engaged with political topics—while suppressing progressive and racial justice channels in “related videos.”

And then there’s the growing body of evidence that Black creators see their content deprioritized on platforms like TikTok and Instagram, even when engagement is high. “Shadowbanning” isn’t just an influencer problem—it’s a free speech issue when those shadowbans disproportionately silence political activism, equity campaigns, and culturally specific language patterns like African American Vernacular English (AAVE).

AI Doesn’t Erase Bias—It Scales It

When you ask tech executives why this happens, you’ll often hear: “The algorithm is neutral. It’s just optimizing for engagement.” But “engagement” is built on historical data—and historical data is full of inequality.

If an AI hiring tool trains on decades of résumés from a white-majority workforce, it learns to prefer white-sounding names. If a moderation model is built using biased human judgments, it will flag “Black” hairstyles or cultural expressions as “unprofessional” or “inappropriate.” If recommendation engines have learned that sensationalist or extremist content keeps users hooked, they’ll push it—even if it undermines public trust, civic participation, or community safety.

Without intervention, these systems don’t just mirror bias—they industrialize it.
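The mechanics behind that claim can be made concrete. Below is a minimal, hypothetical sketch (invented data, Python standard library only, no real platform's system) of how a "neutral" engagement optimizer inherits bias: if one group's past engagement was depressed by earlier under-promotion, an algorithm that simply ranks by historical averages will keep burying that group.

```python
# Toy sketch (hypothetical data) of a "neutral" engagement optimizer
# learning bias from skewed historical logs.
from collections import defaultdict

# Hypothetical historical logs: (creator_group, engagement_score).
# Group B's past engagement is lower because the old system
# under-promoted them, not because audiences cared less.
history = [("A", 0.9), ("A", 0.8), ("A", 0.85),
           ("B", 0.4), ("B", 0.35), ("B", 0.45)]

# "Training": average past engagement per group.
totals, counts = defaultdict(float), defaultdict(int)
for group, score in history:
    totals[group] += score
    counts[group] += 1
avg = {g: totals[g] / counts[g] for g in totals}

def rank(posts):
    """Rank new posts by the predicted engagement of their group."""
    return sorted(posts, key=lambda p: avg[p[1]], reverse=True)

# Two equally good new posts; the optimizer still buries group B,
# purely because of the historical skew it was trained on.
ranking = rank([("post_1", "A"), ("post_2", "B")])
print(ranking)  # group A's post is ranked first
```

Nothing in this sketch mentions race; the discrimination rides in entirely on the training data, which is exactly why "the algorithm is neutral" is not a defense.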

Algorithmic redlining is more than a tech ethics issue—it’s a civil rights issue. In an era when political organizing, news consumption, job hunting, and even small business sales happen online, biased algorithms can cut Black communities off from opportunity, information, and influence.

  • Political Power: Suppressed visibility of Black-led campaigns can weaken voter outreach and mobilization.
  • Economic Opportunity: Discriminatory ad delivery can block access to jobs, housing, and financing.
  • Cultural Representation: Content suppression limits our ability to tell our own stories, shape narratives, and monetize our creativity.

In other words, if the internet is the new public square, algorithmic redlining decides who gets a microphone and who gets put behind the digital rope.

Regulation Is Late—But Not Impossible

The Civil Rights Act never anticipated a world where discrimination could be executed by a line of code, but its spirit still applies. We need:

  1. Algorithmic Audits: Independent, third-party testing for racial bias in recommendation systems, ad delivery, and content moderation.
  2. Transparency Mandates: Platforms should disclose how algorithms work, what data they train on, and how moderation decisions are made.
  3. Civil Rights Enforcement in Tech: The Department of Justice and FTC must treat algorithmic discrimination as seriously as offline discrimination—because the harm is just as real.
  4. Community Ownership: Investment in Black-owned tech platforms and AI tools that are built with equity at the core.
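Algorithmic audits, the first item above, do not require inventing new law. One metric regulators already use offline is the EEOC's "four-fifths rule": if a protected group's selection rate falls below 80% of the reference group's, adverse impact is flagged. A minimal sketch, assuming hypothetical ad-delivery rates:

```python
# Minimal sketch: the EEOC "four-fifths rule" applied to
# hypothetical (invented) ad-delivery rates for a housing ad.

def impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of selection rates; values below 0.8 flag adverse impact."""
    return rate_protected / rate_reference

# Hypothetical delivery rates: the ad reached 9% of protected-group
# users who were eligible to see it, versus 15% of the reference group.
ratio = impact_ratio(rate_protected=0.09, rate_reference=0.15)
print(f"impact ratio: {ratio:.2f}")  # 0.60, well below the 0.8 threshold
```

The point is not that this one number settles anything, but that disparate impact in ad delivery is measurable with the same yardsticks civil rights enforcement has used for decades, provided auditors get access to the data.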

The New Line in the Sand

Redlining maps of the past kept Black people out of certain neighborhoods. Algorithmic redlining keeps us out of certain conversations, markets, and opportunities—sometimes without us even knowing it’s happening.

The danger isn’t just that AI will get smarter. It’s that it will get better at replicating the same structural racism we’ve been fighting for generations—only faster, quieter, and harder to prove.

If we don’t draw a new line in the sand now—demanding accountability from Silicon Valley to Capitol Hill—we risk letting digital borders replace the physical ones we fought so hard to erase.
