
Coral: Toxic Comments Experience

Led product design for Coral's toxicity detection integration, creating moderation workflows that balanced ML scoring with human judgment. This human-in-the-loop approach anticipated moderation practices that are now common on platforms serving billions of users.

  • Product Design
  • UX Strategy
  • UX Research
Toxic Comments moderation queue with toxicity scores and context for human review

Overview

From 2016 to 2019, The Coral Project (a collaboration among Mozilla, The New York Times, and The Washington Post) developed open-source tools to improve online community interactions. One of its flagship efforts integrated early machine-learning toxicity detection via Google's Perspective API, letting moderators review potentially harmful comments more efficiently.
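For readers unfamiliar with the API, here is a minimal sketch of the kind of scoring call involved. The endpoint and payload shape follow Google's published AnalyzeComment request format; the scoreComment helper, error handling, and key handling are illustrative, not Coral's actual integration code.

```typescript
// Minimal sketch: score a comment's toxicity with Google's Perspective API.
// The endpoint and payload follow the public AnalyzeComment API; key
// management and error handling are simplified for illustration.
const PERSPECTIVE_URL =
  "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze";

interface ToxicityResult {
  score: number; // probability-like value in [0, 1]
}

async function scoreComment(text: string, apiKey: string): Promise<ToxicityResult> {
  const res = await fetch(`${PERSPECTIVE_URL}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      comment: { text },
      languages: ["en"],
      requestedAttributes: { TOXICITY: {} },
    }),
  });
  if (!res.ok) throw new Error(`Perspective API error: ${res.status}`);
  const data = await res.json();
  // summaryScore.value is the model's overall toxicity estimate.
  return { score: data.attributeScores.TOXICITY.summaryScore.value };
}
```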

Problem

Online news comment sections were increasingly overwhelmed by harassment, spam, and hate speech. Automated systems like Disqus’s early “toxic filter” often hid or deleted comments outright, removing human judgment from the loop and eroding user trust.

Coral took a different stance: moderation is a conversation, not a purge.

Coral’s Approach

Instead of treating AI as a gatekeeper, Coral used it as an assistant:

  • Perspective API toxicity scores were surfaced to moderators as signals, not verdicts.
  • Flagged comments stayed within their conversation so moderators could judge them in context.
  • Final publish-or-reject decisions always rested with a human (see the sketch below).

This approach prioritized transparency, human agency, and contextual review: values still rare in early AI ethics work.
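The sketch below makes that division of labor concrete: a score only decides where a comment goes for review, never whether it is removed. The threshold value, destination names, and routeComment helper are hypothetical, not Coral's actual implementation.

```typescript
// Illustrative triage logic: the ML score routes a comment; it never deletes one.
// REVIEW_THRESHOLD and the destination names are hypothetical values.
const REVIEW_THRESHOLD = 0.8;

interface ScoredComment {
  id: string;
  body: string;
  toxicityScore: number; // from the Perspective API, in [0, 1]
}

type Destination = "publish" | "human-review-queue";

function routeComment(comment: ScoredComment): Destination {
  // High-scoring comments are flagged for a person to read in context;
  // the final accept/reject decision stays with the moderator.
  return comment.toxicityScore >= REVIEW_THRESHOLD
    ? "human-review-queue"
    : "publish";
}
```

Contrast this with a gatekeeper design, where the same comparison would trigger deletion directly and remove the human from the loop.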

Toxic Comments moderation queue showing toxicity scores alongside comments
Moderators reviewed Perspective API scores without losing conversation context.

Designing the Experience

These principles were translated into an interface that:

  • displayed each comment's toxicity score directly in the moderation queue;
  • kept the surrounding conversation visible so moderators never judged a comment in isolation;
  • made approval and rejection explicit human actions rather than automated outcomes.

Impact

Next engagement

Shape your moderation strategy with care

Let’s explore how thoughtful tooling and messaging can help your community thrive without sacrificing safety or trust.