Deepfake, Real Harm: A Participatory Approach for Imagining Infrastructures to Combat Deepfake Sexual Abuse

Image Credit: Photo by Icons8 Team on Unsplash (Source / License)

AI Summary of Scholarly Research

This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. See full disclosure ↓

Publication Signals show what we were able to verify about where this research was published. Rating: STANDARD. Available publication signals for this source were verified. Publication Signals reflect the source's verifiable credentials, not the quality of the research.

Fewer signals were independently confirmable for this source. That reflects the limits of what is on record, not a judgment about the research.

  • ✔ Published in indexed journal
  • ✔ No retraction or integrity flags

Key findings from this study

This research indicates that:

  • Current platform-specific moderation systems inadequately support the real-time monitoring and case management workflows of activists and survivors combating deepfake sexual abuse.
  • Content classification ambiguity, evidence collection barriers, and unmanaged workload escalation create operational friction that existing systems fail to address.
  • Multi-stakeholder coordination, data ownership transparency, and platform accountability represent necessary but currently absent system components for effective harm prevention.

Overview

Current platform-centric moderation systems for non-consensual intimate imagery (NCII) fail to address deepfake sexual abuse comprehensively. Existing systems operate reactively and are misaligned with the operational workflows of real-time monitors and survivor supporters. This research examined the gaps between system design and practitioner needs through participatory methods.

Methods and approach

Participatory design workshops engaged 10 activists from victim advocacy organizations and survivors with direct experience combating deepfake sexual abuse in South Korea. Workshop participants shared insights on current challenges, desired capabilities, and systemic barriers to effective response. Analysis synthesized findings across monitoring practices, evidence collection, and cross-platform coordination needs.

Results

Participants identified three primary challenges within existing moderation infrastructure: ambiguity in content classification creates decision-making bottlenecks; evidence collection barriers impede investigative processes; and monitoring activities increase workload intensity and safety risks for activists. Participants proposed design interventions spanning proactive content protection mechanisms, long-term case management systems, and cross-platform coordination capabilities. Stakeholders emphasized critical gaps in data governance, including unclear data ownership structures and insufficient platform accountability mechanisms that currently limit collaborative harm prevention.
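
To make the proposed interventions more concrete, the sketch below illustrates one way a long-term, cross-platform case record might be structured, in the spirit of the case management and coordination capabilities participants described. This is a minimal illustrative sketch, not a system from the study: the `CaseRecord` and `TakedownRequest` types, the status values, and all field names are our own assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class TakedownStatus(Enum):
    """Lifecycle of a single takedown report on one platform."""
    REPORTED = "reported"
    ACKNOWLEDGED = "acknowledged"
    REMOVED = "removed"
    REJECTED = "rejected"


@dataclass
class TakedownRequest:
    """One report filed with one platform for one piece of content."""
    platform: str   # hypothetical field: where the content was found
    url: str        # location of the reported content
    status: TakedownStatus = TakedownStatus.REPORTED
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class CaseRecord:
    """A survivor's case, tracked long-term across multiple platforms."""
    case_id: str
    requests: list[TakedownRequest] = field(default_factory=list)

    def open_requests(self) -> list[TakedownRequest]:
        """Reports still awaiting platform action, for monitor triage."""
        pending = (TakedownStatus.REPORTED, TakedownStatus.ACKNOWLEDGED)
        return [r for r in self.requests if r.status in pending]
```

A record of this kind would let a monitor see at a glance which platforms have acted and which reports have stalled, rather than re-checking each platform's moderation queue separately.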

Implications

System design must shift from reactive content removal toward integrated, multi-stakeholder infrastructures that coordinate across platforms while preserving evidence integrity. Implementation requires establishing shared data ownership frameworks and accountability mechanisms that extend beyond individual platforms. Policy interventions should mandate cross-platform response protocols and formalize support structures for activist and survivor participation in moderation workflows. Design of monitoring tools must prioritize secondary trauma reduction through workload distribution and psychological safety mechanisms. Effective deepfake abuse prevention depends on institutional recognition that activists and survivors constitute essential infrastructure whose operational needs demand structural accommodation rather than ad hoc support.
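
As one concrete illustration of what "preserving evidence integrity" could involve, a monitoring tool might fingerprint each captured item with a cryptographic hash and record its provenance, so that later tampering is detectable. The sketch below uses standard SHA-256 hashing; the `seal_evidence` function and its fields are hypothetical and are not mechanisms described in the paper.

```python
import hashlib
import json
from datetime import datetime, timezone


def seal_evidence(content: bytes, source_url: str) -> dict:
    """Record a tamper-evident fingerprint of collected evidence.

    The SHA-256 digest lets anyone later re-verify that the stored
    bytes are the ones originally captured; the timestamp and source
    URL document provenance for investigative handoff.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source_url": source_url,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }


# Example: seal a downloaded page snapshot and persist the receipt.
receipt = seal_evidence(b"<html>...captured page...</html>",
                        "https://example.com/post/123")
print(json.dumps(receipt, indent=2))
```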

Scope and limitations

This summary is based on the study abstract and available metadata. It does not include a full analysis of the complete paper, supplementary materials, or underlying datasets unless explicitly stated. Findings should be interpreted in the context of the original publication.

Disclosure

  • Research title: Deepfake, Real Harm: A Participatory Approach for Imagining Infrastructures to Combat Deepfake Sexual Abuse
  • Authors: Saetbyeol Leeyouk, Joseph Seering
  • Institutions: Korea Advanced Institute of Science and Technology, Massachusetts Institute of Technology
  • Publication date: 2026-04-13
  • DOI: https://doi.org/10.1145/3772318.3790902
  • OpenAlex record: View
  • Image credit: Photo by Icons8 Team on Unsplash (Source / License)
  • Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.
