Power Echoes: Investigating Moderation Biases in Online Power-Asymmetric Conflicts

Image Credit: Photo by UX Indonesia on Unsplash (Source, License)

AI Summary of Scholarly Research

This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. See the full disclosure below.

Publication Signals: MODERATE. Publication Signals show what we were able to verify about where this research was published. Core publication signals for this source were verified. They reflect the source's verifiable credentials, not the quality of the research.
  • ✔ Published in indexed journal
  • ✔ No retraction or integrity flags

Key findings from this study

This research indicates that:

  • Human moderators exhibit systematic biases that favor powerful parties in asymmetric online conflicts.
  • AI assistance reduces most human moderation biases but amplifies biases in certain contexts.
  • Human-AI collaborative moderation requires context-specific design to address power-related prejudices effectively.

Overview

This study investigates moderation biases that human moderators exhibit when managing online conflicts between parties with unequal power (e.g., consumers versus merchants). The research examines both unaided human moderation and human decision-making with AI assistance in power-asymmetric contexts. Fifty participants reviewed real consumer-merchant disputes and rendered moderation decisions under experimental conditions.

Methods and approach

The researchers conducted a mixed-design experiment with 50 participants using authentic conflicts between consumers and merchants. Participants performed moderation tasks under two conditions: standard human moderation and moderation with AI-generated suggestions. The study measured patterns of bias toward the powerful party across both moderation modes.

Results

Human moderators demonstrated systematic biases favoring the more powerful party in power-asymmetric conflict scenarios. AI-assisted moderation reduced most biases present in unaided human moderation, improving decision fairness across multiple dimensions. However, AI suggestions also amplified certain biases in specific contexts, indicating that algorithmic assistance does not uniformly eliminate moderator prejudices.

Implications

These findings suggest that platform moderation systems require explicit mechanisms to address power imbalances during conflict resolution. Organizations implementing human-AI collaborative moderation should account for contexts where algorithmic guidance reinforces rather than mitigates bias. Future moderation system design must consider not only how to reduce moderator bias but also how AI assistance interacts with existing human prejudices in asymmetric power dynamics.

The research indicates that relying solely on AI suggestions without additional safeguards remains insufficient for equitable moderation. Platforms should integrate bias audits specific to power-asymmetric disputes into their AI system evaluations. Training protocols for human moderators must explicitly address power-related biases and the selective effectiveness of algorithmic support across different conflict types.
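The bias audits suggested above could, in a minimal form, compare how often each moderation condition sides with the powerful party in a dispute. The sketch below is purely illustrative: the favor-rate metric, the parity threshold, and the toy data are assumptions for this summary, not the paper's method or results.

```python
# Hypothetical bias-audit sketch (not from the paper): compare how often
# moderation decisions favor the powerful party under human-only vs.
# AI-assisted conditions, and flag conditions that exceed a parity threshold.

def favor_rate(decisions):
    """Fraction of decisions that side with the powerful party."""
    return sum(d == "powerful" for d in decisions) / len(decisions)

def audit(human_only, ai_assisted, threshold=0.10):
    """Flag a condition whose favor rate exceeds parity (0.5) by `threshold`."""
    report = {}
    for name, decisions in (("human_only", human_only),
                            ("ai_assisted", ai_assisted)):
        rate = favor_rate(decisions)
        report[name] = {"favor_rate": rate, "flagged": rate > 0.5 + threshold}
    return report

# Toy example: human moderators side with the powerful party 70% of the
# time, AI-assisted moderation 55% of the time (made-up numbers).
human = ["powerful"] * 7 + ["weaker"] * 3
assisted = ["powerful"] * 11 + ["weaker"] * 9
print(audit(human, assisted))
```

A real audit would, of course, need statistical tests and dispute-type breakdowns rather than a single threshold; the point is only that power-asymmetric disputes can be checked condition by condition.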

Scope and limitations

This summary is based on the study abstract and available metadata. It does not include a full analysis of the complete paper, supplementary materials, or underlying datasets unless explicitly stated. Findings should be interpreted in the context of the original publication.

Disclosure

  • Research title: Power Echoes: Investigating Moderation Biases in Online Power-Asymmetric Conflicts
  • Authors: Yaqiong Li, P. Zhang, Peixu Hou, Kainan Tu, Guangping Zhang, Shan Qu, Wenshi Chen, Yan Chen, Ning Gu, Tun Lu
  • Institutions: Fudan University, Virginia Tech
  • Publication date: 2026-04-13
  • DOI: https://doi.org/10.1145/3772318.3791694
  • OpenAlex record: View
  • Image credit: Photo by UX Indonesia on Unsplash (Source, License)
  • Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.

