When Robots Should Break the Rules

Image Credit: Photo by ThisisEngineering on Unsplash (Source / License)

AI Summary of Scholarly Research

This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. See full disclosure ↓

Publication Signals show what we were able to verify about where this research was published. Rating: MODERATE. Core publication signals for this source were verified. Publication Signals reflect the source's verifiable credentials, not the quality of the research.
  • ✔ Published in indexed journal
  • ✔ No retraction or integrity flags

Key findings from this study

  • The authors propose that deliberate robot rule-breaking constitutes a generative design strategy capable of improving interaction quality.
  • The study identifies seven major behavioral norms currently constraining HRI research: robots should always engage, help, be productive, remain polite, never lie, never err, and never model harm.
  • The researchers argue that deliberate rule violations can produce interactions with greater ethical sophistication, effectiveness, and social intelligence, challenging foundational HRI assumptions about robot behavior.

Overview

The study examines tacit norms governing human-robot interaction research and design. Current assumptions dictate that robots should always engage, assist, remain productive, be polite, avoid deception, function flawlessly, and never model harmful behaviors. The authors contend these hardened conventions constrain the field's conceptual scope regarding meaningful robot-supported interactions.

Methods and approach

The authors conduct a conceptual analysis of prevailing rules in HRI and robotics. They identify constitutive traits that shape robot expectations and examine how these expectations become implicit design constraints. The analysis proceeds by considering deliberate rule violations as a generative design strategy.

Results

The authors argue that robots violating conventional behavioral norms can produce interactions demonstrating greater ethical sophistication, effectiveness, and social intelligence. Specific violations examined include interruption, refusal, misleading communication, and performance errors. These departures from established norms enable robots to engage in contextually appropriate behavior that strict rule adherence would preclude.

The framework establishes that reflexive examination of rule-breaking reveals underexplored interaction modalities. By treating violations as intentional design choices rather than failures, the field can expand its understanding of robot capabilities. The analysis demonstrates that social intelligence in robotics extends beyond conformity to behavioral expectations.

Implications

This reconceptualization has substantial implications for HRI research methodology and robotics design practice. Rather than treating rule violations as design failures or ethical lapses, researchers should evaluate contexts where breaking conventions produces superior outcomes. This shift requires developing frameworks for ethically justified deviation from behavioral norms.

The work suggests that robot design standards require substantial revision to accommodate context-dependent behavior. Engineering and design processes must move beyond prescriptive rule sets toward adaptive decision-making systems. Future HRI research should systematically investigate which violations enhance interaction quality in specific domains.
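The shift from prescriptive rule sets to context-dependent decision-making can be illustrated with a minimal sketch. Note that this is purely hypothetical: the paper proposes a conceptual framework, not an implementation, and all names here (`Context`, `Norm`, `decide`) are invented for illustration. Two of the seven norms the study lists ("always engage" and "always help") are modeled as rules whose justified violation depends on the situation.

```python
# Hypothetical sketch (not from the paper): evaluating when breaking a
# behavioral norm is contextually justified, rather than hard-coding
# "always engage" and "always help" as inviolable rules.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Context:
    user_is_focused: bool      # e.g. the user is deep in a task
    request_is_harmful: bool   # e.g. the user asks for something unsafe


@dataclass
class Norm:
    name: str
    # Returns True when, in this context, *violating* the norm
    # would yield the better interaction outcome.
    violation_justified: Callable[[Context], bool]


# Two illustrative norms drawn from the seven the study identifies.
NORMS: List[Norm] = [
    Norm("always_engage", lambda c: c.user_is_focused),    # don't interrupt
    Norm("always_help", lambda c: c.request_is_harmful),   # refuse harm
]


def decide(ctx: Context) -> List[str]:
    """Return the norms the robot should deliberately break in this context."""
    return [n.name for n in NORMS if n.violation_justified(ctx)]
```

For example, `decide(Context(user_is_focused=False, request_is_harmful=True))` returns `["always_help"]`: the robot refuses a harmful request while still engaging. The design point is that the violation is an explicit, inspectable decision rather than a failure mode.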

Scope and limitations

This summary is based on the study abstract and available metadata. It does not include a full analysis of the complete paper, supplementary materials, or underlying datasets unless explicitly stated. Findings should be interpreted in the context of the original publication.

Disclosure

  • Research title: When Robots Should Break the Rules
  • Authors: Rebecca Ramnauth, Brian Scassellati
  • Institutions: Yale University
  • Publication date: 2026-03-10
  • DOI: https://doi.org/10.1145/3757279.3788815
  • OpenAlex record: View
  • Image credit: Photo by ThisisEngineering on Unsplash (Source / License)
  • Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.
