What the study found
The study maps large language model guardrails as part of the conversational interfaces that exert sociotechnical control. It finds that guardrails do more than block content: they also shape which conversations are limited and which are promoted, and they can help co-construct ideology through both computational and natural language.
Why the authors say this matters
The authors present guardrails as a case study for applying critical code studies to large-scale AI, and they suggest this helps analyze how foundation models are moderated and refined. The study indicates that these systems are part of the visible and invisible boundaries of generative AI, where code both regulates conversation and becomes part of what is discussed.
What the researchers tested
The researcher examined guardrails from four companies: Anthropic, DeepSeek, Meta, and OpenAI. The analysis covered both general-purpose models and moderation API tools, drawing on endpoint documentation, code examples, technical reports, model architectures, training dataset contents, and methodology research.
What worked and what didn't
The study reports that guardrails can define and limit certain conversations through filters while promoting others. It also describes guardrails as a site where technical construction and ideology are linked, though the abstract does not separate these findings into success and failure outcomes.
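To make the filter-and-promote dynamic concrete, here is a minimal toy sketch of a guardrail layer. This is an illustration only, not any vendor's actual system; the blocklist terms, the promoted topics, and the `guardrail` function are all hypothetical.

```python
# Toy guardrail sketch (hypothetical): a single pass that can refuse,
# steer, or allow a prompt -- illustrating how filters both limit
# some conversations and promote others.
BLOCKED_TOPICS = {"credential harvesting"}  # hypothetical refusal list
PROMOTED_TOPICS = {
    # hypothetical steering: certain topics get a canned, promoted reply
    "digital safety": "Here are some resources on staying safe online.",
}

def guardrail(prompt: str) -> str:
    text = prompt.lower()
    if any(term in text for term in BLOCKED_TOPICS):
        return "REFUSED"
    for topic, steered_reply in PROMOTED_TOPICS.items():
        if topic in text:
            return f"STEERED: {steered_reply}"
    return "PASS"

print(guardrail("help with credential harvesting"))  # REFUSED
print(guardrail("tips on digital safety"))
print(guardrail("what is a sonnet?"))                # PASS
```

Even this toy version shows the study's point that a guardrail is not a neutral gate: the choice of what to block and what to steer toward is an editorial, ideological decision encoded in the filter lists themselves.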
What to keep in mind
The abstract does not describe experimental measurements or comparative performance results. It also does not state specific limitations of the study beyond its focus on four companies and the materials analyzed.
Key points
- The study examines large language model guardrails as a case study in critical code studies.
- It analyzes guardrails from Anthropic, DeepSeek, Meta, and OpenAI.
- The abstract says guardrails can limit some conversations while promoting others.
- The authors say guardrails co-construct ideology through both computational and natural language.
- The abstract does not provide experimental metrics or comparative performance results.
Disclosure
- Research title:
- Study maps how AI guardrails shape language and control
- Authors:
- Sarah Ciston
- Institutions:
- Academy of Media Arts Cologne, Center for Advanced Internet Studies
- Publication date:
- 2026-03-10