AI Summary of Scholarly Research
This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. See the full disclosure below.
Publication Signals show what we were able to verify about where this research was published. Rating: MODERATE — core publication signals for this source were verified. Publication Signals reflect the source's verifiable credentials, not the quality of the research.
- ✔ Published in indexed journal
- ✔ No retraction or integrity flags
Key findings from this study
This research indicates that:
- Participatory AI frameworks often fail to achieve substantive engagement, defaulting instead to tokenistic consultation or hidden labor extraction.
- Situated knowledge production conflicts structurally with the global generalizability requirements of foundation models and large-scale AI systems.
- Meaningful participation remains difficult to operationalize within current AI development timelines and commercial constraints.
Overview
Participatory AI has emerged as a methodological response to the unintended harms that machine learning systems impose on marginalized communities. The panel examines structural tensions between participatory approaches and contemporary AI development practices, particularly around scale and generalizability.
Methods and approach
The panel convenes expert discussion to interrogate participatory AI frameworks. Specific focus includes examination of participation quality, labor dynamics within participatory processes, and conflicts between situated knowledge and foundation model scalability.
Results
Participatory AI demonstrates significant limitations in advancing social justice objectives. Participation often remains short-term, purely consultative, or functions as unpaid labor extraction. The fundamental tension between grounded, context-specific knowledge production and the global generalizability required by foundation models creates persistent barriers to meaningfully integrating participatory insights into system design and deployment.
Implications
Organizations implementing participatory AI must address the structural conditions that enable extractive rather than reciprocal engagement. The scalability demands of large language models and other foundation models may require a fundamental reassessment of how participatory input translates into technical requirements and design decisions. Future research must establish frameworks for evaluating participation quality and measuring concrete impacts on marginalized communities.
Scope and limitations
This summary is based on the study abstract and available metadata. It does not include a full analysis of the complete paper, supplementary materials, or underlying datasets unless explicitly stated. Findings should be interpreted in the context of the original publication.
Disclosure
- Research title: Participatory AI & Social Justice
- Authors: Maria Luce Lupetti, Christina Harrington, Massimo Menichinelli, Cristina Zaga, Laura Forlano, Alessandro Bozzon, Q. Vera Liao
- Institutions: Carnegie Mellon University, Delft University of Technology, Elisava Barcelona School of Design and Engineering, Northeastern University, Politecnico di Torino, Software Engineering Institute, University of Michigan, University of Twente
- Publication date: 2026-04-13
- DOI: https://doi.org/10.1145/3772363.3790077
- OpenAlex record: available
- Image credit: Photo by Theo Decker on Pexels
- Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.