Artificial Intelligence, the Consolidation of Power, and (Resisting) the Political Order
A RhetAI Coalition research challenge deliverable
We at the RhetAI Coalition are pleased to release the results of the 2025 research challenges. Each week, we will publish a deliverable. If you are interested in joining a current or upcoming research challenge, please reach out!
This deliverable was authored by:
Matthew Salzano, Stony Brook University
Katherine Johnston, Stony Brook University
Drew Ashby-King, East Carolina University
Sara Santos, Stony Brook University
Nasrin Pervin, North South University
Benjamin Miller, University of Sydney
Atin Basuchoudhary, Virginia Military Institute
They responded to the following challenge question:
How are emerging technologies and concentrations of power impacting institutions and the political order? What rhetorical traits of emerging technologies like AI lead to concentrations of power? How do their persuasive capabilities facilitate these concentrations of power and impact political institutions?
As emerging technologies enter and reshape our political and rhetorical landscapes, what efforts have emerged to attempt to claim rhetorical agency and exert influence during this upheaval? Perhaps one of the most powerful and important answers to this question is in Timnit Gebru and Émile P. Torres’s 2024 article “The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence.” Gebru and Torres persuasively argue that dominant contemporary movements tied to artificial intelligence (AI) (especially transhumanism, Extropianism, singularitarianism, (modern) cosmism, Rationalism, Effective Altruism, and longtermism, which they turn into the acronym TESCREAL) are united in their attempts to advance a eugenicist political order.[1] These philosophies are represented at most of the major Big Tech and AI firms. And, not coincidentally, the emerging technologies they create tend to amplify the concentrated power arrangements of the status quo.[2] This is the political and rhetorical order in which we find ourselves.
Technical and/or academic fields that have claimed to navigate these issues—like AI ethics—have been thoroughly critiqued for their political impotence and general “uselessness” in response to the political order they exist within.[3] Now, more than three years after the release of ChatGPT, Generative AI is not exclusively a technical or academic problem but a thoroughly public one.[4] As a result, new and existing organizations have emerged to assert rhetorical agency and influence that resists and/or modifies these efforts.
So, in order to answer RhetAI Challenge Question 6, we decided to ask: In response to the hegemonic political order taking shape around emerging technologies like AI, what organizations are contributing to collective efforts to shape the future of AI? Is there a shared but unnamed set of commitments—an anti-racist, anti-eugenicist, liberatory inverse of the TESCREAL Bundle? What guiding rhetorics would they need in order to challenge dominant rhetorics?
Approach/Methods
To begin answering these questions, the research team gathered an exhaustive list of AI organizations. We sought nonprofit organizations working on ethical, responsible, critical, democratic, sustainable, caring, and/or just approaches to data, technology, and/or AI that are involved in collective efforts to shape the future of AI development, deployment, and discourse beyond the dictates and vested interests of dominant power structures. In addition to this general definition, we excluded any organizations operating for profit (e.g., through consulting services) or that were exclusively academic (producing only academic scholarship or serving only the faculty and students of their institution). First, we named organizations from our knowledge of the landscape; then, we engaged in a web search to identify a longer list (Appendix 2. Search Terms). After reviewing the list against our inclusion criteria and excluding organizations that did not meet them, our resulting list comprised 53 organizations (Appendix 1. List of Organizations Identified). We did not limit our search by country, but we were limited to websites published in English.
For each organization, we downloaded their About Us pages between August and November 2025. We conducted rhetorically informed textual analysis in two rounds, so that each organization was reviewed by at least two researchers. In addition to analyzing the About Us pages, we also examined each organization’s website holistically to ensure we accounted for the broader context of their public-facing communication. In our analysis, we sought to identify what underlying values, premises, methods, and ideologies these organizations represented in their public-facing rhetoric about their work. After each round of analysis, we met as a team in a collaborative sense-making process to identify larger shared themes across these organizations’ discourse.
Findings: Three Premises
Through our sense-making process, we realized that one way to understand the diverse approaches and rhetorics these organizations took was by identifying their theories of change. Theory of change is a term of art in the social change sector. According to the Annie E. Casey Foundation, “A theory of change can refer to the beliefs and assumptions about how a desired change will happen or a goal will be realized. The term also can describe a specific product that expresses those beliefs and assumptions by depicting how strategies relate to expected outcomes and ultimate goals.”[5] While we did not encounter many of the latter (formal products illustrating or describing a theory of change), About pages often indicated an organization’s beliefs and assumptions about how change should be made.
In turn, we determined that we might catalog the organizations by the underlying, fundamental premises of their theories of change. That is: how do their self-descriptions demonstrate where they believe the “problems” and “solutions” of AI lie? Some organizations fit into multiple premises, because they may hold multiple theories of change (or multiple methods for a larger theory of change).
Premise 1: AI issues are primarily technical, and technical problems need technical fixes.
Organizations following this premise asserted (implicitly or otherwise) that whatever is wrong (or fixable) with AI is at the level of the technical. By technical, we mean not only computational issues, but also development, research, and voluntary technical guidelines, codes, norms, and frameworks. Here, we have included organizations that: (1) embrace “AI for good” and seek technical solutions to make AI better or more just, (2) develop guardrails including technical frameworks and ethical guidelines for developers to use during the AI development process, and (3) maintain that AI technology’s problems are fundamentally irresolvable and that development must be stopped. All three of these subthemes extend from the same premise: that addressing technical matters is key to that organization’s theory of change.
For example, Data 4 Black Lives describes itself as “an independent organization that intervenes in the AI/tech process by challenging discriminatory uses of data and algorithms, advocating for equitable data practices, and promoting responsible data science to benefit Black communities, the organization produces research, advocacy initiatives, and movement-building efforts aimed at supporting grassroots racial justice organizations.”[6] As such, “D4BL applies data science to create concrete and measurable change in the lives of Black people,” working “to identify systemic inequities and develop actionable solutions, outputs include reports, frameworks, workshops, and campaigns that empower communities to hold institutions accountable, the methodology emphasizes both upstream intervention in data design and downstream engagement with impacted communities to ensure ethical, just, and actionable outcomes.” In their own words, “With our national network of over 20,000 scientists and activists, we are building a future in which data and technology are forces for good, rather than the instruments of oppression, in Black communities.” While this organization clearly articulates their goal of improving “downstream engagement with impacted communities,” their vision for more equitable and just AI outcomes depends, in large part, on improving “data design” and “data practices.”
Similarly, the Indigenous Protocol and AI Working Group strives to develop “new conceptual and practical approaches to building the next generation of A.I. systems” that would specifically benefit Indigenous communities. Also recognizing AI’s harms to vulnerable populations, the Design Justice Network likewise focuses on building “design processes” that center “people who are normally marginalized by design” and names ten specific design principles that it thinks should be upheld. Characteristic of this cluster, these organizations focus their efforts on improving the systems that disproportionately and negatively impact marginalized communities.
Premise 2: AI issues can/should primarily be resolved through government intervention and regulation.
Organizations following this premise focused on policy, broadly construed. They state their desire to influence legislators, regulators, and other government actors. Some organizations also aim to connect with stakeholders to educate them on existing policy from governing bodies, though their primary audience is AI developers and users, with the goal of increasing effective implementation within the regulatory landscape. Some organizations also express dissatisfaction about AI’s technical design, but these organizations largely do not make claims about AI itself being good or bad. Instead, they focus on how best to regulate the various stages of AI development and use so that it is responsibly and ethically developed, trained, and deployed.
For example, the organization SUPERRR aims to integrate “futuring practices together with intersectional feminist approaches to address tech policy challenges.” In addition to explaining the systemic process of “futuring” and clarifying the reasons why a “feminist intersectional approach to policy and futuring is crucial,” this organization highlights their interest in influencing the “guidelines, principles, and regulations formulated by governments, organizations, or institutions to govern the development and use of technology.” For SUPERRR, building more inclusive futures depends upon ensuring that digital policy is crafted to actively improve social inequalities.
The organization AI Now also attempts to refocus the conversation away from fears and wish fulfillments and toward developing policy that addresses the ongoing harms of AI. In their own words, “AI Now develops policy strategies to redirect away from the current trajectory: unbridled commercial surveillance, consolidation of power in very few companies, and a lack of public accountability. In the years since its founding, AI Now has set the bar for discourse-shaping work that focuses on the social consequences of AI and the industry behind it. AI Now’s leadership were invited to advise the US Federal Trade Commission on artificial intelligence in 2021, an honor that recognized the significance of the organization’s work and provided a significant opportunity for impact.” From this self-description, it is clear that AI Now regards policy as the largest lever for addressing both the consolidation of power by tech companies and the harms created by “unbridled” technologies.
Premise 3: AI issues are primarily social. We need to change public discourse and opinion about AI and tech.
Organizations following this premise aimed to reshape public opinion about AI by intervening in the public sphere. These organizations published reports; engaged in narrative and storytelling work; hosted events, trainings, workshops; organized disruptive and even protest actions; and so on. While organizations had diverse perspectives—ranging from advocating for education in AI literacy to disruptive sabotage of college-to-Big-Tech pipelines—they shared a commitment to social intervention. They sought to undercut unethical/irresponsible AI initiatives or practices by seeking to build coalitions and/or change perceptions in the public sphere.
For example, multiple organizations in this category focused on the connection of emerging technology and AI with civics and education. The Montreal AI Ethics Institute described itself as an “international non-profit dedicated to democratizing AI ethics literacy.” They believe such work is important because “civic competence is the foundation of change,” and AI will become increasingly influential in civic life. Accordingly, they are committed to ensuring that everyone has equal access to understanding it. In their words, “by equipping citizens, policymakers, and industry leaders with the knowledge to navigate AI’s societal impact, we empower them to shape a more responsible and ethical future.” The assumption seems to be that democratizing AI literacy education will lead to more democratic development and deployment of AI.
On the other hand, Civics of Technology takes a more forthrightly critical stance. In their words, they “seek to revive an older idea, largely lost to school curriculum dialogues, for technology education that challenges students to critically inquire into the collateral, disproportionate, and unexpected effects of technology on our lives. Across our projects, we work to advance a civics of technology in schools and society that struggles for just democracy.” Rather than seeing AI development as what needs to be fixed through changing public knowledge, Civics of Technology aims to support democracy through critical learning about AI and technology. Both organizations focus their work on public outreach and educational initiatives.
Some organizations following this premise attempt to activate saboteurs who will interrupt unethical uses of AI. For example, No Tech for ICE is an ongoing campaign “against unscrupulous technology companies and their digital arsenals,” supported by the larger Latinx/Chicanx movement organization Mijente. In their words, they seek to disrupt the “cozy alliance” between U.S. Immigration and Customs Enforcement and tech companies by “exposing tech’s outsized role in criminal justice and immigration enforcement, educating communities about how to protect themselves against new forms of criminalization, taking direct action to confront corporate actors, organizing with tech workers and students to leverage their influence over Silicon Valley, and targeting specific companies with demands to end their collaborations.” Leaving policy-writing to other organizations, they focus on direct public actions that kneecap these collaborations, whether by cutting off their labor force or by turning public sentiment against tech companies so they cancel their contracts.
Implications: Means Matter
These three premises emphasize distinct theories of change and pressure points in the development and deployment of AI for collective good. However, because our analysis led us to focus on theories of change, our findings indicate a shared investment in process and means. Organizations across our list were invested in the how of AI. For the organizations operating primarily on premise 1, the technical decisions and processes of AI development largely determine its societal impacts and are, thus, the privileged point of intervention. For the organizations operating primarily on premise 2, AI harms are best mitigated by government regulation and policy interventions; that is, the means by which AI is deployed and implemented matter. For the organizations operating primarily on premise 3, public opinion, education, and values will largely determine the impacts of AI on society. Together, these three groups privilege the process of developing, deploying, regulating, and understanding AI; they are still invested in realizing specific outcomes, but maintain that it matters how those outcomes are achieved.
“Means” is the locus of resistance shared among these organizations. This focus on means stands in stark contrast to the notion, prevalent in the rhetoric of AI boosterism, that the ends justify the means. Theorists like Adam Becker argue that TESCREAL organizations all operate on this shared notion: “This is the entire point. As long as billionaires like Elon Musk and Jeff Bezos couch the rationale for their behavior in the apocalyptic terms of longtermism and related ideas—that is, as long as they say that they’re doing what they’re doing in order to save the future of humanity—then they can cast their critics as enemies of civilization and our species.” In other words, “Almost anything can be justified in the name of saving the future of civilization.”[7] In contrast, the non-profit organizations highlighted in our study focus primarily on the process of AI development, deployment, and regulation. In short, they maintain that means matter.
Such conflict between ends and means is representative of broader, shifting contemporary political order(s) and culture(s). In The Postmodern Condition, Jean-François Lyotard argued that, in the second half of the Twentieth Century, “The grand narrative has lost its credibility, regardless of what mode of unification it uses, regardless of whether it is a speculative narrative or a narrative of emancipation. The decline of narrative can be seen as an effect of the blossoming of techniques and technologies since the Second World War, which has shifted emphasis from the ends of action to its means.”[8] Now, in the Twenty-First Century, our present AI boom—and the grand narratives of human achievement it proffers—seems to have helped swing the emphasis back toward prioritizing the ends of techniques and technologies over their means. We see this as part of a larger trend toward eliding processes: for example, AI systems can obscure their processes of generating outputs from inputs. One need only look to U.S. Presidential executive-order-led approaches to governing to see, in action, the degrading value of “means” in a political culture.
In such an environment, it is common to hear leftist critique of liberals who are hamstrung by an overemphasis on process (means) while the right wing ignores those standards of governing in order to achieve its own ends.[9] The privileging of means over ends by non-profit organizations perhaps reinforces such a critique, and its arguments could be extended in this case. From a hegemonic perspective, this focus can be perceived as obstructionist, or as simply slowing down companies and institutions attempting to keep pace with rapid technological developments. From a more radically critical perspective, overemphasizing means may not be a strategic choice if a focus on understanding and critiquing systems does not eventually offer “inspiring images” of what alternative futures could be made.[10] The question that remains is: how do these organizations themselves consolidate power while still maintaining their shared investment in how and by whom systems are built, regulated, and used?
First, it is worth recognizing and affirming this shared commitment to process. Given that this commitment helps ideologically connect an otherwise wide-ranging list of organizations, this mutual recognition should serve as a bridge for building broader coalitions, regardless of their specific points of emphasis.
Second, doubling down on their shared investment in process and means will ultimately help the organizations also articulate their broader visions for a democratic, just, and diverse future.
Future research directions
The team plans to continue analyzing the ideological underpinnings and commitments of the organizations we identified (relying primarily on their About pages) to create a more comprehensive and detailed picture of the landscape of AI reform and resistance among them.
[1] Gebru, Timnit, and Émile P. Torres. “The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence.” First Monday, April 14, 2024. https://doi.org/10.5210/fm.v29i4.13636.
[2] For example: Safiya Noble, “The Loss Of Public Goods To Big Tech,” Noēma, July 1, 2020, https://www.noemamag.com/the-loss-of-public-goods-to-big-tech; Ruha Benjamin, Race after Technology: Abolitionist Tools for the New Jim Code (Polity, 2019); Meredith Broussard, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech (The MIT Press, 2023).
[3] Luke Munn, “The Uselessness of AI Ethics,” AI and Ethics 3, no. 3 (2023): 869–77, https://doi.org/10.1007/s43681-022-00209-w; Luke Stark, “Breaking Up (with) AI Ethics,” American Literature 95, no. 2 (2023): 365–79, https://doi.org/10.1215/00029831-10575148.
[4] Mike Ananny, “Making Generative Artificial Intelligence a Public Problem. Seeing Publics and Sociotechnical Problem-Making in Three Scenes of AI Failure,” Javnost - The Public 31, no. 1 (2024): 89–105, https://doi.org/10.1080/13183222.2024.2319000.
[5] Anne Gienapp and Cameron Hostetter, Overview of Theory of Change Concepts and Language, Developing a Theory of Change: Practical Guidance (Annie E. Casey Foundation, 2022), 2, https://www.aecf.org/resources/theory-of-change.
[6] All quotes from organizations appear on their websites, which are linked in Appendix 1. List of Organizations Identified.
[7] Adam Becker, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, (Basic Books, 2025), 27.
[8] Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge, trans. Geoff Bennington and Brian Massumi (University of Minnesota Press, 1984), 37, emphasis added.
[9] Sophie Hayssen, “What’s Wrong With the Nonprofit Industrial Complex?,” Tags, Teen Vogue, September 7, 2022, https://www.teenvogue.com/story/non-profit-industrial-complex-what-is; Marc J. Dunkelman, “The Question Progressives Refuse to Answer,” Ideas, The Atlantic, April 2, 2025, https://www.theatlantic.com/ideas/archive/2025/04/democrats-need-to-want-to-build/682264/.
[10] Richard Rorty, Achieving Our Country: Leftist Thought in Twentieth-Century America (Harvard University Press, 1999), 99.
Appendix 1. List of Organizations Identified (Alphabetical)
A+ Alliance https://aplusalliance.org/
Ada Lovelace Institute https://www.adalovelaceinstitute.org
AI & Democracy: Innovation Lab https://igp.sipa.columbia.edu/our-work/ai-democracy/innovation-lab
AI Now https://ainowinstitute.org
AI Policy Exchange https://aipolicyexchange.org/
AI4ALL https://ai-4-all.org/
AIDA Association https://aidaassociation.com/
AlgorithmWatch https://algorithmwatch.org/
Amnesty Tech, Amnesty International https://www.amnesty.org/en/tech/
Bangladesh NGOs Network for Radio and Communication (BNNRC) https://bnnrc.net/
Berkman Klein Center https://cyber.harvard.edu
BRAC (Social Innovation Lab) https://innovation.brac.net/
BytesForAll (B4A) https://bytesforall.net/
Center for AI and Digital Policy https://www.caidp.org/
Center for Humane Technology https://www.humanetech.com/
Civics of Technology https://www.civicsoftechnology.org/aboutus
Coding Rights https://codingrights.org/en/about-coding-rights/
Data 4 Black Lives https://d4bl.org/about.html
Data and Society https://datasociety.net
Design Justice Network https://designjustice.org
Distributed AI Research Institute (DAIR) https://www.dair-institute.org/
Electronic Frontier Foundation (EFF) https://www.eff.org/
Equitable Internet Initiative https://detroitcommunitytech.org/eii
Fight for the Future https://www.fightforthefuture.org/
Future of Life Institute https://futureoflife.org/
Global Partnership on AI https://oecd.ai/en/about/about-gpai
Healthy Country AI https://healthycountryai.org/about.php
Hiatus https://hiatus.ooo/en.html
Ida B. Wells Just Data Lab https://cdh.princeton.edu/engage/undergraduates/just-data-lab/
Indigenous Protocol and AI Working Group https://www.indigenous-ai.net/
Institute for Ethics in Artificial Intelligence https://www.ieai.sot.tum.de/
International Indigenous Data Sovereignty Interest Group (part of Research Data Alliance, RDA) https://www.rd-alliance.org/groups/international-indigenous-data-sovereignty-ig/members/all-members/
Just AI https://justai.in/
Maiam Nayri Wingara (Australian Indigenous Data Sovereignty) https://www.maiamnayriwingara.org/
Montreal AI Ethics Institute https://montrealethics.ai/
No Tech For ICE https://notechforice.com
Partnership on AI https://partnershiponai.org/
Privacy International https://www.privacyinternational.org/about
Progressive Technology Project https://progressivetech.org
Public Interest Tech Lab https://techlab.org
Responsible Artificial Intelligence Institute https://www.responsible.ai/
Stanford Institute for Human-Centered Artificial Intelligence https://hai.stanford.edu
Stronger Democracy through Artificial Intelligence https://sd-ai.org
SUPERRR https://superrr.net/en/about
Te Mana Raraunga (Maori Data Sovereignty) https://www.temanararaunga.maori.nz/tutohinga/
Tech Workers Coalition https://techworkerscoalition.org
The Algorithmic Justice League https://www.ajl.org
The Institute for Ethical AI & Machine Learning https://ethical.institute/
Tierra Común https://www.tierracomun.net/en/home/
UNDP Bangladesh Accelerator Lab https://www.undp.org/acceleratorlabs/undp-bangladesh-accelerator-lab
UNICRI Centre for Artificial Intelligence and Robotics https://unicri.org/in_focus/on/unicri_centre_artificial_robotics
Upturn https://www.upturn.org/about/
WEF Centre for the Fourth Industrial Revolution https://www.weforum.org/
Appendix 2. Search Terms
“algorithmic audit”
Ruha Benjamin
Ethical AI organizations
AI organization
non-profit AI orgs
ethical AI organizations
progressive AI organizations
democracy ai organizations
Indigenous data sovereignty
Aboriginal AI organisation
name searches of Indigenous academics — e.g., Michael Running Wolf (US), Tamika Worrell (Aus)
TESCREAL
inclusive AI organizations
disability and AI organizations
ChatGPT
What are some organizations working on ethical AI

