AI Is a Rhetorical System, Not a Reasoning Agent
Rethinking Authorship, Reality, and Responsibility in the Age of Persuasive Algorithms
Generative AI (genAI) is changing the way we produce texts, the idea of authorship, and even the concept of a commonly shared reality. It does so by creating persuasive surface structures that often appear indistinguishable from the real — even when they are entirely fabricated. At the Tübingen RHET AI Center, where we began analyzing AI from a rhetorical perspective well before the sudden and dramatic rise of generative models, we quickly realized that rhetoric offers a powerful theory for describing the structures and implications of AI-generated texts.
Rhetoric, after all, has always concerned itself with authorship, persuasion, the social effects of discourse and the construction of knowledge and reality. It views the creation of texts not as the expression of an isolated genius, but as a rule-based process drawing on shared arguments and commonplaces – remarkably similar to the way AI generates language. Moreover, rhetoric’s focus on power, ideology, and ethical responsibility equips us with the tools to critically engage with AI-generated content. We should understand genAI not as artificial reasoning, but as a rhetorical system – one that produces plausible, persuasive outputs. This shift is crucial for how we use, critique, and teach AI.
Rhetoric, Not Reason: A Paradigm Shift in Understanding Generative AI
GenAI systems are frequently mistaken for agents capable of reasoning, because we tend to construe AI in an anthropomorphic way, attributing intention and reason to the systems themselves. But from a rhetorical perspective, they are better understood as technologies of persuasion. These systems simulate coherence and intention — they do not generate truth or deep understanding, nor do they have a human-like hermeneutic capacity that would allow them to take on the perspective of the other or to revise their own views as a result.
GenAI creates persuasive surfaces because it is trained to meet the criteria of textuality: producing cohesion, coherence, and genre-specific text structures. At their core, these models operate by probabilistically predicting the next word based on prior patterns—they do not "analyze" or "understand" in any human sense. What they produce are compelling rhetorical imitations of reality. In this way, they are pure rhetoric—both in the positive sense (as systems governed by persuasive production rules) and in the negative sense (as fabricators of plausible but potentially deceptive realities). These surfaces must be critically evaluated, not passively consumed.
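The next-word prediction at the core of these systems can be illustrated with a deliberately simplified sketch: a bigram model that counts which word tends to follow which in a tiny toy corpus (an assumption for illustration only; actual genAI systems use vastly larger corpora and neural networks). The principle, however, is the same: text is continued by statistical likelihood, not by analysis or understanding.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for training data (purely illustrative).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# For each word, count which words follow it (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

def generate(start, length=5):
    """Greedily chain most-likely next words: the result looks fluent,
    but it is pattern imitation -- no meaning is involved."""
    words = [start]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)
```

Even this miniature model produces locally coherent strings such as "the cat sat on the" from nothing but co-occurrence counts, which is the point: surface plausibility requires no comprehension.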
Persuasive Algorithms: The Illusion of Understanding
What makes genAI output so convincing is not any capacity for comprehending complex problems, situations, or human agents, but statistical mimicry. What we interpret as fluency, reason, or insight is the product of algorithms trained to imitate human discourse patterns. This creates a paradox: people increasingly turn to genAI for insight into complex social, psychological, or philosophical questions, yet these systems possess neither empathy nor hermeneutic sensitivity nor reasoning. They imitate understanding without any genuine grasp of historical situations, cultural contexts, or psychological dispositions and attitudes. They replicate perspectives, but without the capacity for psychological depth or critical appraisal. We are confronted with an illusion of understanding that users readily mistake for the human kind. And yet, despite this lack, genAI systems often prove useful: their imitative responses can effectively support problem-solving in real-world contexts — not because they understand, but because their outputs resonate with human expectations and can be integrated meaningfully into human-machine interaction.

Authorship Reconsidered: Human-Machine Co-Creation and the Dispersal of Responsibility
GenAI challenges the traditional concept of the singular, autonomous author. Instead, it introduces hybrid authorship—texts co-produced by human prompting and machine output. This shift raises urgent questions about originality, intentionality, and responsibility.
In this age of hybrid authorship, we must ask: Who is responsible for a given text? How do we adapt learning and teaching practices to account for these new forms of authorship, which are both convenient and potentially corrosive to human creativity? Can we still trust the author’s accountability in a world where machines increasingly shape the message?
The consequences are profound—not only in education, but in business, law, and politics. Authorship has long been tied to legal responsibility and economic value. Now we must rethink authorship and its ethical implications in an age where responsibility is distributed between humans and machines.
Generative AI as Techné: The Procedural Rhetoric of Machine Output
From a rhetorical perspective, genAI should be understood as techné—a practical art of production, akin to classical rhetoric—where algorithmic procedures create persuasive text surfaces by imitating and recombining existing discourse.
This view aligns with the rhetorical tradition, where text production is largely rule-based, relying on established arguments and commonplaces rather than radical innovation. What is commonly held and familiar is often more persuasive than what is novel. That is why commonplaces and recognizable patterns are central to rhetorical theory — and why AI outputs are so compelling. As Kenneth Burke emphasized in his “Philosophy of Literary Form,” familiar arguments and textual structures invite identification, and this makes them deeply seductive to human beings in their constant striving for identification and identity.
GenAI reproduces shared arguments and familiar forms, whether true or not. It is this quality that renders its texts both persuasive and effective, yet also potentially dangerous. The ease and speed with which these texts can be generated creates a flood of seemingly authoritative discourse, which can mislead, manipulate, or distort public debate.
Rhetorical AI Literacy
Rhetorical AI literacy expands upon existing digital literacy frameworks by emphasizing reflective judgment, contextual awareness, and ethical responsibility. Users must go beyond simply “operating” AI tools. They must become rhetorically aware interlocutors.
In a media environment saturated with plausible but unverifiable content, we must learn to recognize rhetorical strategies, refine our prompting techniques, and attend to audience, context, and kairos — the opportune moment. The critical tradition of rhetoric teaches us that social reality is not given but constructed through communication. And communication, like technology, is never neutral: it always serves interests and addresses an audience.
At the same time, human users and generative systems are entering into new co-creative relationships – constructing "possible worlds" through iterative interaction. In this model, the AI is not merely a tool, but a rhetorical system that can be guided, prompted, and refined. Such collaboration requires techné (skill), iudicium (critical judgment), and aptum (situational appropriateness). Meaning is not simply generated; it is negotiated.
Ethical Indeterminacy and the Politics of Persuasion
The opacity of algorithmic systems, the diffusion of authorship, and the concentration of power among a few corporations all raise urgent ethical questions. Misinformation, manipulation, and declining public trust in knowledge infrastructures demand our attention, because they erode the common ground a society needs as the basis for reliable interaction and a shared orientation toward reason and reality.
Who bears responsibility in this new landscape? Is it the user, the developer of the AI system, or even the system itself? We need clear legal frameworks and cannot allow responsibility to dissolve through technological complexity.
Studies have shown that genAI is a powerful tool of persuasion – and therefore a potent political force. Recognizing this is crucial if we are to resist efforts to influence individuals and societies through false information, propaganda, or emotional manipulation. GenAI demands heightened rhetorical awareness and robust AI literacy. It requires critical thinking – and, importantly, it cannot replace it.
Rethinking Rhetoric, Rethinking Generative AI
Applying rhetorical techné in the context of genAI offers an opportunity to bring classical rhetorical theory into conversation with contemporary challenges, particularly around agency, intention, and responsibility. Rhetoric can help users to employ generative systems deliberately, ethically, and in contextually appropriate ways. This also calls for sustained research into how genAI is reshaping everyday information practices — how users interpret, evaluate, and act upon machine-generated content across diverse social and professional settings. Consequently, genAI is also about rethinking rhetoric: about adapting it to new production routines and to the new configurations of simulation and reality that emerge from the success of genAI systems.
If we approach generative AI not as an autonomous agent of reason, but as a system of rhetoric, we open up the possibility of continually rethinking rhetoric in light of new communicative contexts and procedures. In doing so, we equip ourselves to use GenAI more responsibly, critique it more effectively, and teach it more meaningfully.
Olaf Kramer and Markus Gottschling are part of the RHET AI Center for Science Communication on Artificial Intelligence at the University of Tübingen. Collaborative publications include the edited volume “Recontextualized Knowledge. Rhetoric – Situation – Science Communication” (2021) and, most recently, “Persuasive Surfaces and Calculating Machines. A Rhetorical Perspective on Artificial Intelligence” (2025).




