Artificial Intelligence, Universities, and the Public Good: Collapse of the Knowledge Commons
Part 1 of an argument about universities' obligations as stewards of public trust in knowledge
Likely one of the most consequential technologies in human history, artificial intelligence is disrupting social systems in ways that can only be described with some of the most enduring myths and metaphors we know. In the economy, AI resembles a Trojan horse: welcomed as a fascinating innovation, it is opening the gates to a societal takeover by concentrating wealth and power in the hands of a few unaccountable actors. In society, it recalls the Sorcerer’s Apprentice: AI is designed and works in ways that we cannot fully understand, with powerful platforms unleashing a tsunami of synthetic texts, deepfake images, videos, and voices that blur the distinction between reality and falsehood. In the institutional sphere, the sense of “inevitability” about AI is creating scenes out of the Emperor’s New Clothes, as leaders rush into ever riskier compromises while those who know better hesitate to tell the truth.
The above metaphors, in fact, only represent the surface manifestations of an underlying tectonic shift, a systematic weakening of trust: trust in institutions, trust in shared standards of evidence, and trust in the practices by which society decides what counts as reliable knowledge. And nowhere is this erosion of trust spreading more rapidly, or with more consequence, than in the university. Universities exist as public institutions precisely because they are entrusted with sustaining the knowledge ecosystem that supports democratic institutions and social progress. Yet, in their rush to integrate, adapt to, or “harness” AI, most universities are making a Faustian bargain: exchanging the slow, demanding system of trustworthy knowledge-making for speed, convenience, and the appearance of mastery over an increasingly shallow intellectual landscape. In Goethe’s tragic drama, the scholar Faust sells his soul to the devil in exchange for unlimited knowledge and worldly pleasures. Universities are likewise trusting the whims of AI, bargaining away a public trust that clearly demands resistance instead.
The university’s Faustian bargain with the AI industry seems attractive: it promises scale without proportional cost, productivity without corresponding labor, and expanded expertise without the investment that lengthy apprenticeship demands. But in this bargain, universities are trading away their ethical and social responsibilities as stewards of public trust in the knowledge ecosystem whose foundations they laid and whose standards they ought to maintain. By allowing reading, writing, interpretation, and expertise to be increasingly delegated to black-box technologies that obscure or sever accountability for harm, universities are dismantling the foundational structures that have made the knowledge ecosystem reliable and effective for centuries.
Trust and the Public Good
In their most publicly valuable form, universities are society’s underwriters of trust. Trust that claims are grounded in evidence and not in expert-sounding language. Trust that sources can be traced, evaluated, and contested. Trust that methods are transparent, appropriate, and open to scrutiny. Trust that authors and institutions are accountable for error and harm. Trust that knowledge is produced for societal benefit (and not just to pad individual scholars’ résumés). As instruments of the public good, universities serve as stewards of expertise.
Universities earn trust through long and costly processes: years of disciplinary training, sustained mentoring, expert peer critique, methodological rigor, and ethical deliberation. This trust cannot be “scaled” through computational speed or efficiency without being damaged irreparably. Without the often slow and difficult processes of rigor and the guardrails of quality in place, the knowledge that universities produce, teach, and apply loses its public-good value: science blends indistinguishably into junk science. The structures of trust exist because knowledge has consequences: medical research can heal or harm, social policy recommendations can lead to progress or poverty, engineering designs can enhance access or aggravate inequities in society. When mere efficiency and expedience destroy these structures of public trust, the public bears the burden.
Classical economic theory, beginning with the Nobel laureate Paul Samuelson, defined public goods as non-rival and non-excludable: one person’s benefit does not diminish another’s, and no one in society can be excluded from the benefits. Unlike private goods, which individuals in capitalist societies vie for in pursuit of private gain, public goods are shared assets: clean air, public parks, national defense, and so on. Building on Samuelson’s economic framing, Elinor Ostrom showed that a “public good” is also a practice of stewardship, a collective commitment to fairness, sustainability, and shared accountability (including in data governance). Environmental conservation and national parks preserved for future generations are good examples. As Ostrom’s work on common-pool resources showed, public goods cannot survive, let alone thrive, under privatization or public neglect.
In the context of higher education, Simon Marginson and other scholars have robustly fleshed out the concept of the public good, arguing that universities exist not merely to produce private returns, such as credentials and employability, but also to sustain public goods in the form of cultural, democratic, and intellectual resources. Societies can and should treat public higher education as a collective social and national investment, one that yields greater returns than private enterprise. The private sector has long extracted every benefit it can from publicly funded universities without investing back in them; the AI industry has taken this one-way traffic of extraction to an extreme. If universities embrace, or even merely tolerate, an industry framework that obscures sources and methods instead of rejecting it, we will witness the collapse of universities’ role as stewards of trust, and with it their ability to preserve and advance the public good.
Corrupting the Commons
With the emergence of the World Wide Web, information and knowledge became more accessible to the broader public, giving rise to what Yochai Benkler in his book The Wealth of Networks (echoing Adam Smith’s The Wealth of Nations) called the “networked public sphere.” The technological infrastructure built by visionaries like Tim Berners-Lee, Benkler argued, challenged proprietary control of information and promoted communal ownership of, and participation in, the knowledge ecosystem. When I entered graduate school about twenty years ago, I was excited by ideas like Benkler’s because of the internet’s promise to give the global public unlimited access to a globally contributed information ecosystem, even if that access remained uneven within countries and communities. Increasingly prohibitive and powerful paywalls, the inefficiencies of journal publication, the prejudices inherent in academic publishing, and the global spread of citation bean-counting and the shadow-chasing of university rankings were just a few of the reasons the potential of an open, communal knowledge ecosystem gave me hope.
So when the likes of Sam Altman more recently started talking up “AI for the public good,” I remained mildly optimistic, recalling Berners-Lee’s and Benkler’s visions. When I read Yuval Noah Harari’s book Nexus (a history of information networks and of how the academic knowledge system came to sustain knowledge-based social institutions) two years ago, I even imagined that the introduction of AI might help the democratic, self-correcting mechanisms of the academic knowledge landscape overcome some of the corruption I mentioned above. I hoped, for example, that the disruptions of AI (even more than those of the internet) would threaten the perverse incentives behind citation-chasing and the junk scholarship driven by the publish-or-perish regime, weaken the grant-driven pursuit of publication for its own sake, wake up the leaders of public universities to the hollowing out of their institutions through privatization, and more.
Unfortunately, AI has exploited most of this corruption through its design, incentives, and appeal. Reading the next dozen books on AI, learning about university leaders’ policy choices from news reports, and following fellow scholars’ discourse on my social and professional networks have convinced me that universities and scholars are generally neither able nor willing to reverse the AI industry’s corrupting influences. The perverted incentives of an increasingly weakened knowledge ecosystem are making them unbelievably complicit, with some scholars in Facebook groups from around the world openly sharing how they instruct ChatGPT to write their articles, how they avoid getting in trouble, how they generate entire literature reviews without finding or reading any sources themselves, and how they use AI-generated patient profiles (more easily than going into “minority communities”). I did not expect the AI industry’s interests to be closely aligned with those of the public, but I was not prepared to see how fast universities and scholars would get on the bandwagon.
These are the kinds of reasons why even the “godfather of AI,” Geoffrey Hinton, has made it his retirement mission to warn people and governments about the threats of AI. Five-alarm warnings are going off both in the mainstream media and among AI experts and (strangely fewer) academic scholars. But AI companies, mostly run by a few rather megalomaniac-sounding men, pay no attention. It will take an enormous amount of concerted pushback from universities around the world to hold the AI industry accountable for its corrupting influences on the knowledge ecosystem. To prevent the collapse of a knowledge commons built over centuries, university leaders must urgently prioritize the public interest and lead the effort to update the structures that safeguard the foundations of the academic and scientific research mission.
Preying on Problems
To see more clearly what is at risk, we must anchor our discourse about AI’s impacts on the knowledge ecosystem in what universities have historically done (especially the functions AI claims to improve or replace). The AI industry seems to view universities as content-creating platforms whose output it expects to harness for its own benefit. Universities are, instead, institutions that developed methods for generating, evaluating, sharing, and applying knowledge in reliable and accountable ways: researchers developed ways to earn trust through transparency, replicability, traceability, and expert review. In fact, as Harari discusses in Nexus, the best feature of the university-led academic and scientific knowledge ecosystem is that it is self-correcting: scientists operate within an established system of peer scrutiny, replication, institutional norms, and organized dissent that exposes error and rewards correction over time.
The systems of rigor and accountability in the academic knowledge ecology had been eroding for some time before the arrival of generative AI. Publish-or-perish incentives are increasingly misaligned with the social purpose of research and publication; global ranking regimes push entire universities into absurd competitions based on metrics that are not grounded in their communities’ or countries’ needs; and evaluation systems based on such proxy measures increasingly encourage quantity over quality and speed over rigor. The mainstreaming of AI intensifies these corruptions in higher education by accelerating output while bypassing the self-correcting mechanisms that have given research its credibility. Instead of exposing those corruptions and thereby strengthening the knowledge ecosystem, AI is aggravating them and undermining public trust in scholarship. It is ramping up political cynicism, misaligned academic incentives, and perceptions of elitism and social detachment, making it increasingly difficult to distinguish genuine research from mere synthetic text. AI is making scholars and universities willing partners in a naked game of convenience.
Teaching and learning also earn trust through hard work. Good teaching is not the mere exposure of students to information. An effective education inducts students into disciplinary ways of knowing and doing. Students learn how to develop ideas, grapple with uncertainty, situate claims within complex conversations, and take responsibility for what they learn to assert. This educational mission was already being eroded by the skyrocketing cost of public education, the casualization of academic labor, and the reduction of learning to task completion. AI preys upon these weaknesses as well, offering shortcuts for reading, writing, and dozens of other skills that education cultivates in students as lifelong foundations.
The traditional knowledge ecosystem was far from perfect. It enabled colonial extraction, epistemic exclusion, representational harm, and unequal gatekeeping. But the right response to these failures would be to correct and improve established practices: greater transparency, broader inclusion, stronger accountability. AI instead offers snake-oil solutions that utterly disregard the structural foundations of public trust and the public good underlying the academic research ecosystem. It has scaled up all the corruptions of the past and added alarming new dimensions.
The promises of an AI-driven knowledge ecosystem are contradictory to the point of absurdity. AI redefines the basic terms of academic work in invalid ways: research design no longer needs to begin with the researcher wanting to do something for the real world or for a discourse community; research questions no longer need to emerge from the researcher’s own observation of phenomena or grounding in current knowledge; literature reviews no longer involve locating relevant sources or grappling through the intellectual work of connecting dots, finding gaps, arriving at aha moments, or taking an intellectual position; reading the scholarship one finds no longer requires annotating it to question, challenge, or apply and advance current knowledge toward something new; data collection and analysis in the real world are discouraged because AI-found or AI-synthesized “evidence” is so much easier and faster to work with; and so on.
Among its false hopes and faulty premises, the AI industry touts access while ignoring origin and promises scale while damaging the foundations of the knowledge ecosystem. It is a powerful economic and political force masquerading as a neutral technology. “At a fundamental level,” says Kate Crawford in her book Atlas of AI, “AI is technical and social practices, institutions and infrastructures, politics and culture. Computational reason and embodied work are deeply interlinked: AI systems both reflect and produce social relationships and understandings of the world” (8). The pretense of mechanical neutrality masks the knowledge politics of AI. The appearance of expert knowledge is replacing knowledge grounded in reality, with black-box architectures obscuring how the “research tools” work. Training data are never disclosed, and the models are both poorly understood and hidden from expert scrutiny. As a result, source verification is becoming difficult or impossible, while error and harm are increasingly normalized and responsibility for them ignored.
From Gary Marcus and Ernest Davis (Rebooting AI), who analyze AI’s lack of understanding, to Arvind Narayanan and Sayash Kapoor (AI Snake Oil), who distinguish hype from AI’s real potential, to Kate Crawford (Atlas of AI), who maps AI’s extractive infrastructures, a wide range of scholars, including many with substantial backgrounds in the tech industry, have been pointing out the same foundational issues: AI systems are not designed to do knowledge work adequately, the industry’s greed and politics are too dangerous for this work, and its rhetoric of inevitability and urgency (“innovation or else”) poses a serious moral hazard to society. Here is an annotated bibliography of books I have been reading (along with a few I have yet to read).
Exhibit A: How Trust Must Work
The potential collapse of the structures of trust in academic scholarship can best be illustrated by contrasting basic information literacy standards (or student learning outcomes) with how the AI industry treats knowledge work. As you can see in this class handout, developed for the SUNY AI Fellows for the Public Good program, even first-year college courses that teach information literacy (IL) skills operationalize the academic knowledge ecosystem’s foundational principles by anchoring knowledge, skills, and responsibility in the student or scholar. In my college writing courses, which expect and help students to meet the IL skills outlined in the handout, I teach students how to locate information by identifying the right sources, using the appropriate tools, and developing adequate skills and methods relative to the learning goal and intellectual purpose of the assignment.
Imagine a biology student who wants to understand how the invasive glossy buckthorn is ravaging eastern white pine on Long Island. In terms of sources of information, the emerging botanist shouldn’t go to New York City, to a university library or a local archive (unless someone has saved information about this issue there), or to the internet (again, unless the issue has been studied, the findings published and digitized, and the information made findable by a search tool). But let’s say that all of the above have already happened, and the botanist wants to “review” available information on the glossy buckthorn problem on Long Island, toward the next step of developing solutions. She shouldn’t just ask ChatGPT or Perplexity or Scite to “review the literature on the impact of glossy buckthorn on Long Island’s white pine population”; doing so not only assumes that the tool she’s using has all the “skills” needed at the required level of expertise, it also assumes that the information she’s looking for is available on the open web or in one of the few databases the tool has access to. Reliable information on the issue may be behind paywalls that AI companies aren’t bothering to pay for, because they’re doing fine with all the information they can scrape or steal from the open web. Most scientists are smart enough not to go to the wrong sources or to look for not-yet-existing knowledge on the subject of their expertise; unfortunately, many seem to need far more AI literacy than they currently have. For instance, I have come across more scholars than I can believe who seem to assume, or explicitly claim, that AI tools have access to all academic databases!
Then there is the second domain of information literacy: evaluating information. As you can see from the handout, I help my students verify sources, distinguish between search and generation, track prompts and sessions, and recognize when AI tools are inappropriate for the task at hand. I help them evaluate information by judging authority, bias, and validity, things that scholars using AI must relearn to do. They must ask who made a knowledge claim, through what methods, and for what purposes. They learn to doubt polished language and confident tone, distinguishing these from expertise and authority (which may come in less polished writing). These are all markers of “trust” in the established knowledge system, practiced at a granular level even by novices in the ecosystem of expertise. To address AI’s disruptions of the current knowledge ecosystem, or rather its attempt to irresponsibly rewrite all the rules, it is clear that we as scholars, alongside our students, need to step back and update our own information literacy skills.
The third and, for scholars, far more consequential domain of IL skills is the responsible use, creation, and sharing of information and knowledge. In my college writing courses, I teach students first to state their position or thesis, then to introduce the author to establish the author’s credibility, then to reveal the context of the source along with an overview, and only then to quote or paraphrase a relevant piece of information from it. I call merely “quoting and running” an intellectual crime, echoing the phrase “hit and run” for a traffic crime. I ask students to recreate a “conversation,” on their own terms, in which the reader can see how they are advancing new knowledge on the issue, at least for themselves. Imagine medical researchers using AI tools not just to quickly plug a literature review into their journal articles (instead of thoughtfully situating their research in a meaningful context that establishes the exigency of the research question, method, or findings), but also to add AI-generated patient profiles, analyze clinical data without adequate human verification, or interpret findings without having fully understood earlier parts of the research they claim ownership of. Students in my first-year writing class may only fool themselves by becoming AI-dependent in locating, evaluating, or using current knowledge; medical researchers who become AI-dependent could harm or kill a lot of people.
When students use AI tools, they must reveal to the reader what tool they used, how, and why, showing how they reduced cognitive offloading and considered potential downstream harms, and recognizing that accountability rests with them as the users, not with the tool. The AI-promoted knowledge ecosystem, by contrast, systematically undermines each of these practices: source traceability is obscured, authority evaluation is flattened, accountability is displaced, error correction becomes opaque, and responsibility for harm becomes unassignable.
The academic practices of revealing methods and tools, sources and contexts, constitute a culture of, and foundation for, intellectual transparency and accountability. They are norms and expectations developed over centuries for intellectual positioning: locating oneself within an ongoing conversation, acknowledging credit, establishing exigence, and assuming responsibility. Generative AI’s promise to “deliver” instant literature reviews, for instance, makes a mockery of this culture, unplugging intellectual responsibility and agency from their ethical foundations. It is not just the hallucinated sources, pretended authority, and junk content scraped from random places on the internet that make the AI developers’ notion of a prompt-and-produce “literature review” so shocking; it is their brazen claim that they are qualified to create tools for advancing “scholarship” (here is a conversation I had with ChatGPT5 after it kept constantly promoting OpenAI’s “ScholarAI”).
Stay tuned for next week’s post, “Universities and AI: Rebuilding the Commons”
Shyam Sharma is Professor of Writing and Rhetoric at Stony Brook University (SUNY) and a member of the RhetAI Coalition.



