When I arrived at college, late last century, I was herded into a room with my peers to be oriented. There we were, keyed up and finding it hard to focus as the dean gamely worked his way through the minutiae of our new lives at college. The entire event was eminently forgettable, as I can confirm because I have forgotten nearly all of it, with the exception of a single sentence.
Midway through his presentation, the dean turned to the topic of plagiarism. He fell dead silent until we did too; then he said with a seriousness he hadn’t used for anything else that day, “This is the crime.” That impressed me. The dean wanted us to know, urgently, that no matter what discipline or department we ultimately chose, taking credit for someone else’s work could get you banished from all of them.
Given the centrality of the prohibition against plagiarism to academic self-conception, it has proved irresistibly tempting to define copying from artificial intelligence as a form of plagiarism. The University of California at Berkeley Law School was one of the first to do this, telling students that AI “never may be employed for a use that would be plagiarism if generative AI were a human or organizational author.” This has been a common approach at many institutions, including mine.
The appeal of this is obvious: If we simply slot AI into existing categories, we do not need to rethink or rework how we approach its use. We simply extend existing categories of violation to cover new cases without much modification. The only two problems with this approach are, first, that generative AI is not a human author, and, second, that the students know it.
When my staff and I at New York University, where I am vice provost for AI and technology in education, began reviewing the university’s existing academic-integrity policies in light of AI, I was struck by the simplicity and effectiveness of our definition of cheating: “Deceiving a faculty member or other individual who assesses student performance into believing that one’s mastery of a subject or discipline is greater than it is.”
Simple, to the point: a litmus test that cuts through a whole thicket of what-ifs. If a student taking a closed-book exam writes answers on their hand or uses networked glasses with a camera in them, it’s all the same crime.
Plagiarism is not, by this logic, a separate crime from cheating; it is obviously a form of deceiving a faculty member. We treat it as a particularly noxious version, because it has an additional victim. Cheating is a matter between student and professor. Plagiarism is that, and it is also a crime against the person whose work is being copied, a more serious infraction given the centrality of academic credit in our communities.
This distinction — treating plagiarism as a special and worse form of cheating — is the source of much of the gap between faculty and students. Put simply, many students don’t regard using AI as plagiarism in the uncomplicated way many faculty do, in part because copying text from AI is not in fact copying from the work of a particular person. We can call such copying plagiarism all we want, but many students understand it to be a victimless crime, which is to say something less serious than plagiarism. (The educational theorist Sarah Eaton calls this mindset “post-plagiarism.”)
This is a contentious matter. Among people who write for a living, there are deeply felt expectations that AI companies should not be able to train on existing texts without permission, but so far these beliefs have not been supported by legal decisions in the few cases that have gone to trial. For good or ill, copyright law in the United States is treated as a market protection for authors, not a moral right.
Our students, who have nearly universally grown up in an era of abundant access to digital text via search engines, reference works, and vast collections of online content, do not have the historical experience that would lead them to analogize copying automated output to copying the work of an individual creator.
Defining copying from generative AI as plagiarism was meant to elevate the seriousness with which we regarded unpermitted AI use: This is the crime. Instead, deliberately treating synthetically produced writing like something created by an individual person is leading some students to take plagiarism less seriously overall. We can tell students to treat generative AI as if it were a human or organizational author all we want, but it isn’t either of those things.
When ChatGPT was introduced in November 2022, it was not obvious that ordinary student use would be so hard to corral. At NYU, we started out telling faculty they could ban the tools for individual assignments or for a whole course. It gradually became apparent that, while faculty could indeed forbid use of AI, they could not prevent it. As one student said to us early on, “If a professor tells me how to use AI, I’ll use it that way, but if they tell me not to use it, I’ll just use it and not tell.”
This is frustrating, of course. Faculty and the administration would like students to do as we say, especially if we are trying to avoid redesigning our assignments and the ways we assess student learning. But over years and through multiple attempts, we have not found any combination of persuasive argument and academic consequences that will persuade enough students to forgo AI use voluntarily to make the issue manageable.
This would be less troubling if there were a reliable way to check whether students are using AI. Detection, however, has also failed. In 2023, we concluded after some testing that AI detectors were not effective enough for NYU to license them or vouch for their results. This ineffectiveness might seem like nothing more than a technical disappointment, were it not for the curious case of Turnitin, the widely used plagiarism-detection software, which has had outsized effects on academic culture.
Turnitin transformed pre-AI plagiarism conversations, taking a complex set of faculty judgment calls about individual students and their work and replacing it with an automated process that provided near-certainty about student copying, rendered as a measurement of similarity to previous texts and produced without emotion. Turnitin has existed for only a few decades — an eyeblink in the history of academic institutions — and even though similarity detection is not a perfect match for plagiarism, in the generation since its debut, faculty have gotten very used to having a tool that dramatically reduces the time and hassle required to manage accusations of copying.
Many faculty want an AI detector in order to preserve their current practices around assigning and assessing student work. If they can effectively threaten detection of AI use ex post facto, they can still treat student work as a proxy for student learning, and if they can’t, they can’t. And it turns out they can’t.
Faculty who expect students to do as they are told don’t appreciate the degree to which a class is co-created between faculty and students. (They have probably also forgotten a lot of their own behavior as students.) College students have real latitude in choosing which rules they abide by and which they do not. Given this, one possible response to the appearance of AI is to change student culture by asking them more formally to abide by campus rules, via an honor code.
The idea has intuitive appeal. If calling something plagiarism doesn’t make students take it seriously, and if faculty who would like to forbid use of these tools can’t reliably distinguish the students who comply from the students who don’t, perhaps we can change student culture by making them sign a pledge promising they won’t use AI if we tell them not to.
Here again, though, we run into the same problem. Honor codes are a consequence of student compliance, not just a cause. Simply listing things we don’t like, as if an honor code were a software license, will not change student behavior without students’ consent.
And in fact, many colleges with honor codes are reconsidering and weakening them in light of AI. Last year, Stanford University lifted its century-long ban on proctoring. More consequentially, it also dropped the obligation for students to report each other for academic misconduct. At the beginning of this year, students at Middlebury, a liberal-arts college in Vermont, voted to remove the “moral obligation” for students to report honor-code violations they witnessed. (The proposal was later rejected by the faculty.) The pressure on peer-reporting requirements is in line with the broader change: Students simply don’t regard AI use as a serious academic crime, and certainly not one worth turning one another in for, whatever we want them to think.
Generative AI does not fit well into existing categories or expectations around student use, presenting academics with a set of complicated questions.
Do we treat AI as a nonhuman process, which is what it is? That leads us to one collection of assumptions, including the belief that mechanistic problems should have mechanistic solutions, as Turnitin solved the “cutting and pasting from digital text” problem.
Or do we treat AI as something you can converse with, which is also what it is? That leads us to a completely different collection of assumptions, since before 2022 the only kind of thing we knew of that could keep up its end of a conversation was a person.
AI, a nonhuman thing you can nevertheless converse with, does not fit completely into either category; thus, we cannot manage its integration into our institutions without changing our existing policies and intuitions. Our weakened culture of academic integrity — our list of no-nos and outdated expectations — needs to be reimagined, not just updated.
Our fundamental problem is not disobedience. Our problem is that students have to collectively approve of the strictures we are asking them to abide by. When we try to preserve our existing practices without taking AI’s strange new capabilities into account, we take too little notice of our own students’ experiences and expectations. The relationship between faculty and students is like the relationship between a river and its water: In the short term, the river tells the water where to go, but in the long term, the water tells the river where to go.