Illustration incorporating the letters AI into the face of a shrug emoticon
Illustration by The Chronicle

The Post-Plagiarism University

Professors have tried to fit AI into old categories of academic misconduct. Students aren’t buying it.
The Review | Essay
By Clay Shirky
November 3, 2025

When I arrived at college, late last century, I was herded into a room with my peers to be oriented. There we were, keyed up and finding it hard to focus as the dean gamely worked his way through the minutiae of our new lives at college. The entire event was eminently forgettable, as I can confirm because I have forgotten nearly all of it, with the exception of a single sentence.

Midway through his presentation, the dean turned to the topic of plagiarism. He fell dead silent until we did too; then he said with a seriousness he hadn’t used for anything else that day, “This is the crime.” That impressed me. The dean wanted us to know, urgently, that no matter what discipline or department we ultimately chose, taking credit for someone else’s work could get you banished from all of them.

Given the centrality of the prohibition against plagiarism to academic self-conception, it has proved irresistibly tempting to define copying from artificial intelligence as a form of plagiarism. The University of California at Berkeley Law School was one of the first to do this, telling students that AI “never may be employed for a use that would be plagiarism if generative AI were a human or organizational author.” This has been a common approach at many institutions, including mine.

The appeal of this is obvious: If we simply slot AI into existing categories, we do not need to rethink or rework how we approach its use. We simply extend existing categories of violation to cover new cases without much modification. The only two problems with this approach are, first, that generative AI is not a human author, and second, that students know it.

When my staff and I at New York University, where I am vice provost for AI and technology in education, began reviewing the university’s existing academic-integrity policies in light of AI, I was struck by the simplicity and effectiveness of our definition of cheating: “Deceiving a faculty member or other individual who assesses student performance into believing that one’s mastery of a subject or discipline is greater than it is.”

Simple, to the point: a litmus test that cuts through a whole thicket of what-ifs. If a student taking a closed-book exam writes answers on their hand or uses networked glasses with a camera in them, it’s all the same crime.

Plagiarism is not, by this logic, a separate crime from cheating; it is obviously a form of deceiving a faculty member. We treat it as a particularly noxious version, because it has an additional victim. Cheating is a matter between student and professor. Plagiarism is that, and it is also a crime against the person whose work is being copied, a more serious infraction given the centrality of academic credit in our communities.

This distinction — treating plagiarism as a special and worse form of cheating — is the source of much of the gap between faculty and students. Put simply, many students don’t regard using AI as plagiarism in the uncomplicated way many faculty do, in part because copying text from AI is not in fact copying from the work of a particular person. We can call such copying plagiarism all we want, but many students understand such copying to be a victimless crime, which is to say something less serious than plagiarism. (The educational theorist Sarah Eaton calls this mindset “post-plagiarism.”)

This is a contentious matter. Among people who write for a living there are deeply felt expectations that AI companies should not be able to train on existing texts without permission, but as we see, these beliefs have not been supported by legal decisions in the few cases that have gone to trial. For good or ill, copyright law in the United States is treated as a market protection for authors, not a moral right.

Our students, who have nearly universally grown up in an era of abundant access to digital text via search engines, reference works, and vast collections of online content, do not have the historical experience that would lead them to analogize copying automated output to copying the work of an individual creator.


Defining copying from generative AI as plagiarism was meant to elevate the seriousness with which we regarded unpermitted AI use: This is the crime. Instead, deliberately treating synthetically produced writing like something created by an individual person is leading some students to take plagiarism less seriously over all. We can tell students to treat generative AI as if it were a human or organizational author all we want, but it isn’t either of those things.

When ChatGPT was introduced in November 2022, it was not obvious that ordinary student use would be so hard to corral. At NYU, we started out telling faculty they could ban the tools for individual assignments or for a whole course. It gradually became apparent that, while faculty could indeed forbid use of AI, they could not prevent it. As one student said to us early on, “If a professor tells me how to use AI, I’ll use it that way, but if they tell me not to use it, I’ll just use it and not tell.”


This is frustrating, of course. Faculty and the administration would like students to do as we say, especially if we are trying to avoid redesigning our assignments and the ways we assess student learning. But over years and through multiple attempts, we have not found any combination of persuasive argument and academic consequences that will dissuade enough students to voluntarily forgo AI use to make the issue manageable.


This would be less troubling if there were some reliable way to check whether students are using AI. This strategy, however, has also failed. In 2023, we concluded after some testing that AI detectors were not effective enough for NYU to license them or vouch for their results. This ineffectiveness might seem like nothing more than a technical disappointment, were it not for the curious case of Turnitin, the widely used plagiarism-detection software, which has had outsized effects on academic culture.

Turnitin transformed pre-AI plagiarism conversations, taking a complex set of faculty judgment calls about individual students and their work and replacing them with an automated process that provided near-certainty about student copying, rendered as a measurement of similarity to previous texts and produced without emotion. Turnitin has been around for only a few decades, an eyeblink in the history of academic institutions, and even though similarity detection is not a perfect match for plagiarism, in the generation it has existed, faculty have gotten very used to having a tool that dramatically reduces the time and hassle required to manage such accusations.

Many faculty want an AI detector in order to preserve their current practices around assigning and assessing student work. If they can effectively threaten detection of AI use ex post facto, they can still treat student work as a proxy for student learning, and if they can’t, they can’t. And it turns out they can’t.

Faculty who expect students to do as they are told don’t appreciate the degree to which a class is co-created between faculty and students. (They have probably also forgotten a lot of their own behavior as students.) College students have real latitude in choosing which rules they abide by and which they do not. Given this, one possible response to the appearance of AI is to change student culture by asking them more formally to abide by campus rules, via an honor code.


The idea has intuitive appeal. If calling something plagiarism doesn’t make students take it seriously, and if faculty who would like to forbid use of these tools can’t reliably distinguish the students who comply from the students who don’t, perhaps we can change student culture by making them sign a pledge promising they won’t use AI if we tell them not to.


Here again, though, we run into the same problem. Honor codes are a consequence of student compliance, not just a cause. Simply listing things we don’t like, as if an honor code were a software license, will not compel student behavior without their consent.

And in fact, many colleges with honor codes are reconsidering and weakening them in light of AI. Last year, Stanford University lifted its century-long ban on proctoring. More consequentially, it also dropped the obligation for students to report each other for academic misconduct. At the beginning of this year, students at Middlebury, a liberal-arts college in Vermont, voted to remove students’ “moral obligation” to report honor-code violations they witnessed. (The proposal was later rejected by the faculty.) The pressure on peer-reporting requirements is in line with the broader change: Students simply don’t regard AI use as a serious academic crime, and certainly not one worth turning each other in for, whatever we want them to think.

Generative AI does not fit well into existing categories or expectations around student use, presenting academics with a set of complicated questions.

Do we treat AI as a nonhuman process, which is what it is? That leads us to one collection of assumptions, including the belief that mechanistic problems should have mechanistic solutions, as Turnitin solved the “cutting and pasting from digital text” problem.


Or do we treat AI as something you can converse with, which is also what it is? Because that leads you to a completely different collection of assumptions, since, before 2022, the only kind of thing we knew of that could keep up its end of the conversation was a person.

AI, a nonhuman thing you can nevertheless converse with, does not fit completely into either category; thus, we cannot manage its integration into our institutions without changing our existing policies and intuitions. Our weakened culture of academic integrity — our list of no-nos and outdated expectations — needs to be reimagined, not just updated.

Our fundamental problem is not disobedience. Our problem is that students have to collectively approve of the strictures we are asking them to abide by. When we try to preserve our existing practices without taking AI’s strange new capabilities into account, we take too little notice of our own students’ experiences and expectations. The relationship between faculty and students is like the relationship between a river and its water: In the short term, the river tells the water where to go, but in the long term, the water tells the river where to go.

A version of this article appeared in the November 14, 2025, issue.
Read other items in The AI Issue.
Correction (Nov. 4, 2025, 12:20 p.m.): A previous version of this essay stated that Middlebury removed the ‘moral obligation’ for students to report honor-code violations. The essay has been updated to reflect that this proposal from students was voted down by the college's faculty.
About the Author
Clay Shirky
Clay Shirky is vice provost for AI and technology in education at New York University.