Using AI Without (Really) Cheating
There’s plenty of moral panic about college students using generative AI to write their essays, but many students say they are trying to use these tools in the right way.

In this episode
Nearly three years after ChatGPT first came on the scene, college students are using generative AI to help with myriad tasks. Outlining and brainstorming are a breeze. A tough concept, skimmed over by a professor during a lecture, can probably be explained succinctly by a chatbot. This kind of AI use is happening on college campuses across the country, and much of it wouldn’t be considered unethical. But the line between efficiency and academic dishonesty is blurry, and some experts are concerned that an AI-infused education could essentially rewire students’ brains. So, how do colleges weigh the promise of AI against its much-discussed perils?
Related Reading
- These Students Use AI a Lot — but Not to Cheat (The Chronicle)
- The Cheating Vibe Shift (College Matters: Apple / Spotify)
- Should College Graduates Be AI Literate? (The Chronicle)
Guest
Beth McMurtrie, senior writer at The Chronicle of Higher Education
Transcript
This transcript was produced using speech recognition software. It was reviewed by production staff, but may contain errors. Please email us at collegematters@chronicle.com if you have any questions.
Jack Stripling: This is College Matters from the Chronicle
Beth McMurtrie: A lot of young people don’t find the artifacts of a college education particularly relevant to their lives. They don’t understand, in other words, why they’re being asked to do what they do. And so offloading to AI seems reasonable to them, so they can turn their attention to stuff they consider more important.
Jack Stripling: We hear a lot today about how college students are misusing ChatGPT to do their work for them. In a provocative headline this past spring, New York Magazine declared “Everyone is Cheating Their Way Through College.” Here on College Matters, we’ve talked about professors who are demoralized by the levels of academic dishonesty they’re seeing every day. But AI tools aren’t going away. We’re well past the novelty phase of this technology. ChatGPT now shapes the way that many students study and write, and it’s fair to assume it’s changing how they think. Today on the show, we’ll talk with my colleague, Beth McMurtrie, a senior writer at The Chronicle of Higher Education, about how college students are navigating AI’s brave new world, often with little guidance from their professors.
Beth, welcome to College Matters.
Beth McMurtrie: Thanks, Jack. It’s good to be here.
Jack Stripling: You’ve been reporting recently on students who are using AI ethically, or at least trying to do so. What drew you to that idea?
Beth McMurtrie: Well, you know, Jack, as you noted in the intro there, it seems that there’s a story coming out almost every week now about how students are cheating with AI. And it’s gotten to the point where it’s not particularly helpful. It reduces students to caricatures and it’s created, I think this moral panic, like somehow today’s students are a different breed of person, that they’re superficial, they’re cavalier. They’re almost like con artists trying to put one over on their professors. In reality, I think there’s this huge, messy middle: students who are trying to navigate AI in college and they’re doing it sometimes well and sometimes poorly. And that’s what we should be focusing on. We have these AI tools that are absolutely everywhere and students are using them, but they’re getting minimal guidance. Very few educators are talking to students about what these tools are and how they work. And most importantly, how they’re affecting students’ brains. Are you actually learning what you need to be learning when you use AI? And so what I wanted to do was to ask students, how are you using AI and why are you using AI? Is it clear to you where the line is between where it’s helpful and where it is harmful? And what do you want your professors to know?
Jack Stripling: You know, your point about the demonization of students around this topic is a really interesting one. I wanna say I haven’t participated in this, but maybe I have; I think that when new technologies enter into our lives, panic is often the first response. But I’m feeling that shifting, even in my own conversations with faculty members and student advisers and people like that who are now using this technology themselves. It was easy to make it a boogeyman when older folks weren’t using it. But now everybody to some degree is in the soup. What are you hearing from students about how they’re using this technology? When you speak to them — the folks who are trying to do it ethically — what are they doing?
Beth McMurtrie: Yeah, I got some really great, thoughtful responses from students. I heard from more than a hundred of them and I talked to several of them in depth. And I will say it was not a scientific survey, but the first thing that struck me is the fluidity with which students are using a wide range of technology. And I specifically asked them to tell me about all the tech that they’re using, because I wanted to get a sense of the ecosystem that they’re studying and working in. And a number of students described a very layered approach to studying that includes AI. So, for example, they use YouTube and TikTok videos for tutorials. They find them very helpful in explaining things concisely in an engaging way. They use apps like Quizlet to create practice tests. They use Grammarly to check their spelling and grammar on essays, on emails, on discussion posts. And of course they use ChatGPT and other generative AI to help them with a whole bunch of things. They use it to organize their notes or to create an outline for a paper. They use it to tutor them, they create study guides, they create lesson plans. Very few students who responded to me said they used AI to do an assignment for them, but what they did say they used it for was to do a better job on the assignment. The second thing that struck me was this emphasis on efficiency and effectiveness. For students, time is a very valuable and limited commodity. Many of them described it as using AI to cut out the fluff or the busy work in their assignments. For example, one student wrote in that she doesn’t have time to go through all of her notes and readings, so she generates a summary and then generates questions on the readings and tests herself on them.
Jack Stripling: Sounds, sounds legit, sounds fair.
Beth McMurtrie: Yeah, and I’ll read you a couple of comments from students. One of them wrote in, “For a lot of us, AI is just another tool, like Grammarly or Google, but way more helpful. It’s something we use to work smarter and to honestly learn more efficiently than if we had to do it on our own.” And another one called it a “convenient resource that helps me feel more secure in submitting assignments.”
The third thing I noticed is that students are using AI to fill in gaps in their learning. Maybe their professor didn’t explain something very well in class or they did really poorly on a practice test in class and they wanted to study on their own and figure out where they went wrong or they couldn’t find the time to go see a professor in office hours so they turned to AI as a tutor instead. And a lot of them are really bothered that professors just see AI as a cheating tool. I talked to this one student, Allison Abeldt at Kansas State University in a bit more detail about this.
Jack Stripling: Yeah, we actually have a clip from your conversation with Allison. Let’s listen to a little bit of that and then we can talk about it.
Allison Abeldt: I think the biggest miscommunication or misconception is that AI is for students to be lazy, and when they use AI they are not learning. And I just … that completely enrages me because that’s not true. It’s not. AI can be used as a tool, and it is used as a tool. And I think a lot of teachers only know surface level what AI is.
Jack Stripling: You know, Beth, when I hear Allison speak there, I hear frustration, but I also hear her saying, I know more about this technology than you do, and there’s a real good use for it. Is that a theme that came up in a lot of your conversations?
Beth McMurtrie: There were a fair number of students who said, I think my professors get what AI is about, but a lot more students said that either their professors take a very hard line on AI or they simply assume that students are using AI for bad purposes.
Jack Stripling: I feel like one of the things professors are probably worried about is that gray area and whether an 18 to 22-year-old knows where the line is between using this as an effective tool and using it in a way that it’s doing your work for you. Do you think that’s part of what’s at play here?
Beth McMurtrie: Yes, absolutely. And I think, I mean, just getting back to the whole cheating dilemma, the problem of having AI write for you is a very real one. And certainly a lot of professors are seeing AI-generated copy, even in the most seemingly innocuous parts of a course, like a personal reflection or a short answer that they were looking for in class. And so I think when they see these little warning signs, they really worry about the larger question of whether students can exercise the kind of self-control they need when they have such a powerful tool on their laptops.
Jack Stripling: So I’m curious, you said you heard from about 100 students. It’s not scientific, but that’s a decent sample size for a reported story. Do we have good data yet, though, about how students are using AI and why they’re using it?
Beth McMurtrie: We have some pretty good data. There are a lot of surveys being done on AI use, as you can imagine. I’ll walk through a few of them. All of these are from 2025, because you really absolutely have to look at the most recent data, because things are changing so fast. And I’ll also show you from this data why it’s such a confusing topic to discuss because there’s sometimes contradictory information in there. So this one survey I looked at from the Primary Research Group, it was a survey of over a thousand students, and they found that 65 percent of students have used ChatGPT in the past month. Another one by Tyton Partners surveyed 1,500 students and they found that 42 percent of students use generative AI tools daily or weekly. So we can see here a little bit that it’s a smaller percentage of students who are using AI regularly. So just because you’ve used AI doesn’t mean that you’re using it on all assignments. But then there’s a third one I was looking at from the Higher Education Policy Institute of more than a thousand undergrads, and they found that 92 percent had used AI in some way, although they didn’t really ask them how often.
So I think this is where we’re starting to get a little bit of the concern that students are using AI all the time, even though some of that data doesn’t really tell us how they use it on a week-to-week basis. So for that, I looked at some data supplied by AI companies. OpenAI ran this report with a survey of about 1,200 college-aged students. And they found that slightly less than half, 44 to 49 percent, use it for the following things: They use it to start papers and projects. They use it to summarize long texts. They use it to brainstorm creative projects. They use it to revise writing, and they use it to explore topics. So I’m thinking, like a search engine. What was interesting there is that there were at least a dozen more use cases: academic research, career advice, computer programming, organizing your schedule. So again, you can start to see, oh, students are using this as this all-purpose tool in their lives.
Jack Stripling: Yeah. Everything from outlining your assignment to putting together your workout or your meal plan.
Beth McMurtrie: Exactly. But even that doesn’t tell you, I think, the full story, because you could see good use cases there, and you could see bad use cases. So I looked at this report by Anthropic, which runs the Claude large language model. They looked at the prompts that students put into their chatbot, and they found something pretty disturbing. They called it an Inverted Bloom’s Taxonomy and educators...
Jack Stripling: Sounds scary. Don’t know what it is.
Beth McMurtrie: Yes. Inverted and Bloom. So anyway, this taxonomy, picture a triangle and this is how people learn. And at the bottom, it’s like, you’re just memorizing a bunch of stuff. Then as you move up toward the top of the triangle, you’re analyzing, you’re applying what you learn. And at the very top, you are creating new knowledge. So they found that Claude was being used to complete what they called higher-order cognitive functions, like creating and analyzing. Claude is like ChatGPT. It’s Anthropic’s version of a chatbot. So Claude was completing the higher-order functions more frequently than it was being used to, say, help students remember or understand something. Even there, Anthropic said, well, it doesn’t preclude students from also doing these higher-order things. Maybe they’re using the chatbot as kind of a partner in this. So it’s like co-creating a project together, but this is cause for concern.
Of course, the kicker here is that they used all this data to announce a new product called Claude for Education, which includes a learning mode that helps guide students’ reasoning without providing answers. So that to me is just this fascinating stew of, OK, we’re going to start raising the alarm bells. We know that our products are potentially being misused. We can’t say for sure. But we’re gonna introduce a new product to help you, the educator, guide your students more appropriately.
Jack Stripling: So as somebody who thinks a lot about teaching and learning, what is your big takeaway from that?
Beth McMurtrie: My big takeaway is that there are real causes for concern about how students are using AI to take shortcuts. I mean, if this is what Anthropic is finding, we need to take that seriously, and there’s no putting the genie back in the bottle.
Jack Stripling: Well, captured in these data points you shared is the possibility that there are some students not using this technology at all. Did you talk to any of them?
Beth McMurtrie: I did, I mean, I would call them sort of a silent minority in the sense that I think there’s a small but significant portion of students who really don’t want anything to do with AI. And then beyond that, there’s a larger portion that are still deeply skeptical of it, even as they use it in hopefully very judicious ways. So the Tyton Partners survey I mentioned earlier found that 20 percent of students said they had never used AI, which I thought was really interesting. But the Higher Education Policy Institute data put that at 7 percent. So again, we’re not quite sure where reality is.
I got a comment from this student, Ezri Perrin, a biology major at Webster University, who I thought put part of the dynamic together pretty succinctly. I’ll read you what they said. They said, “Students are often fiercely divided on AI, depending on their social circles and beliefs. Artistic students and left-leaning students will likely be more anti-AI, while right-leaning students and technology-focused or business-focused students are more likely to be pro-AI. We are not all uniformly glomming onto AI just because we’re young.”
I don’t know about the political bent of students, but the data does actually back up Perrin’s point that the heaviest users of AI are STEM students, engineering and computer science students specifically, and business students, which kind of complicates the narrative that students are just using it to write their essays.
Jack Stripling: So the hippies are still making their own outlines, is what you’re telling me?
Beth McMurtrie: Yes.
Jack Stripling: Stick around, we’ll be back in a minute.
BREAK
Jack Stripling: So Beth, this world you’re describing is very uncertain. People don’t have great ideas of where the line is, of what’s appropriate and what’s not. Are students getting any good advice about how to use AI in a constructive, ethical way? Or is it just the Wild West out there?
Beth McMurtrie: I think it’s more of the Wild West. You’ll certainly find on most college campuses workshops and online tutorials for people who want to understand how to use AI tools. And some campuses are even licensing these tools and so they’re making them that much more accessible. One of the surveys I was looking at, the one from the Higher Education Policy Institute, said that even though two-thirds of students believe it’s essential to be able to understand AI effectively, only 36 percent said they received some sort of support from their institution with their AI skills.
I think the biggest weak spot actually is in particular classes, because AI use is very situational. You’re gonna think about it very differently in an engineering class than you would in a history class, and you’re gonna think about it very differently in an introductory class than you will in a pretty advanced course. And of course, we’ve got different rules by different professors. They can still choose how they want students to use it or not. Where this is manifesting itself is, I asked students a question: Are you sometimes not sure where the line is with cheating? And a lot of them said, yes. I mean, they get that you can’t just turn in an AI essay. And very few of them said that they used AI in writing, which I thought was interesting. But we all know that’s cheating, right? They know that it’s cheating. It’s the other stuff. It’s like using AI to help you do your assignments or to help do your work. I’ll give you one example that I think is pretty interesting. A grad student in computer science wrote in and they said, for example, they might write a proof, ask ChatGPT to point out the flaws in the logic and revise the proof. Does that count as unauthorized assistance?
Jack Stripling: I can see how this could be confusing for students, the environment you’re describing, but I wonder if it’s isolating too. When the machine has all the answers, why bother asking a classmate for help? Is that something you hear from young people?
Beth McMurtrie: Yeah, that really came through in the responses, the written responses I got from the students. I think it’s isolating in two ways. As you mentioned, it can be self-isolating when students start to feel like they’re over-relying on AI. And again, they are very self-aware. You might not agree with where the line is, but they are aware when they’re starting to seclude themselves more and more from their classmates or professors because it’s just so easy to ask ChatGPT.
There was one student I talked to, Anna Swenson, she’s a senior at Ball State University, and she said she went into college not wanting to use AI, but she started using it because some of her classes were taught pretty poorly. So she had to go back to her dorm room and start using AI to help her figure out these topics that she wasn’t really understanding in class. But she said that what it ended up doing is disengaging her from her classes even more. She doesn’t attend class as much and she sometimes takes shortcuts with her work. Her exact quote was, “I feel like I’m not using my brain as much.” I heard variations on that.
Another way that students can feel isolated is from other students, if they wanna connect and the other students don’t, right? I talked to this one student, Shelby Foster, a sophomore at McMaster University in Canada. And she comes from a really small town and she was really excited to go to a large university, to this more intellectually stimulating environment. She had all these romantic notions of what campus life would be like, you know, chatting in groups with other students. And she didn’t find that. She found like a campus where people kind of avoided each other. And she can’t blame all of that on AI. And I think that is hard to separate out. But she told me about how she approached this one classmate who had said some really intelligent things in class and she wanted to get to know her. And so she asked her if she would be interested in peer-editing each other’s papers. And the student said to her essentially, you know, I’m really busy, but why don’t you just use Grammarly to help you edit your paper? And Foster said you know that’s …
Jack Stripling: Heartbreaking.
Beth McMurtrie: Yeah, she was really sad. She said, that’s somebody who could have been a friend.
Jack Stripling: Well, I had a lot of those experiences in college, but it wasn’t because of ChatGPT.
Beth McMurtrie: They just avoided you on principle?
Jack Stripling: Yeah, I mean, we don’t need to get too deep into this, Beth, but yes, I had some of these experiences.
So, I am curious, there are things that this technology does well. You’ve talked about some of them, outlining, summarizing, taking your notes, making them sensible. I think about myself as a reporter, specifically investigative work, which can be painstaking, involves tons of court documents, involves tons of interviews and notes. One of the things that I always do when I’m launching headlong into an investigative piece is I create timelines. They’re kind of the Bible of the story. It’s everything I put together. The act of doing that familiarizes me with the material at a level that I feel like I have total recall of all of the facts of the case, which I think probably contributes to that higher order thinking. I think one of the concerns that professors probably have is that some of these mundane tasks actually do affect the way we think about things. They actually do enhance our knowledge. They make us capable of making broader connections because we’ve spent so much of the time in sort of the mundane organization and outlining. Is that a legitimate concern?
Beth McMurtrie: Yeah, I think that’s actually a central question around AI use. Educators speak of this process in terms of friction or what they call desirable difficulties. The idea you just described is that to truly comprehend something, to master it, takes time and can be quite challenging. It can be, in some ways, unappealing. It’s hard, right? It’s hard to review your notes. It’s hard to remember the key points of something, read it over and distill it and then piece it all together or hold it together in your brain. You’re learning a new way of thinking about a subject. It’s analogous to, say, learning a sport or learning how to play an instrument. You make a lot of mistakes and you tire yourself out in the process of mastery. And so they might say in this vein, it’s really important then to read through a whole bunch of stuff, a whole bunch of articles or book chapters, including some that might not be relevant in the end, because you might pick up new information or you yourself might decide this is not relevant, and then you’re creating that mental map. Or it might be useful to stare at that piece of paper or the computer screen for 20 minutes as you think about how you wanna start your essay. Or figuring out on your own where you went wrong in a math problem.
But you talk to students who use AI in all of these contexts and they say, well, what is wrong with getting a summary of something so I can get to the main point faster? What’s wrong with using AI to brainstorm if I’m the one doing the research? And it comes down to these two different ways of thinking about what AI could do to your brain. Opponents would say it’s cognitive offloading. You’re taking the hard work and the material and you’re putting it into the AI. A proponent, somebody who thinks AI is really good or it could be used really wisely, would call that cognitive amplification. You are extending your mind into the AI. You might use it to store information. You might upload a rubric that you use or a series of questions you wanna ask and run AI through that. So you’re getting a machine to kind of be like you, but faster. And all of this is very unsettled. It’s very situational, like I said before.
I will say that probably the more you know about something and the more you care about something, the more likely you are to use AI wisely because then you become the master of that tool. So if a student hasn’t yet mastered a subject, they can’t necessarily critique what it’s producing. They can just kind of look at it and say, looks good to me. And then I think even more significantly, if a student doesn’t value what they’re being asked to do, then that’s a problem, too. They don’t care if they cognitively offload that two-page English paper. And I think that ties into a broader challenge that we’re seeing in education today, and that’s that a lot of young people don’t find the artifacts of a college education particularly relevant to their lives. They don’t understand, in other words, why they’re being asked to do what they do. And so offloading to AI seems reasonable to them, so they can turn their attention to stuff they consider more important.
Jack Stripling: Yeah, we’ve talked on the show before about the sort of transactional nature of college for a lot of these Gen Z students. What’s in it for me? Explain to me. Is that kind of what you’re getting at?
Beth McMurtrie: That’s part of it. I got this great comment from a student at a small liberal arts college, they’re an English and film major. And they said something I thought was pretty insightful. They said, “I think professors do not understand that when students resort to using generative AI, it’s because the writing process has not been made engaging for them. It’s not laziness, like professors have been saying, but a genuine symptom of the fact that students are expected to be rote producers of work instead of their thoughts and ideas being valued. For decades, people have been teaching students to write like machines. We are most comfortable with work as a product when it feels like no human labor has gone into it. So of course, a machine doing the writing for you feels appealing because schoolwork has been drilled into you as an endeavor which barely requires human presence and ideally lacks it.” That’s a pretty powerful statement.
Jack Stripling: It’s bleak.
Beth McMurtrie: But if we tie it back to what we know, and we’ve talked about this, students for at least a generation have been trained to do well on tests — college entry tests, AP tests, and so on. So if you’re teaching to the test and it’s the grade or the score that matters, then what the student is saying makes perfect sense.
Jack Stripling: Yeah, and I can hear both sides of the argument listening to that quote. On one hand, I can hear exactly what you’re saying. This student is saying, we’re sort of a generation that’s been told to think about outcomes, to think about grades, to think about all of these little boxes we have to check to sort of please you people. And you know, the flip side saying, this sounds like somebody who doesn’t enjoy work that’s drudgery, and welcome to the world.
Beth McMurtrie: Yeah, do you get to choose what you decide is drudgery or not and outsource it to AI? I mean, maybe you will once you’re in an office. I don’t know.
Jack Stripling: One of the arguments we hear a lot from people working in higher education is, well, the solution is just move the goalposts further out, identify the things that AI can’t do well and ask your students to do that thing. Is there any legitimacy to that argument?
Beth McMurtrie: Yes and no. I mean, there are definitely some classes where you have to learn to master the basics. And so those classes are moving toward, I should say moving backward in some way, toward in-class assignments and testing and proctoring and all that just to rule out that AI is being used in an inappropriate way because they just have to teach this stuff. But beyond that, some professors are trying to get more creative with what they’re teaching students. I know I talked to this one computer science professor who said he teaches all the basic coding skills in class, so he knows that the students know that. But then he gives them more ambitious projects outside of class than he otherwise would have, and he has them use AI. But then what he does is he has them keep a log that documents their process of learning and doing the project. So they might write down, here’s what I was thinking, here’s the prompt I put into AI, here’s how AI responded, here’s how that made me think, here’s what I did next. And that’s sort of this process-over-product approach to teaching, which I think a lot of professors would kind of move to if they could. The problem is that that is very time consuming for professors. Imagine reviewing a portfolio rather than just reading a final paper or grading a final exam. And what do you do in a case where you’re teaching 200 students or you just need to get through a lot of material in the course?
The other aspect of that, of moving the goalposts, is that you have to understand how AI works to be able to navigate with it or around it. And I know there’s a survey by Ithaka S+R last year that found that only 18 percent of professors said that they understood how to apply AI in their teaching. So if you’re going to create more sophisticated assignments for students, you’ve got to start by teaching the teachers how to use AI.
Jack Stripling: Well, and this goes to what’s the relationship between a professor and a student in this brand new age. I think if we look over the last 100 years or so of higher education, that relationship, that student-professor relationship, has always been at the core, no matter how much has changed. If you’re lucky as a student, you might have a faculty member who changes your life. We hear about that, some of us have experienced that. But I wonder what it means to have this seemingly omniscient technology to contend with that’s sort of entering the relationship. Now it’s a throuple, you know, potentially. Is that creating tensions between the professor and the student?
Beth McMurtrie: Yeah, it’s a third person in the relationship that one of the people doesn’t want there, I guess. But yeah. I mean, obviously what we can see in the immediate aftermath of the existence of these tools is that trust in students is — well, trust is declining. I was gonna say trust in students is declining by professors, but students are also becoming increasingly mistrustful of their professors if they feel like their professors don’t trust them. So yes, there’s this wedge being put in there. And professors, a lot of them don’t wanna be the cheating police, a lot of them don’t like these monitoring tools, the technology, like Turnitin, where they run a paper through this and it says 20 percent of this paper might have been AI-written and then the professor’s thinking, oh great. What do I do now? Do I have a conversation with the student? Do I check further? And it’s just this icky feeling that you’re setting up this surveillance state in your classroom. There are more subtle things going on, too, other than the mistrust. I think these are really interesting things to think about. And I’ve talked to some people who are actually researching classroom dynamics and how AI is changing that. So it might mean that a student is less likely to engage in class, like they’re less likely to raise their hand if they have a question or go to office hours, if they can just ask ChatGPT.
We’re seeing tensions between and among students, those who want to use AI even maybe in an unauthorized way in a group project and those who don’t. In terms of the student-professor relationship, there’s an interesting ongoing study coming out of Dublin. I talked to these two researchers at University College Dublin who are looking at classroom dynamics. They are talking to professors, they’re talking to students to better understand how AI changes these relationships. What was interesting to them is when they spoke to faculty members, professors spoke almost exclusively in terms of academic integrity concerns. Whereas, the students really wanted to talk about how they were using AI to learn. And what the researchers came away realizing is that students were using tech out of the sight lines of their professors in a lot of ways. For example, students would say, if a professor threw out a question in class, one of them might put it into ChatGPT, get the answer, put it into a WhatsApp group chat, and then everybody in the class would see what ChatGPT said should be the answer to that question and then somebody might, you know, deliver the answer. Chances are the faculty member had no idea that was happening. They just knew that when they threw the question out to the class, there was probably some silence and then someone might raise their hand. Some professors did notice also that fewer students are asking questions in class. And, again, it’s that they could look down at their phone, plug in any question they might have in the moment and then get the answer from AI. So in a lot of ways it is changing the immediacy of the connections in the classroom.
Jack Stripling: As long as you and I have covered higher education, there has been talk about disruption, dramatic change. This feels like a real one, right? I mean, this really feels like a sea change in where higher education is headed. I do think there’s something more primal going on when we talk about AI in college. And it’s about what it means to be a student and what it means to be a professor. Because historically, a professor has been the keeper and producer and distributor of knowledge. But if ChatGPT has all the answers, I wonder what that means for a professor. What does it mean to be a professor if that’s the case? Is there a bit of an identity crisis going on here? Am I going too far with this, Beth?
Beth McMurtrie: I don’t think you’re going too far with this. I think you could expand that crisis to include all of social media and all of the hot takes and flood of information that students are getting every day. You’re right, the professors are not the keeper of knowledge entirely. They’re probably still the keeper of understanding, which might sound a bit squishy, but here’s where I’m going with that: What we’re seeing on a basic level is that professors are struggling to convince students that it’s important not to take shortcuts in life, that they shouldn’t reduce learning to summaries and grades and quick output, that speed isn’t everything, that understanding is about more than just consuming information. That runs up against the efficiency mindset that we’ve talked about, that students have and that, frankly, we all have today. I mean, we’re very much hyped up on doing things efficiently. And professors also have these mixed feelings about whether their job is to prepare students for an AI-infused workforce; that’s the message they’re getting, that that’s what they’re supposed to be doing. And there are some that say, sure, I’ll teach you about the tech and this is the world you’re entering. But I think all professors probably believe that the best defense against obsolescence is to develop your brain so that you’re the one in charge and you know how to be the master of whatever AI you might be dealing with. And sometimes that calls for putting it to the side while you do some really complicated thinking. Again, this goes up against what some tech companies and even what some college administrators are saying when they talk about AI literacy, almost strictly in terms of tech savvy.
Having said that, I do think there’s some common ground because when you look at what students want and don’t want from their professors, they tend to draw the line at AI being in charge. They don’t want AI grading and they don’t want AI course design. They think it lessens the value of a degree. And what they do want is the human connection. The Dublin researchers I mentioned earlier, they asked their students, what is the professor for? And they talked about the importance of getting feedback from a person. They want the professor there for storytelling, which to me also means understanding and making sense of the world. Human interaction is still really critically important. Students are young adults looking to make their way in the world, they are struggling with AI, they know it’s a slippery slope. And even if they start sliding down that slope, the professor’s role is still to challenge them, to teach them, to set expectations, and to decide if they are mastering what they need to master in that course.
Jack Stripling: You know, that’s probably a great note to end on, Beth, because I think that one of the things you’ve illuminated here for me is that sometimes we have pitted the professors and the students against each other in this conversation, but actually there’s a lot of commonality here. There’s a sense that understanding is about more than consuming, as you said. Information is different than knowledge, right? And I think we’ve gotten a lot both here, so I really appreciate you talking to us about it, Beth.
Beth McMurtrie: Thanks. It’s been great to be here, Jack.
Jack Stripling: College Matters from the Chronicle is a production of the Chronicle of Higher Education, the nation’s leading independent newsroom covering colleges. If you like the show, please leave us a review or invite a friend to listen. And remember to subscribe on Apple Podcasts, Spotify, or wherever you get your podcasts so that you never miss an episode. You can find an archive of every episode, all of our show notes, and much more at chronicle.com/collegematters. If you’d like, drop us a note at collegematters@chronicle.com.
We are produced by Rococo Punch. Our Chronicle producer is Fernanda Zamudio-Suarez. Our podcast artwork is by Catrell Thomas. Special thanks to our colleagues Brock Read, Sarah Brown, Carmen Mendoza, Ron Coddington, Joshua Hatch, and all of the people at The Chronicle who make this show possible. I’m Jack Stripling, thanks for listening.