Lane Davis is an unlikely archetype for the AI-using professor. He doesn’t teach in business, computer science, or any other field being reshaped by artificial intelligence. And while he admits being “a little nerdy with the tech stuff,” he has no interest in living on the cutting edge. Rather, Davis, a religion scholar, turned to ChatGPT for a more-prosaic reason. He needed support.
Davis was three semesters into his first teaching job, at Huntingdon College, a small campus in Alabama, when ChatGPT came out. He had been shouldering five courses a semester, trying to figure out how to convey complex theological concepts to undergraduates. “I was really floundering that first year,” he recalls. His doctoral program had provided little training in teaching, and his cash-strapped college could not offer much professional development.
So he learned as he went. By the time generative AI came on the scene, he had read enough books on teaching to have a “pretty-good sense” of how he was supposed to design an effective course. What he didn’t have was help.
Davis began testing tools like ChatGPT and Claude. He asked AI to help improve his descriptions of, say, the Council of Chalcedon or the concept of Logos, so they would make sense to a typical undergraduate who was not a religion major. He used it to streamline a course-development process he’d adopted, called Understanding by Design, by asking AI whether his assignments were aligned with the learning outcomes for his courses.
“I never had it design a course,” he says, but used it to “get an angle on something or get a perspective on something that maybe I overlooked or wasn’t thinking about.”
For all of that help, Davis admits to feeling “kind of weird” about working with AI. The ethical standard he holds on to, he says, is to use it the way he might engage in a conversation with a really good instructor or an expert in a teaching center.
It turns out that Davis isn’t unusual. According to a recent survey by Tyton Partners, about one in four faculty members had used AI to save time in creating more engaging in-class activities or generating quizzes and other assessments. About one in five had used it to create writing assignments or grading rubrics. Another survey, by The Chronicle, found that 52 percent of faculty members had used generative AI to enhance course materials.
Those figures are likely to grow. Whether professors are true believers in the technology’s promises, feel pressured to adapt, or are simply desperate for help, AI tools are becoming commonplace in course design.
That’s fueling competing visions of the future. In one version, AI raises the bar, freeing professors from tedious hours of labor so they can spend more time with students and create engaging courses. In another, AI leads to a breakdown in foundational relationships, chipping away at trust and authenticity as professors and students mediate their interactions through an often-unreliable technology.
Could these starkly different perceptions lead to clashes among professors, students, and administrators? Some think it’s not a matter of if, but when, especially with higher education’s financial future in peril, thanks in part to AI. One recent article highlighted the case of a student who filed a complaint with her university after discovering that her instructor had used AI to generate course materials, seemingly unedited. Why, she wondered, was she paying tuition for this?
Look across college campuses and you can find evidence for any point you’d like to make about generative AI. Some professors have used it to design courses and are thrilled with the results. Others have tried it and walked away, disappointed or frustrated. Some AI users, like Davis, find the process “weird” but are willing to engage.
Zrinka Stahuljak considers her case a success story. A professor of comparative literature and French at the University of California at Los Angeles, Stahuljak teaches a medieval-literature course that was long a heavy lift. Many students enroll because it meets a writing requirement, so she has come to expect a less-than-rapt audience for her lectures. And because the subject is the premodern past, she notes, “it’s already remote for the students.”
To explain Thomas More’s Utopia, a 16th-century philosophical work, to the modern reader, Stahuljak says, you have to describe the political, linguistic, and cultural context in which it was written. “By the time you end up explaining all of that, the class is over,” she says.
So with the help of a developer, she created an “AI-assisted” textbook, into which she fed her own material, including course notes, slides, and videos. Instead of having to teach much of the important context and content during class time, Stahuljak poured all of that into her book, which she saw, essentially, as an extension of herself. She added useful features, such as interactive checkpoints, with which students could measure their understanding of the material.
That, she says, freed up her time for deeper class discussions. “I felt like I had all the time to answer their questions, to engage them, to do a back and forth,” she says, rather than rushing to provide context and pointers on how to read and write, “and leaving all that work to the TAs.”
Students told her they like to listen to the podcast version of the textbook, created by Google NotebookLM, while walking to and from classes, and then skim the chapter when they get home. She found that more students were speaking in class, and those engaged students were better prepared. More students came to see her during office hours, when previously they would have just sought out their TAs.
While some of her colleagues have expressed interest in her work, Stahuljak sees plenty of faculty skepticism toward AI, particularly among those in the humanities. She thinks fear drives much of it — fear of losing their power and losing their jobs. “They don’t understand,” she says, “that it really enhances and amplifies your professorial power.”
Other professors describe their AI use in similar terms: It works well when it is an effective extension of themselves. Amanda M. Main, a lecturer in the department of management at the University of Central Florida, has found that she can give students more detailed feedback, which in turn helps them perform better.
Main, who is chief innovation officer for UCF’s College of Business, uses AI to help students prepare for a class exercise in her conflict-negotiation course. They turn in their plans two days before the live event, outlining their intended strategies, and she sends back AI-generated feedback on their ideas. (She tells them she works with AI.)
Because she teaches multiple sections, Main says it would be impossible to give students such in-depth guidance on her own. In the past she would lose sleep staying up late and reading through three to five pages of material for each of her 80 students. She decided to see what AI could do for her by uploading their work, which she anonymized, to Copilot, along with detailed instructions, a description of the assignment, and her grading rubric. (UCF uses a licensed version of Copilot that protects student data, she says.)
She reads all of the AI-generated feedback, corrects anything it didn’t catch, and tweaks it before sending it to students, adding a personalized note when she can, “just to give it that last human touch.” Then she grades students — with no AI assistance — on their in-class presentations.
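For readers curious about the mechanics, a workflow like Main’s can be sketched in a few dozen lines of code. This is an illustration, not her actual setup: she works through a campus-licensed Copilot chat, so the OpenAI Python client, model name, and file layout below are stand-in assumptions.

```python
# A rough sketch of a drafting workflow like Main's. She uses a
# campus-licensed Copilot through its chat interface; the OpenAI Python
# client here is a stand-in, and the model name and file layout are
# illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
rubric = Path("negotiation_rubric.txt").read_text()
assignment = Path("assignment_description.txt").read_text()

SYSTEM_PROMPT = (
    "You are helping an instructor draft formative feedback on student "
    "negotiation plans.\n"
    f"Assignment:\n{assignment}\n\nRubric:\n{rubric}\n\n"
    "For each plan, note strengths, weaknesses, and concrete suggestions. "
    "Do not assign a grade."
)

def draft_feedback(anonymized_plan: str) -> str:
    """Return AI-drafted feedback for one anonymized student plan."""
    response = client.chat.completions.create(
        model="gpt-4o",  # whichever model the campus license provides
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": anonymized_plan},
        ],
    )
    return response.choices[0].message.content

# Drafts land in a review folder; the instructor reads, corrects, and
# personalizes each one before anything goes back to a student.
Path("drafts_for_review").mkdir(exist_ok=True)
for plan in sorted(Path("anonymized_plans").glob("*.txt")):
    Path("drafts_for_review", plan.name).write_text(draft_feedback(plan.read_text()))
```

The design choice that matters is in the last loop: the AI only drafts, and a human reviews every draft before a student sees it.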
Like Stahuljak, Main has found that students come to class better prepared and have started visiting her more during office hours to talk about what they are learning. She says students’ responses to the AI guidance have been positive. “I don’t see it being able to replace what I do. I don’t see it being able to replace what any instructor does,” she says. “There’s that magic that person brings to the classroom, and if anything, I think we’re going to find a balance.”
The most controversial use of AI right now is, by far, in grading. Professors fear their credibility will be undercut if AI-generated grading goes mainstream. Isn’t that a core part of their job, they ask, and the reason they are paid for their expertise? Aren’t they supposed to evaluate students’ work and guide them toward improvement?
Yet AI companies have been churning out tools at a rapid clip, promising to help overburdened teachers and professors automate a time-consuming process. According to the Tyton Partners survey, 11 percent of faculty members have used AI to provide personalized feedback, and 10 percent have used it for grading. When Anthropic looked at how higher-education professionals use its chatbot, Claude, it found something similar. Seven percent of the prompts it reviewed focused on grading; nearly half of those suggested that users were delegating the task to AI rather than using the technology to guide them.
Kerry O’Grady, a senior lecturer in business communications at the University of Massachusetts at Amherst, considers herself an early adopter and has found AI helpful in aspects of course design. But she was less sure it would work when it came to evaluating students’ work. So she ran a small experiment using two of her courses. O’Grady uploaded her grading rubrics and assignment instructions into ChatGPT or Copilot, along with anonymized student work that she had already graded, and asked for qualitative feedback. She chose not to upload her course materials, because she wanted to imitate what she considers a common use case. When faculty members talk about using AI for feedback, she says, they often just focus on uploading the assignment and their grading criteria.
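Her experiment is simple enough to reconstruct. Here is a minimal sketch, assuming an OpenAI-style API in place of the chat interfaces she actually used; the model, the JSON response format, and the file names are illustrative stand-ins.

```python
# A minimal sketch of an experiment like O'Grady's: give the model only
# the rubric and assignment instructions (deliberately omitting course
# materials), feed it anonymized work that has already been graded, and
# compare its scores against the human grades.
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()
rubric = Path("rubric.txt").read_text()
instructions = Path("assignment_instructions.txt").read_text()
# Human grades recorded earlier, e.g. {"s01.txt": 88, "s02.txt": 74, ...}
human_grades = json.loads(Path("human_grades.json").read_text())

for name, human_score in human_grades.items():
    work = Path("anonymized_work", name).read_text()
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Grade the student work using only this rubric and these "
                f"instructions.\nRubric:\n{rubric}\n\nInstructions:\n{instructions}\n\n"
                'Reply as JSON: {"score": <0-100>, "feedback": "..."}'
            )},
            {"role": "user", "content": work},
        ],
    )
    ai = json.loads(response.choices[0].message.content)
    print(f"{name}: human {human_score}, AI {ai['score']}")
```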
In every test, O’Grady found, the AI feedback was starkly different from how she had graded. In some cases it hallucinated, making up student material, or it pulled from sources O’Grady did not give it. When she tried to refine the results — telling the AI, for example, that a grade was too generous — it would overcorrect.
“If I’m going to spend the time to upload all of my course materials, and then have to double-check AI’s work to make sure it did it correctly, then I am almost doubling my time as an instructor,” she says. “It’s no longer a time saver.”
O’Grady shared her observations with colleagues but worries some may still forge ahead, believing AI will be a useful shortcut. AI can be an aid, she says, but it should never be mistaken for something close to another person. “AI is an algorithm. It’s not a human being,” she says. “Only you can catch things. AI cannot.”
Sometimes the weakness is in the technology: It can’t do what we want it to do. Sometimes, though, AI’s success or failure says more about human nature and how people learn.
Marc Zimmer has one of those stories. A professor at Connecticut College, Zimmer works in computational chemistry, which includes the use of artificial intelligence. So it seemed fitting to try his hand at developing an AI tutor. He uploaded the textbook, his syllabus, and his notes for his introductory-chemistry course. He created guardrails so students couldn’t generate immediate answers to problems. It took a lot of time, as well as financial support from the college, he notes, but it functioned well when he tried it out during the 2024-25 academic year.
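Zimmer’s implementation isn’t public, but the guardrail idea can be sketched: put the course materials in the model’s context, and use a system prompt that refuses to hand over final answers. Everything below, from the prompt wording to the model name, is an assumption for illustration.

```python
# A sketch of a guardrailed tutor in the spirit of Zimmer's: course
# materials in context, plus a system prompt that withholds final answers.
# (His actual implementation isn't public; everything here is illustrative.)
from pathlib import Path
from openai import OpenAI

client = OpenAI()
course_notes = Path("intro_chem_notes.txt").read_text()

GUARDRAIL = f"""You are a tutor for an introductory chemistry course.
Course materials:
{course_notes}

Rules:
- Never give the final answer to a homework or practice problem.
- Ask a guiding question or point to the relevant concept instead.
- If a student pastes a full problem and asks for the answer, walk
  through the first step only, then ask what they would try next.
"""

def tutor_reply(student_message: str, history: list[dict]) -> str:
    """One tutoring turn; `history` carries the running conversation."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": GUARDRAIL},
                  *history,
                  {"role": "user", "content": student_message}],
    )
    return response.choices[0].message.content
```

Prompt-only guardrails like this can be talked around, a weakness that resurfaces later in this story.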
The results, though, were mixed. He found that his strongest students had little interest in using the AI tutor, and the weakest were so overwhelmed by the course they couldn’t use it effectively. But for about 30 percent of his students, it made a difference, and they told him so in their course evaluations.
“They started using it and then realized, ‘Well, I don’t have to wait to meet a tutor … I can actually ask this, and if I don’t understand it, then I can go to the tutor,’” he says. The two “weren’t mutually exclusive.”
Zimmer has since abandoned his chatbot. The latest version of ChatGPT offers a study mode, so maintaining his own tutor no longer seemed worth the effort.
As for the AI tutor’s overall value, he’s not sure. He would like to think that it helped students learn how to study better, but he doesn’t know whether it did. And in the end, it represented a “very small part” of his course. “The engagement in class, the engagement with the material,” he says, “is much more important.”
Zimmer has found AI to be useful in other ways, such as generating creative classroom activities. Yet if he could wish away generative AI, he would. Its destructive potential is too great. But “it’s out, so I have to learn how to live with it, and I have to teach students this concept of co-intelligence,” he says, so they can learn how to use AI constructively.
James Brusseau, a professor of philosophy and of computer science at Pace University, has had a similar mixed experience developing an AI tutor in his online business-ethics course. He thought the tutor could help students who fear looking foolish by asking a question, or those who ask but forget the answer by the time they sit down to do their homework.
Working with a developer, Brusseau created an AI chatbot trained on the textbook he wrote for the course and on his recorded lectures. It’s helpful, he says, but not reliable. “It’s never completely wrong. It doesn’t say crazy things, but sometimes its answers are generic and don’t apply to what we’re talking about in the course.”
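“Trained on” here almost certainly means retrieval-augmented generation rather than actual model training: the bot looks up the passages most relevant to a question and answers from those. Here is a minimal sketch under that assumption; the models, chunking strategy, and file paths are illustrative.

```python
# A minimal retrieval-augmented chatbot over a textbook and lecture
# transcripts, assuming OpenAI embeddings and chat models (Brusseau's
# actual stack isn't public; paths and chunk sizes are illustrative).
from pathlib import Path
import numpy as np
from openai import OpenAI

client = OpenAI()
text = (Path("business_ethics_textbook.txt").read_text()
        + Path("lecture_transcripts.txt").read_text())

# Naive fixed-size chunking; real systems split on sections or
# paragraphs, and would batch the embedding requests.
chunks = [text[i:i + 1500] for i in range(0, len(text), 1500)]

def embed(strings: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small",
                                    input=strings)
    return np.array([d.embedding for d in resp.data])

chunk_vecs = embed(chunks)

def answer(question: str, k: int = 4) -> str:
    """Answer from the k passages most similar to the question."""
    q = embed([question])[0]
    sims = (chunk_vecs @ q) / (np.linalg.norm(chunk_vecs, axis=1)
                               * np.linalg.norm(q))
    context = "\n---\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content":
                "Answer using only the course passages below. If they do "
                f"not cover the question, say so.\n\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

The sketch also hints at why such a bot goes generic: when retrieval surfaces only weakly related passages, the model leans on its general knowledge, producing answers that are never completely wrong but don’t quite fit the course.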
The first semester he offered the chatbot, few students used it. This semester he requires students to use the bot to help them answer questions, in hopes that they will learn how to work with AI as if it were another person, perhaps a professor. So far he has found that about half aren’t using it. Others use it mindfully, as he had hoped. And still others — as he can see from the chat logs — are trying to get it to do their homework. That maps to how he sees the uptake of AI among students in general.
Brusseau continues to experiment with the technology, but not out of some deep love for AI. While it is great for convenient and shame-free questions, he says, it may also be dragging us down to its level so that we communicate in ways that feel mechanical and empty.
The most he can do is redirect students toward its best possible use. “I can see that this is a tidal wave, that there’s absolutely no stopping it,” Brusseau says. “It’s adjust or die.”
‘Adjust or die’ is hardly an uplifting rallying cry, but it is a common one when it comes to AI. What happens, though, when you’re not sure how much you’re supposed to adjust? Could you become so adept at using AI tools that you work yourself out of a job?
That’s not an idle threat. C. Edward Watson, vice president for digital innovation at the American Association of Colleges and Universities, has spoken at three or four leadership retreats where administrators have discussed AI as a cost-saving measure in teaching. This summer, during one retreat, the first question he received was from a university chancellor who said that faculty members always push back against large-enrollment classes because they are hard to teach. “Doesn’t this now mean,” Watson recalls the college leader asking, “that we can fire all the adjuncts and increase class sizes and require faculty to use AI?”
In a time of financial crisis, Watson says, “AI is absolutely being considered” by administrators as a way to reduce costs. He reminds college leaders that the greatest predictor of student success is the quality of the professor. AI will be most effective, he tells them, not by allowing colleges to increase class sizes, but by magnifying the impact of the professor. If instructors save 10 hours on course preparation, he says, “take those 10 hours and then reinvest it in our students in some way.”
At Connecticut College, which has made a significant push to bring AI to the forefront of campus conversations, Susan Purrington’s job is to help professors navigate the challenges AI presents and help train those who want to use it mindfully in teaching. Some who use AI tell her they feel enough shame or guilt about using it that they keep it secret from their colleagues.
“I spend a lot of time talking about what it means to be an AI user and trying to take away the power from this technology and give it back to the humans,” says Purrington, the college’s first generative-AI teaching and learning fellow. “We should be transparent with our use, and we should be explaining why we’re choosing to use things for this purpose but not for this other purpose, and helping model that behavior to our students as well.”
She understands the ambivalence many professors feel about AI, because she feels it herself. Some days she loves it and is “ready to shout from the rooftops” about something positive it has done. “Then I turn around and get really mad that these companies even exist who are stealing our information, who are violating copyright laws.”
Watson, who writes and teaches about AI, encourages faculty members to think of it as an assistant. Professors aren’t learners like the students they might discourage from using AI. Instead, they are experts in their fields. In that way, he says, using AI tools to strengthen an assignment or design an activity is no different from, say, talking to a co-worker.
The notion of interacting with AI as if it were a colleague doesn’t sit well with some educators. After all, AI can’t actually think. And it sometimes makes stuff up. Equally important, skeptics say, if professors worry that students could get lost in an AI echo chamber by relying too heavily on the technology — and conflating its output with their own ideas — couldn’t professors do the same?
Problems with overreliance on AI in the workplace are already starting to appear. A recent Harvard Business Review report found that managers complain of being overwhelmed with AI “workslop,” material generated by AI that looks polished but is riddled with errors or misses the point. It most frequently comes from peers but also from bosses and direct reports.
Kevin Gannon, director of the Center for the Advancement of Faculty Excellence at Queens University of Charlotte, is troubled that some faculty members are turning to AI rather than the teaching experts who already exist on campus. Chances are, he says, talking through your teaching challenges with another person will give you better results than AI could. If there isn’t a teaching center, he says, faculty members can still look to, say, an instructional designer or to each other to create communities of practice.
“Overworked faculty will talk about using it as a thought partner, but it’s really a content generator,” he says of AI. “When we start anthropomorphizing it like that, we get into some places that aren’t very helpful.”
Lindsay Masland, executive director of the Center for Excellence in Teaching and Learning for Student Success at Appalachian State University, was an early AI adopter, but over time her reservations have grown. She doesn’t like the way people have started using it to shape their own voices, such as in writing emails, and says that she sometimes feels like she’s in a “thruple” when AI is involved: It only really works when both people agree to involve AI in the relationship.
As she put it in an email: “Developing meaningful relationships between two people is hard enough, whether it’s professor-student, student-student, professor-professor, but now, we’ve added an additional layer of difficulty and opacity between us. And hey, if the robot thruple works for you and if both humans are consenting adults, then live your best life, but the rest of us are allowed our grief at the intrusion.”
She understands why instructors might turn to AI when they are feeling burned out or overwhelmed with heavy teaching loads. But she worries that they could end up harming themselves in the long run if they come to depend on a tool that ends up stifling their skills and creativity. “It’s a quick fix right now,” she says. “But down the road people might feel like they’re teaching courses that aren’t theirs — because they aren’t, not fully.”
Students can already feel that way about AI-assisted courses. A professor who has a no-AI policy for students, for example, but uses AI to generate assignments without being open about it, is likely to lose student trust.
O’Grady, the UMass professor, recently asked her students — 75 of them across three courses — what they thought about professors using AI in grading. They said they wouldn’t mind if it was a course they didn’t care about. If it was in a course for their major, though, “they would be livid,” she says. They didn’t want to put work into something only to find that the professor was using AI to give them feedback.
Watson has asked students in focus groups about AI in grading. One group in particular seemed to be more positive than the others, he notes, so he asked why. A student who said they were politically conservative thought AI might grade more fairly. Another student noted that a professor of theirs recently told the class she had been grading papers from 7 p.m. to 2 a.m. “And the student said she was thinking, ‘Did you grade my paper at seven, or am I one of the ones you graded at two?’”
Sophia Hendrick, a junior at Appalachian State who works with Masland and other undergraduates in a research group on teaching and student success, described an experience she had with AI during a summer course on religion, taught asynchronously online.
Students watched recorded lectures. Afterward, an AI chatbot created by the instructor would ask questions, and then students would respond. They needed to reach a minimum word count in their chats, but otherwise the goal was to ensure they understood what they were learning. The chatbot might ask students to compare how Jesus was described in different Gospels of the New Testament.
If Hendrick responded incorrectly, she says, the chatbot would ask her to rephrase her answer so that — she assumed — she would get closer to the correct one. But she discovered that it could be intentionally misled — which she did sometimes just out of curiosity — or say something was correct when she knew it wasn’t.
She doubts the instructor read the transcripts, since he had 100 students to teach in a fast-paced, six-week course. “I didn’t get any feedback,” she says, “and all of my grades went in as a 100.”
In some cases, the choice may be between AI-generated feedback and no feedback at all. Main, the lecturer at the University of Central Florida, notes that some first-year courses in the business college enroll more than 1,000 students.
The college plans to pilot an AI tool in some of those courses that will offer feedback to students as they meet virtually in small groups outside of class. The goal in creating these small groups, says Main, is to help students develop connections with each other. The AI tool, designed using Bloom’s taxonomy, will evaluate students’ comprehension of the material as well as their level of engagement, their communication skills, and their teamwork. Instructors will be able to review the AI feedback to see where students are struggling with course materials.
As AI tools become more easily available and easier to use — Canvas now embeds grading tools in its learning-management system, for example — it’s likely that a broader range of instructors will try them. Teaching experts say it’s critical that professors talk openly with colleagues and seek guidance from administrators on appropriate use.
Gannon has argued for bringing the people most critical of AI to the table when setting policy. Those skeptical voices can help deepen understanding of the challenges inherent to the tech, he says, such as the idea that AI will reduce workloads when it can actually take a lot of work to ensure its accuracy. He hopes that people will pursue what he calls critical digital literacy: demystifying AI, and understanding its relationship to us and what it can and can’t do. “That’s where I have seen the conversation gain the most momentum,” he says.
As for Davis, the religion scholar, he’s now a more-seasoned instructor, teaching graduate students at Christian Theological Seminary, an institution where faculty members and students aren’t particularly interested in AI. But he continues to find the tools valuable when he designs his courses. He has asked AI to examine his course materials for anything missing, for example, after he has uploaded his syllabus, assignments, and learning goals. It can help him think of more interesting ways to design an assignment.
He sets clear boundaries. He won’t ask AI a question unless he already has at least a general sense of the answer. And he doesn’t use AI for grading.
“I feel like I’m just using it in the same way that I could make use of a really smart colleague who is basically available 24/7 to me,” he says. It’s a line — a fuzzy line at times, he admits. “But it’s where I’ve landed.”