ChatGPT, an artificial intelligence chatbot developed by OpenAI, was launched in November 2022. In the past few months, the tool’s newfound prominence has sparked discourse around the role of artificial intelligence (AI) tools, including ChatGPT, in college classrooms. As Beth McMurtrie wrote in “The Chronicle of Higher Education,” “Higher education, rarely quick about anything, is still trying to comprehend the scope of its likely impact on teaching — and how it should respond.”
Although its meteoric rise happened almost entirely over Winter Break, there have already been conversations among Carleton faculty about the ways in which ChatGPT and AI should — or should not — be integrated into Carleton’s teaching and curriculum. For instance, earlier this term, the Learning and Teaching Center hosted a lunch titled “The Teaching Toolbox: Engaging ChatGPT in the Classroom,” featuring a panel of faculty members.
It is important to note that Carleton’s current academic policies do not dictate any official bounds of acceptable or unacceptable use of ChatGPT — or of any other technology, for that matter. “A good blanket policy. . . is pretty difficult to formulate. . . There are some cases in which [a technology is] a central component to a class or an assignment, and some in which it’s rightly forbidden completely,” David Liben-Nowell, the Associate Provost of the College, explained. “For example: Is a spell checker or grammar checker appropriate to use? Maybe in some assignments in an early language course it should be absolutely forbidden, but in a comps paper it should be absolutely mandatory.”
Currently, Carleton’s primary online resource with guidelines around ChatGPT and AI use is the Writing Across the Curriculum (WAC) website. The website lists four main pages — “AI Course Policies,” “AI-Resistant Assignments,” “Incorporating AI Tools into Your Courses” and “Ethical Issues in AI.” These pages outline advice for faculty looking to facilitate productive use of ChatGPT within their teaching and assignments.
George Cusack, the Director of WAC and a Senior Lecturer in English, pointed out the fast-changing nature of — and subsequent uncertainty around — ChatGPT. “It’s. . . difficult to design lessons when the technology and the industry around it is moving so quickly,” Professor Cusack commented. “For example, ChatGPT is free for everyone right now, but it won’t be forever. . . Any assignment we create using it this term might be impossible for students to complete in the fall if they can’t afford a subscription.”
Nevertheless, some faculty have integrated ChatGPT into their teaching. Mitchell Campbell, Visiting Assistant Professor of Psychology, has used the tool to “show students how to use ChatGPT to debug R code,” for example. Mike Nishizaki, Assistant Professor of Biology, has also used the tool across “all levels” of his teaching, including BIOL 126, the biology department’s introductory course; BIOL 363, a higher-level biology seminar; and senior comps. Nishizaki stated that he finds ChatGPT to be “useful, actually, as a teaching tool,” because “it forces the student to critically assess the tool itself. . . When ChatGPT gives an answer, what’s good about that answer? Why is it good or bad? Why is it failing?”
Some faculty members expressed that they regularly use ChatGPT in their own daily lives outside of teaching. For example, Alfred Montero, Professor of Political Science and Director of Public Policy, stated, “I use ChatGPT every day and about as often as I use Google. . . I ask it for solutions to various questions, to get its opinion on logical issues and to produce some very aggregate data to help me answer questions.” Nishizaki’s initial experience with the tool was what inspired him to incorporate it into his teaching. “When it came out during the Winter Break, I tried it, and it helped me write much quicker than I used to,” he recounted. “I thought that if I’m going to use it, I should [allow it] for students too.”
Faculty members, however, also emphasized ChatGPT’s limitations, many of which originate from ChatGPT’s existing data sources. Jonathan McKinney, Visiting Assistant Professor of Cognitive Science, pointed out that “AI programs depend on human-generated data. . . Without us writing B- papers with overgeneralized points and that data being harvested and used, there would be no ChatGPT.” Nishizaki also mentioned that “in biology, we recognize that our research comes from a subset of researchers that isn’t very diverse. We have to be mindful that [ChatGPT] is pulling from a flawed set of information.” Furthermore, Montero pointed out that “Most of the research in peer-reviewed journal articles and books are behind paywalls and protected by copyright. ChatGPT doesn’t have access to most of it for now, but that will change.”
Sometimes these flawed data sources can lead to lackluster results or complete misinformation. McKinney mentioned a “particularly troubling output [that] some refer to as ‘hallucinations,’ where programs make up citations that don’t exist.” Campbell recounted a personal experience that portrayed this exact phenomenon. He noted, “[ChatGPT] generated multiple titles for articles from my grad[uate] school advisor that were very plausible but completely fabricated.”
Another limitation of ChatGPT is that the tool can impede student learning when overused. “In a college class, the final product is only valuable as a record of the intellectual work you did to complete it,” Cusack remarked. “Even if you can produce solid writing with ChatGPT that will get you a decent grade — which, again, you generally can’t — doing this doesn’t really accomplish anything, because you won’t learn anything that way.” Campbell expressed a similar sentiment, stating, “I think [ChatGPT is] potentially an extremely helpful and powerful tool that can aid student learning, but it can also definitely be used as a replacement for learning.”
In spite of those limitations, ChatGPT usage has grown at Carleton, though students use it to varying degrees in their academic lives. “I don’t really have personal experience [with ChatGPT], but [the phrase] ‘just use ChatGPT’ is kind of thrown around on campus,” said Leo Xiao ’25, a biology major. “I’ve seen other people use it, like copy-pasting a text and asking ChatGPT to help summarize it.”
Breanna Lefevers-Scott ’25, a computer science major, pointed out the convenience of using the tool alongside textbook readings. “Some of my reading assignments are straight from textbooks which are hard to read on their own. . . So throwing a word in ChatGPT to explain this or define this really helps with understanding what I’m reading,” Lefevers-Scott explained. “Just having it explain something to me instead of traversing Google. . . [is] a perk. . . it tends to explain concepts well and provide some examples.”
Other students described using ChatGPT to generate ideas for writing assignments. “I do often give it my short essay/reflection prompts to see what it can come up with,” an anonymous economics major commented. “Facing a blank page is extremely difficult, while being critical of [an existing] body of text is relatively straightforward, so ChatGPT has been helpful for me in that way.”
The student also articulated that incorporating sections of ChatGPT’s generated responses may feel substantively different compared to utilizing content from traditional sources. “Although I very rarely do it, it doesn’t feel like plagiarism to copy and paste and rearrange some of [ChatGPT’s] sentences directly into what I’m working on,” the student explained. “It would never feel acceptable to copy and paste directly in the same way with websites or research papers. I think it has to do with the fact that I’m not stealing from someone else, like, there’s no person that’s being ‘wronged.’”
Students, however, are also aware that the quality of ChatGPT’s responses is not always up to par. For example, Will Brewster ’23, an art history major — who also published a Viewpoint piece titled “Education and ChatGPT” earlier this year — described the lackluster answers that ChatGPT provides. “For the most part, I insert prompts that I have already completed to see what I may have missed; yet, as a humanities major, I find most of the responses to be pretty generic and unthoughtful,” Brewster stated. “[For example,] ChatGPT is able to offer a summary about a specific art period or a relationship between two artists, yet it is quite unable to contemplate primary sources and understand their context and power in shaping the world.”
Misinformation is also a significant concern for students. “I went to a high school that had an immense emphasis on the importance of citing sources, so I think one of the things I’m most wary about is ChatGPT’s inability to cite its sources,” the anonymous economics student noted. “It seems like one of the biggest pitfalls ChatGPT has is people may blindly believe the nonsense it spews — and if you’ve used ChatGPT, you’ll know it is capable of spewing a lot of nonsense.”
For example, an anonymous mathematics major reported that ChatGPT’s mathematical abilities still need significant improvement. “When I tried to ask it to solve my 200-level class problems, it almost never got them completely,” the student explained. “It’d state the most complicated calculus idea, then proceed to make some elementary algebraic mistake right afterward. When I pointed it out, it’d say sorry and then reply with something different but still incorrect — it continued cycling through a couple possible responses.”
Ultimately, both students and faculty recognized the importance of Carleton directly confronting the changes brought about by ChatGPT’s development. “I believe that humans will not be able to regulate, ban or otherwise control this technology. . . [and] it is not just every decade that a world-changing technology reaches a point of diffusion and utility that it promises to change capitalism, politics and education,” Montero commented. “We should all assume that AI will be part of the toolkit of every Carleton student in the future, so it better be taken seriously by every Carleton professor sooner or later.”
“As faculty, I think we’ll need to do a lot of work in the coming months to better understand how this kind of technology works as well as its strengths and limitations,” Campbell added. “I think it would do a great service to Carleton students to actively teach them how to use technology like ChatGPT in ways that are helpful, productive and even generative!”
Lefevers-Scott pointed out that the learning process would require students and professors to communicate and be transparent about expectations. “I think that [ChatGPT] should be acknowledged in academics and encouraged to be used outside of classes in a way that makes sense to each class and professor’s preference,” she explained. “I would imagine that it would take some trust of the professors in their students to use it right.”
Of course, the potential downsides and negative effects of the technology require continued conversations. “Carleton should create spaces to talk about these issues [with ChatGPT],” McKinney emphasized. “We have a top-of-the-line computer science department, so let’s bring in theorists to talk about the dangers and uses of this technology.” For example, McKinney recommended “the works of AI ethicists, like Timnit Gebru, Abeba Birhane and Ruha Benjamin. They are years ahead of us, and their work will be essential for us as we learn to live in a world with machine learning and algorithmic tools like ChatGPT.”
Brewster also pointed out the possibility that students may become less inclined to engage fully with their writing due to the tool’s capabilities. “I think the college has to recognize that the true danger in ChatGPT lies not in forgery, but rather a sign that students may no longer enjoy writing papers, digging through primary and secondary sources, asking their professors for help, etc.,” Brewster explained. “Colleges ought to find the best way to get students excited again about running the gears within their own heads to produce an examined work.”
Cusack ultimately recommended that students “use technology when it’s easily accessible and can help them with work that isn’t directly related to the intellectual task of their assignment. . . I’ve read about several creative uses for ChatGPT that. . . make the process of academic writing easier but still require the student to do the important work that they’re meant to learn from.” Cusack noted, however, the importance of “check[ing] with your instructors before you use an AI tool to help you with a given assignment.”
Regardless of whether faculty members ultimately permit the use of ChatGPT in their courses, the technology is bound to alter the ways in which instructors design their curricula. “If anything, I think that fear of cheating will prompt more professors to compose assignments that develop students’ higher-order metacognitive abilities,” Montero stated. “I also hope to see more emphasis on oral presentations and extemporaneous demonstrations of what students know.”
Finally, Nishizaki reiterated the unique and indispensable role of the student in a world with artificial intelligence. “Where we’ve come to in our discussions is [the importance of] coming up with good questions for experiments,” Nishizaki articulated. “[ChatGPT] can tell you what [experiments] you can do to confirm what’s [already] out there. We need someone to come up with a crazy experiment that isn’t really recommended, but that’s going to break through all of our dogma.”
Overall, Carleton’s faculty and students will reckon with ever-changing implications, both positive and negative, of AI’s rapidly growing role in every corner of our lives.