Artificial Intelligence (AI) adoption among students seems to be accelerating rapidly. Anecdotally, many of my friends with no technical background or particular interest in the technology have told me they are finding AI useful. I also see more and more ChatGPT screens open when I walk through Sayles. One of my friends shared that, one day, while sitting in the back of their computer science class, they saw a sea of ChatGPT windows open on students’ computers. More concretely, the number of users of AI services continues to rise.
Unfortunately, in my view, the discourse on AI usage is deeply flawed. Many people do not have a good sense of the capabilities of language models, don’t recognize the real risks of interacting with AI tools, or don’t see how much potential these tools have.
First, I want to stress that AI systems have been steadily improving since at least 2022, when ChatGPT launched. Large language models (LLMs) are getting better at virtually anything we are able to measure, and, as of now, there doesn’t seem to be any strong reason to think this improvement is about to stop. The most dramatic gains, especially in the last year, have been in mathematics and reasoning-related areas. For instance, the AIME, a high school-level math competition, is often used as a benchmark of mathematical ability: GPT-4o scored 12% in May 2024, o1 scored 74% in September 2024, and GPT-5 scored 94% in August 2025. Similar improvements have been observed in coding and scientific reasoning.
This summer, Google and OpenAI competed in multiple high-profile coding and math competitions. Their models generally outcompeted the human participants on problems that we can be fairly sure were not in the training data, as the problems were written specifically for the competitions. These gains may not be apparent to the AI skeptic, because they come from the newer “reasoning” models, which tend to be locked behind paywalls in tools like ChatGPT. Someone who hears that the new GPT-5 is great at math and tries it on the free version of ChatGPT will likely be disappointed and come away with a flawed picture of the state of the art.
In short, these models are now extremely adept at writing, coding, math, and extensive automated web searches. Rates of hallucination (where AIs make something up from nothing) have also decreased tremendously, particularly in systems connected to the internet.
With the understanding that current AIs are very powerful, let’s discuss some risks. The largest risk, particularly in an academic setting, is that of offloading one’s critical thinking to the machine. I recently heard a story in which a student answered their professor’s question in class with a response they had generated from ChatGPT. Whether or not the story is exactly true, it illustrates the worst-case scenario for AI use: if you hand AI every problem you are tasked with solving, you will inevitably degrade your ability to think for yourself. This concern is especially vivid at an institution like Carleton, where the value of thinking is emphasized. There is something deeply important about the life of the mind, and allowing AI to take that away strikes me as tragic. By becoming dependent on AI, a person would live a less rich and interesting life than they otherwise could. The stakes are high when your mind is on the line.
If the risks are so great, surely we shouldn’t be using AI. Shouldn’t we just take a minimalist position and stay as far away from it as possible? I don’t think that approach is right, either. AI holds great promise in a number of different applications, if only we take care to use it properly.
First, there are tasks for which the means of accomplishing them are not especially important. In these cases, AI can make things easier, faster, or more convenient in a way that I don’t think truly damages cognition. For instance, in one of my economics courses, we are explicitly allowed to use AI tools to write R code for running statistical analyses. This makes a lot of sense to me, as learning to write R code is not what the course is about; it is simply a means to the end of deeply understanding the statistical and economic theory. Similar claims can be made about finding research materials (where external help, such as research librarians, is often already employed) or about technical editing for writing. (Though I do think that, beyond grammar and factual correctness, writing should be mostly protected from AI because of its close association with thinking.)
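To make the R example concrete, here is the kind of short script an AI assistant might produce on request. This is a minimal sketch; the file and variable names are hypothetical, not drawn from any actual course:

    # Hypothetical regression: does education predict wages?
    wages <- read.csv("wages.csv")                            # load the dataset
    model <- lm(wage ~ education + experience, data = wages)  # fit a linear model
    summary(model)                                            # coefficients, p-values, R-squared

Nothing in those four lines teaches the economics. The intellectual work lies in interpreting the output, and that is exactly the part the course still asks students to do themselves.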
There is an even more optimistic case, though: AI is the greatest tool we have ever built for pursuing one’s curiosities. Academic curiosity has always been encouraged; a love of learning new things is a wonderful trait, and we now have a machine that can help you learn and explore new topics better than ever before. There is an answer for (almost) every question. Two characteristics of AI systems make them great for this use case: they have a very wide range of knowledge, so they can help you understand many different things, and they have no concept of impatience, so they never tire of explaining and answering questions.
Overall, AI appears to have massive potential for improving education. It has the capabilities of a one-on-one tutor, and it is potentially scalable in a way that human tutors are not.
We live in a time of great uncertainty and technological change, and it is important to keep a clear-eyed view of the capabilities, risks, and benefits of these systems. What I’ve presented here is just a provisional view on the topic at this moment in history, not even touching on the ethical implications of AI. The possibilities for further exploration are vast.