I had never really wanted to write about ChatGPT, or even to consider it, until I came across the novel tool in December. Nonetheless, I asked it to write me a poem about some complex social issue of our day, in the manner of the modernist poets of the early 20th century, and found the poem well structured but agonizingly boring and rudimentary.
I did not think about the tool again until I came across a blunt article published in The New York Times concerning the role ChatGPT ought to play in K-12 schools. The article argues that, like all prior technologies bestowed on us, we ought not to ban the novel tool but instead embrace its abilities so that we are better prepared for whatever forms of technology arrive next. The article showed a crude understanding of the ways in which new technologies affect culture, politics and learning, and it reduced teachers to mere conduits of information. This frustrated not just me but also the hundreds of readers who came across the article and voiced their disagreements in the comments section.
The article blatantly follows in the footsteps of technological determinists, who suppose that human beings are at the mercy of a great wave that can never be stopped and, more importantly, ought to be surrendered to. The author argues that “tools like ChatGPT aren’t going anywhere…this particular form of machine intelligence is now a fixture of our society,” and that “because today’s students will graduate into a world full of generative A.I. programs,” schools should not ban but “thoughtfully embrace” the algorithm. Never mind that a “thoughtful embrace” is nearly a contradiction, since you cannot rightly critique an object you take wholly and willingly; this deterministic thinking also reduces technology to a binary of acceptance or rejection at the moment of its inception, when the paradigm should be a spectrum across time. New ways of accomplishing centuries-old tasks should be treated as what they are: unprecedented forces that require a great deal of analysis over a great deal of time.
Moreover, many of these deterministic arguments forget that technologies do not spring up in a one-dimensional void; they are introduced into the multi-dimensional world that teachers already inhabit. ChatGPT is arriving at a time when public school teachers are awash in COVID restrictions that have completely uprooted their lesson plans and philosophies, and are enduring low wages while their schools face budget cuts that further hinder their capabilities. While a move to ‘embrace’ the technology as a teaching aid may work in private schools with the capital and resources needed to keep an eye on its ramifications, for the much larger portion of schools in the United States, this ‘embrace’ would go almost entirely unmediated, to the detriment of students.
And one may argue that this is a blessing: by allowing an incredibly complex algorithm to step in while teachers are stressed to the core by economic, social and political problems, students may receive an education they otherwise wouldn’t. This, however, is an incredible error, for the algorithm carries with it a bias about how the world functions; the algorithms of today reflect their creators and their creators’ ethos far more than the all-knowing being that dominant tech-enthusiasts love to tell themselves exists. Try asking ChatGPT about the Israel-Palestine issue and see for yourself whether it is biased by its creators and a particular western, liberal sentiment.
ChatGPT is more similar to a search engine than to an experienced teacher who has followed the routines and thought-patterns of their students, and if we assume its authority on issues ranks at the level of an educator, or worse, above an educator, we are royally screwing our students. While I am less concerned about, and even appreciative of, ChatGPT’s authority on certain scientific principles and problems that may help students in STEM, I worry deeply about entrusting children’s learning to an AI without a human bone in its body. Even if a future world contains new AI programs, those programs are not complex enough to rise above and make sense of the sheer intricacy and seeming randomness of human beings and our institutions.
Another folly of the article is its oversimplified understanding of learning. When the author argues that “schools should treat ChatGPT the way they treat calculators[,] allowing it for some assignments, but not others,” the author assumes the computing power of arithmetic functions is comparable to a seemingly endless algorithm compiling large swaths of data from across the internet. A calculator can be controlled in a classroom because its limits are understood, whether or not it helps the student; in the non-mathematical domains ChatGPT takes on, its scope widens to immeasurable extents.
Learning is not merely the input-output process a calculator grants you; it is not simply absorbing information and moving on with your day. Rather, learning is a biological and social process with numerous steps along the way that may differ greatly from those of the person sitting next to you in class; no two people process the same information in the same way. The implicit perspective on how learning happens that runs through this article reminds me of how tech-enthusiasts like the now-infamous Sam Bankman-Fried, in an exchange with the writer Adam Fisher, see reading: “[the book] should have been a six-paragraph blog post.”
The idea that learning and reading are mere absorption of facts, reducible to information and one- or two-paragraph posts (much as ChatGPT operates), strips educators of their agency in compelling students to grapple with complex ideas or processes that may not make sense for a long time. I would argue that the underlying philosophy of those who want to adopt ChatGPT in schools sees education merely as “adduction,” leading someone toward the right answer or some goal, rather than as education has been viewed for centuries (an idea that lies in the word’s very etymology): being led out of ignorance and reliance.