Would You Let a Machine ‘Think’ For You? 


Chatbots could cause humanity’s doom. Not necessarily through violence, but through the slow death of human thought.  

These systems are reshaping how people approach even the simplest decisions. What should I wear today? How do I respond to this email? Could you write an essay on Abraham Lincoln so I can get on with something else? The willingness to hand off judgment is becoming a habit, and the social cost is rising right alongside the convenience. We expect machines to answer not only questions of fact but also questions of preference, morality, and judgment. We want them to fill in the blanks in our own reasoning, and we are thrilled when they oblige. 

Classrooms have become a particularly visible site for this shift. They now echo what feels like secondhand conversation. Students prompt a system, collect the output, then run it through tools that promise to erase the fingerprints of automation.  

Instructors notice the strange uniformity of the resulting essays and try to push back. Their attempts, however, often miss the deeper issue: the problem is not that students are using tools—they always will—but that they are willing to let machines do the thinking. Some instructors ask ChatGPT itself if it wrote the text, which clarifies nothing. Others rely on AI classifiers that guess machine involvement but cannot confirm it. These detectors are shaky enough that researchers now edit their own writing to avoid looking artificial. Imagine that: humans shaping their prose to appear more human. 

It would almost be funny if it did not reveal something more troubling. People are becoming comfortable letting machines carry the mental load, and equally comfortable assuming another machine will solve whatever problems arise from it. The problem is not technical, and it cannot be solved with better detection algorithms or stricter rules. It is cultural, psychological, and deeply human. 

The convenience of outsourcing thought tempts us into habits we barely notice, and those habits are reshaping how knowledge, judgment, and learning are practiced. 

It was not always like this. In the 1960s, Joseph Weizenbaum developed ELIZA, and despite its simplicity, it managed to prompt real reflection.  

ELIZA is believed to be one of the world’s first chatbots. It asked questions that pushed the user back into their own thoughts. If the user said they were sad, ELIZA might ask why. If the user complained about something, ELIZA might ask them to elaborate on their feelings. The user had to supply the meaning, direction, and interpretation. 

ELIZA never pretended to think in the first place; it would just guide the user through their thoughts and ideas via a series of questions. It was a mirror, not a substitute. The person sitting at the typewriter did the work. 

The contrast between then and now is striking. Early systems left the thinking to us. Current ones offer to think in our place, and we have grown surprisingly willing to accept that offer. Over time, the effort we spend thinking grows smaller, until thinking becomes something we would rather watch than do. 

This outsourcing of thought extends beyond classrooms. In workplaces, email drafts, reports, and even computer code are increasingly produced with the assistance of large language models. Decision-making in professional and personal contexts follows the same pattern. We consult the machine, we rely on its suggestions, and we let it propose what should be done next. And yet, despite the apparent intelligence of these systems, they do not know, care, or understand. They merely recombine patterns learned from past data. 

There is a subtle danger in this arrangement. Thinking, reflection, and judgment—these are skills that require practice. If we allow machines to shoulder the work indefinitely, those skills may atrophy. Every question posed to a chatbot is an opportunity lost to wrestle with uncertainty, weigh alternatives and arrive at an original conclusion. 

The lesson is subtle but urgent. Machines can assist, suggest, and inform. But when they replace the work of thinking itself, we are not just outsourcing tasks. We are outsourcing our minds. Slowly, the act of thinking becomes something we would rather watch than do. 

This raises the question: is this humanity’s end goal? Do we truly think of these machines as tools, or is our goal to replace thinking? It’s human nature to replace the skills we evolved with something we deem better. Usually, new technology helps us become more efficient, allowing us to dedicate more of our time elsewhere. We evolved to hunt, and then we developed agriculture so we did not have to hunt. We evolved impressive strength, and then we developed tools that automate many tasks requiring strength. This is a tale as old as humankind, and it is one likely to continue. So we will ask again: is our ability to think the next thing we mean to replace? 

Contributed Graphic/Vlast Latis


Serving the Waterloo campus, The Cord seeks to provide students with relevant, up to date stories. We’re always interested in having more volunteer writers, photographers and graphic designers.