AI is a slippery slope. Over the past half-decade, generative artificial intelligence (GenAI) has evolved from a novelty chatbot that could mimic conversation or tidy up an email into a normalized presence on assignments at every level of academia. The technology’s rapid development and accessibility have reshaped student work habits, tempting students to hand off process tasks such as brainstorming and articulation rather than exercising their own mental capabilities.
These systems have become so intuitive and convenient that using them is undeniably time-efficient, but beneath that convenience lies a pressing question: what happens to critical thinking when we outsource the very process that cultivates it?
A KPMG report examining students and their AI habits has been circulating widely across news outlets and social media platforms.
According to the study, more than 73 per cent of Canadian students now rely on GenAI for their schoolwork, up from 59 per cent only a year ago. Within this group of AI users, 48 per cent admit their critical thinking skills have deteriorated since they began using it.
Now, critical thinking is not just a vague academic buzzword. It is a disciplined, identifiable skill, and one that must be exercised to reach its potential. When students offload the work of interpreting sources and articulating a position in written assignments, they forgo the struggle that strengthens the reasoning skills higher education is designed to cultivate.
This outsourcing could be framed as students simply streamlining tasks that are tedious and unnecessarily time-consuming. However, education professionals at all levels liken the process of sifting through readings and carving out essay outlines to “lifting weights, but for your brain.”
The mainstream discussion of “do you use AI to write your papers?” also often rests on a false binary: one group staunchly opposes AI and does its work as if the technology never existed, while the other uploads all its notes and rubrics to a chatbot and submits whatever it pumps out.
But between these two poles are plenty of students who weave the technology in and out of their work. They use it to brainstorm ideas and interpret course readings, then polish the finished product themselves to circumvent any detection software.
This relationship with AI seems harmless, but it chips away at the micro-skills that make up the foundation of critical thinking, leaving students with polished assignments but weaker intellectual muscles.
Despite this casual erosion of thinking skills, it would be a mistake to cast AI solely as a threat to intelligence. Technology is value-neutral, and its impact depends on how a society chooses to use it.
When we deploy GenAI as a sparring partner that generates counterpoints and challenges our assumptions, the tool could instead become an accelerator of intellectual ability.
The problem arises when AI performs as a surrogate thinker and begins to absorb the human’s role in completing tasks. In these cases, the user may never engage in the messy, frustrating work of transforming raw information into articulate, ordered writing, which is the very process that constitutes critical thinking.
Graphic/Vlad Latis