ChatGPT’s emergence in academia poses a threat 


Robot hand writing on paper
Photo by: Haley Richards

ChatGPT, an AI-powered language model, was released for public use late last year.  

Developed by OpenAI, an artificial intelligence research company, ChatGPT allows its users to engage in a humanlike conversation with the chatbot to assist with various tasks.  

The arrival of this technology has undoubtedly raised some questions regarding ChatGPT’s presence across different fields. 

In academia, an interesting conversation has surfaced on ChatGPT’s relationship with students, professors and learning as a whole.  

“Eventually we’re going to reach a stage where we learn how to live with it. Everybody adjusts expectations and in a couple of years I think these conversations about AI will be very different and there’ll be a lot more people using it,” said Mark Humphries, a history professor at Wilfrid Laurier University.

“I think this is really important to get across to students. They should pay attention to what individual professors say because every professor is going to say something different,” Humphries said.

This semester, some courses have adopted a generative AI policy in their syllabi.  

Students are informed that misuse of language-processing models will have consequences.

“Professors have been [advised] by the university that we include a statement on our syllabus telling students what you can and can’t do with generative AI in a classroom and be as clear as you possibly can,” Humphries said.

“This is the first semester where people are planning for ChatGPT. You’ll see some people who want to go back to pencil and paper exams because that’ll help, because you’ll know students are writing it in person. Other people might not do that. It’s a tough time for both students and faculty because there isn’t an overarching policy or even an idea of what to do, it’s just kind of a hodgepodge response.”  

Restrictions on what the chatbot can do are another factor in its role in an academic setting.

“I think a lot of people are really worried about can [professors] detect [AI use], but the reality is it’s more of a problem of can AI produce an assignment that meets requirements? And the answer is right now, in a lot of cases, no,” said Humphries.  

“I would say that it’s less about detecting AI than it is about the fact that AI, in most cases, can’t do the things we’re asking it to do,” Humphries continued.

“[ChatGPT] has problems with the accuracy of quotations, for example. I think you’re gonna see a lot of professors really enforcing that this year, that if you’re required to have accurate citations, they’re going to check them more than they probably were in the past. These are symptoms of AI writing, but they’re also just symptoms of bad writing in general.” 

As ChatGPT becomes further entrenched in society and other generative AI models come on the scene, universities will have to decide where these chatbots fit within their institutions.

“[ChatGPT is] very good at analyzing data and coding and things like that.  I’m really interested in those types of uses and I think as we go forward we’re going to see a lot more roll out of using AI in research and we’re going to see a lot more uses for AI in the classroom,” said Humphries.  

Courses examining generative AI’s relationship with the humanities have also been developed.

In the winter semester, Humphries is teaching ‘ChatGPT Futures: Generative AI and the Digital Humanities’.  

“On the generative AI committee for the university, the best advice to people is just be clear. It’s not fair to students if they don’t know [professors’] expectations and they need to know what to do,” added Humphries.


Serving the Waterloo campus, The Cord seeks to provide students with relevant, up-to-date stories.