Monday, February 24, 2025

Writing in the Age of Artificial Intelligence: AI and Post-Plagiarism




I teach academic writing to students at the University of Toronto Writing Centre. It’s wonderful and fulfilling work, and I enjoy helping students in several disciplines (engineering, health sciences, social sciences) learn to write better. Like the rest of the academic writing world, we are all making sense of the use of generative AI tools by students, instructors, and researchers: large language models (LLMs) such as ChatGPT, Claude, Copilot, and even Grammarly. Students have told me that they found AI helpful for brainstorming and outlining, as well as for organizing literature reviews and editing for grammar. A recent survey of universities and corporations around the world by the Digital Education Council revealed that a majority of students used AI tools: close to two thirds of those surveyed used AI as a search engine, a third used it to summarize documents, and a fifth used it to create first drafts.

In her 2024 article “The Future Is Hybrid”, Beth McMurtrie suggested that genAI “may eventually take its place in the pantheon of game-changing technologies used every day in education—alongside calculators, search engines, and Excel.”

In my other pursuit as a professional fiction author, I see the artistic and communication industries embracing AI, particularly in the visual arts. I’m told that several publishing houses and magazines have dedicated efforts to publish AI-generated work; some magazines, Copy Magazine for instance, are entirely AI-generated. Author and futurist Bernard Marr writes that “Generative AI is already being adopted in journalism to automate the creation of content, brainstorm ideas for features, create personalized news stories, and produce accompanying video content.” Marr goes on to offer 13 ways that all writers should embrace generative AI, from drafting plot lines to world building. Sports Illustrated was recently found to have published AI-generated stories, and even outlets such as the LA Times, the Miami Herald, and Us Weekly acknowledge AI-written content. I recently learned that one of the top five online science fiction magazines, Metastellar, accepts AI-assisted stories with the proviso that “they better be good,” and it provides some convincing reasons. This has become a hot topic among my fellow professional writers at SF Canada. One colleague informed me that a “new publisher Spines plans to disrupt industry by publishing 8000 AI books in 2025 alone.” On checking the news release, I discovered that Spines is, in fact, a tech firm trying to make its mark on publishing, primarily through the use of AI. The company offers AI to proofread, produce, publish, and distribute books. It is, in effect, a vanity publishing platform (essentially a service for self-publishing), charging up to $5,000 a book and often taking just three weeks to go from manuscript to published title.

AI-assisted writing and communication is a burgeoning field that promises to touch every person in some way, writers and readers alike. Tech companies are scrambling to use it to save time and effort; others are improving current models or developing new ones, and many are training LLMs for better performance. Even I was headhunted as a creative writer by one tech firm to help create safer, more accurate, and more reliable LLMs.

Generative AI applications (Image from Neebal Technologies)

How Universities and Other Educational Institutions Use AI

How universities and other educational institutions are dealing with the challenge and promise of these emerging communication tools varies from outright forbidding AI use in the classroom to full-on acceptance and obligatory use in some classroom projects. McMurtrie described how two instructors at Rollins College, Dan Myers and Anne Murdaugh, had students collaborate with AI on semester-long research projects. The students were instructed to use Claude and Copilot to brainstorm paper topics, conduct literature reviews, develop a thesis, and outline, draft, and revise their papers. Myers and Murdaugh asserted that “the skills that students use to engage thoughtfully with AI are the same ones that colleges are good at teaching. Namely: knowing how to obtain and use information, thinking critically and analytically, and understanding what and how you’re trying to communicate.”

In the fall of 2024, Stephanie Wheeler and others in the Department of Writing and Rhetoric at the University of Central Florida, along with the philosophy department, set up an interdisciplinary certificate in AI. Their purpose was to develop conceptual knowledge about AI; Wheeler asserted that writing and rhetoric have long been concerned with how technology shapes those disciplines. Sharon L.R. Kardia, senior associate dean of education at the University of Michigan, argued that AI could greatly benefit public health through its ability to aid data analysis, research review, and the development of public-health campaigns. However, she cautioned that LLMs also absorb and reflect the social biases that lead to public-health inequities.

One of my Writing Centre colleagues at UofT recently shared some thoughts about a conference session he’d participated in, in which a student panel listed tasks they thought genAI cannot do (yet): generate music, offer interpersonal advice, and verify facts. I think AI can already help with two of these. Chad Hershock, executive director of the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University, shared that his team is researching key questions about whether AI enables or impedes learning: does using AI while brainstorming generate more or fewer ideas? Can generative AI give less-experienced students a better chance to succeed in technical courses? To what extent does using AI help or hinder writing skills? Does having generative AI as a thought partner enhance students’ ability to make a claim and support it with evidence?

My own experience with a less-experienced student’s use of genAI was abysmal: the student had used the tool as a crutch and failed to learn from it. This suggests that the most important limitations of the tool lie with the user’s own limitations, and it points to the need for guidance by educators.

In her 2024 Axios article “Why AI is no substitute for human teachers,” Megan Morrone described Wharton School findings on student access to genAI: while genAI tutors improved student performance on practice math problems, students who used these tools performed significantly worse on exams (where they couldn’t use AI). The researchers concluded that students had used genAI to copy and paste answers, which led them to engage less with the material. Wharton associate professor Hamsa Bastani argued that “if you just give unrestricted access to generative AI, students end up using it as a crutch…[and] end up performing a lot worse.” This is partly because students, often stressed out by heavy workloads, find that LLMs save time and can produce content close to what they might produce themselves. Researchers have even coined a term for this, cognitive miserliness of the user, which, according to writer Stephen Marche, “basically refers to people who just don’t want to take the time to think.”

Melanie M. Cooper, a chemistry professor at Michigan State University, cautioned that while “there’s a lot of ebullience in the AI field, it’s important to be wary.” She argued that it is easy to misuse AI, overriding the system to get a quick answer or using it as a crutch. McMurtrie shared that, while “AI evangelists promise that these tools will make learning easier, faster and more fun,” academics are quick to reject that rhetoric. She ends her article with a cautionary statement by Jennifer Frederick of Yale: “Universities really need to be a counterpoint to the big tech companies and their development of AI. We need to be the ones who slow down and really think through all the implications for society and humanity and ethics.”

Considering the impact of artificial intelligence on writing, Dr. Sarah Elaine Eaton, professor at the University of Calgary, introduced the idea of life in a postplagiarism world. She expanded on her ideas to propose six tenets that characterize the postplagiarism age:

  1. Hybrid human-AI writing will become normal
  2. Human creativity is enhanced
  3. Language barriers disappear
  4. Humans can relinquish control, but not responsibility
  5. Attribution remains important
  6. Historical definitions of plagiarism no longer apply
Six tenets of postplagiarism (image from Sarah Elaine Eaton)

Eaton’s fifth point, that attribution remains important, becomes, I think, all the more important in the presence of AI. Transparency in presentation, particularly in an academic setting, takes on a new level of importance when communicating with tools such as generative AI. Where things come from, which tool was used, and how it was used are key to understanding and interpreting the nature of the writing itself. The path taken to the destination becomes all-important when interpretation, comprehension, and replication are required. To fully understand “where you are,” we need to know “how you got there.” It’s like solving a math problem: if you don’t show your work and just provide the answer, I have no way of knowing that you actually understood the problem and really solved it.

I am certain that generative AI will continue to take on forms that astonish. Its proper use and development will serve humanity and the planet well, but there will always be abusers and misusers and those who simply don’t care. We must be mindful of them all, and remain vigilant and responsible. Because, just as with freedom, if we grow lazy and careless, we run the risk of losing so much more.

Generative AI (image from techvify-software.com)

References:

Eaton, Sarah Elaine. 2021. “Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity.” Bloomsbury Publishing, 252pp.

Marr, Bernard. 2024. “13 Ways Writers Should Embrace Generative AI.” Bernard Marr & Co. February 5, 2024.

Marche, Stephen. 2024. “AI Is a Language Microwave.” The Atlantic. September 27, 2024.

McMurtrie, Beth. 2024. “The Future Is Hybrid.” The Chronicle of Higher Education. October 3, 2024.

Morrone, Megan. 2024. “Why AI is no substitute for human teachers.” Axios, August 15, 2024.

Niloy, Ahnaf Chowdhury, et al. 2024. “Why do students use ChatGPT? Answering through a triangulation approach.” Computers and Education: Artificial Intelligence 6.

Nina Munteanu is a Canadian ecologist / limnologist and novelist. She is co-editor of Europa SF and currently teaches writing courses at George Brown College and the University of Toronto. Visit www.ninamunteanu.ca for the latest on her books. Nina’s bilingual “La natura dell’acqua / The Way of Water” was published by Mincione Edizioni in Rome. Her non-fiction book “Water Is…” by Pixl Press (Vancouver) was selected by Margaret Atwood in the New York Times ‘Year in Reading’ and was chosen as the 2017 Summer Read by Water Canada. Her novel “A Diary in the Age of Water” was released by Inanna Publications (Toronto) in June 2020.