AI and the Color-Coded Research

A set of color-coded prompt results from ChatGPT, used to help students develop critical thinking about AI responses

Dr. Jessica Cail, who teaches science writing at Pepperdine University's Seaver College, wanted to demonstrate to students some of the issues and limitations of using AI. So she crafted an AI literature review assignment for her students. It is an excellent example of an assignment that teaches about current issues with the technology in a practical way while reinforcing students' critical thinking.

I teach science writing. After a summer of faculty hubbub about the impact generative AI will have on our ability to ensure students are actually *learning* how to write, I decided to work it into my classes' writing pipeline. I'm not burying my head in the sand. I'm not coming off as a Luddite. But I also want to teach students to be critical of anything they read, especially when it comes from AI. For context, students have spent the last couple of weeks reading through the literature and selecting articles that they will be using in their literature review. This means they now know something about the field, its main concepts, and its key players. I then had them ask one of the generative AI programs to write a 3-page, APA-style literature review on their topic, and highlight the content and the sources provided in the following way:

  • GREEN: This information is accurate, the source exists, and its findings match what the AI says. I will incorporate this info into my draft. 
  • YELLOW: This information is accurate, the source exists, and its findings match what the AI says, but it is not relevant enough to my paper to include in my draft. 
  • RED: This information is inaccurate or this source doesn't exist.

Aggregate results from 18 student papers are above. While not all students highlighted every line, there was plenty to arrive at a general consensus: while AI might provide one or two accurate pieces of information, the majority of what it spit out was OUTRIGHT WRONG. They were shocked and horrified:

"You can't trust AI to get anything right. You have to check everything it spits out."

Good. Mission accomplished. I told them to tell all their friends.

➤ Read the complete article in the Pepperdine IT Annual Review 2023 (pp. 6-7)

Image from Jessica Cail's Facebook post related to the article.