Digital tools with AI components, like Canvas and Zoom, have already shaped our classrooms and our approach to pedagogy. Thinking about generative AI as part of that landscape creates opportunities to critically evaluate how tools can support student learning and to identify moments when other tools are better suited to specific learning outcomes. When used thoughtfully, generative AI can help students engage with your course’s content and support them as they develop key scholarly skills. Unguided use of AI, however, can undermine the learning process.
Kristi McDuffie, Dana Kinzy, and Dani Nyikos from the Rhetoric Program recommend the following strategies for revising writing assignments to promote student learning:
For a more extended discussion of pedagogy in the age of AI, check out their "Writing Assignments in the Age of AI" video on MediaSpace.
Because each course has unique learning outcomes, AI policies will vary. Without a clear AI policy, students may have difficulty understanding which uses of AI are acceptable in your class and which are not. To promote student learning and prevent frustration, a comprehensive AI policy will:
Below are examples of low-risk activities that promote student engagement with your course's material:
Prohibited uses could include high-risk activities that compromise sensitive data or introduce inaccurate information. You may also choose to limit the use of AI to encourage students to practice specific scholarly, analytical, and creative skills. Here are examples of prohibited uses and their rationale:
The Library has created a Canvas module that can help you articulate permitted and prohibited uses of AI in your classroom. To add it to your class, log in to Canvas, click the Commons icon in the left-hand navigation bar, and search for “AI in This Course.” When you import the module into your class, you will have a chance to review and modify it before sharing it with your students.
While it can be tempting to use an AI detection tool to help enforce your course’s AI policy, AI detectors cannot reliably distinguish between AI-generated output and human-authored text. The University of Illinois has disabled Turnitin’s AI detection feature due to its inability to consistently discriminate between AI-generated and original writing. Researchers at Stanford have also shown that AI detectors tend to misidentify text written by English language learners and neurodivergent learners as AI-generated. AI detectors are unreliable in part because many AI tools are trained to outsmart automated forms of detection.
Because AI detectors are unreliable and can undermine the inclusivity of your classroom, the authors of this LibGuide unanimously discourage their use.
There are two key limitations of generative AI tools that may lead to a violation of the academic code of conduct. Because GPTs can hallucinate sources, they can inadvertently lead to fabrication, which the Provost’s Office defines as “the falsification or invention of any information, including citations.” Presenting uncited generative AI outputs as your own ideas can be considered plagiarism, that is, “representing the words, work, or ideas of another as your own.” Both are considered violations of the academic code of conduct. The University of Illinois Office of the Provost’s Students' Quick Reference Guide to Academic Integrity describes both of these issues in more detail.