
University Library, University of Illinois at Urbana-Champaign

Introduction to Generative AI

This library guide is a UIUC campus resource to read and reference for instructional, professional, and personal learning. Updates will occur on a semester basis. Last Updated: March 2024

Prioritize your learning outcomes

Image: An inviting classroom with high ceilings, filled with desks, chairs, and framed paintings and decorated in pastel colors, generated by Adobe Firefly.

Digital tools with AI components like Canvas and Zoom have already shaped our classrooms and our approach to pedagogy. Thinking about generative AI as part of that landscape creates opportunities to critically evaluate how tools can support student learning and to identify moments when other tools are better suited for specific learning outcomes. When used thoughtfully, generative AI can help students engage with your course's content and support them as they gain key scholarly skills. However, unguided use of AI can undermine the learning process.

Kristi McDuffie, Dana Kinzy, and Dani Nyikos from the Rhetoric Program recommend the following strategies for revising writing assignments to promote student learning: 

  • Tailor writing assignments to the themes and content of your course
  • Prompt students to cite class discussions and readings
  • Require specific kinds of evidence and detailed analysis
  • Design sequential assignments that build from one to the next
  • Include outlines, proposals, and other prework as part of the assignment
  • Encourage revision based on instructor feedback

For a more extended discussion of pedagogy in the age of AI, check out their "Writing Assignments in the Age of AI" video on MediaSpace. 

Develop an AI course policy

Because each course has unique learning outcomes, AI policies will vary. Without a clear AI policy, students may struggle to understand which uses of AI are acceptable in your class and which are not. To promote student learning and to prevent frustration, a comprehensive AI policy will: 

  • Encourage the use of AI tools that improve accessibility (e.g., speech-to-text, text-to-speech, image recognition) 
  • Provide examples of permitted uses
  • Explain why certain uses are prohibited

Below are examples of low-risk activities that promote student engagement with your course's material: 

  • Having a text-based AI tool ask you questions about your project, one at a time, until you tell it to stop
  • Using an image-based AI tool to generate a sample image and discuss which features align with course materials
  • Prompting a text-based AI tool to suggest titles for your paper
  • Generating code with text-based AI

Prohibited uses might include high-risk activities that compromise sensitive data or introduce inaccurate information. You may also choose to limit the use of AI to encourage students to practice specific scholarly, analytical, and creative skills. Here are examples of prohibited uses and their rationale: 

  • Asking AI to suggest relevant articles is prohibited. Some generative AI tools can search the internet, but they can point you to sources that are inaccurate or incomplete. The library catalog or a subject guide is the best place to start your search for reliable information.  
  • Using AI to summarize survey data is prohibited unless that tool meets HIPAA data privacy requirements. 
  • Translating text with AI will undermine your ability to write and speak the language, a key component of this course!

The Library has created a Canvas module that can help you articulate permitted and prohibited uses of AI in your classroom. To add it to your class, log in to Canvas, click the Commons icon in the left-hand navigation bar, and search for “AI in This Course.” When you import it into your class, you will have a chance to review and modify the module before sharing it with your students. 

AI Detection 

While it can be tempting to use an AI detection tool to help enforce your course’s AI policy, AI detectors cannot reliably distinguish between AI-generated output and human-authored text. The University of Illinois has disabled Turnitin’s AI detection feature because it could not consistently discriminate between AI-generated and original writing. Researchers at Stanford have also shown that AI detectors tend to misidentify text written by English language learners and neurodivergent learners as AI-generated. Detection is unreliable in part because many AI tools are trained to outsmart automated forms of detection as part of the training process. 

Because AI detectors are unreliable and can undermine the inclusivity of your classroom, the authors of this LibGuide unanimously discourage their use. 

Academic Integrity, Plagiarism, and Fabrication

There are two key limitations of generative AI tools that may lead to a violation of the academic code of conduct. Because generative AI tools can hallucinate sources, they can inadvertently lead to fabrication, which the Provost’s Office defines as “the falsification or invention of any information, including citations.” Presenting uncited generative AI output as your own ideas can be considered plagiarism: “representing the words, work, or ideas of another as your own.” Both are considered violations of the academic code of conduct. The University of Illinois Office of the Provost’s Students' Quick Reference Guide to Academic Integrity contains a more detailed description of both of these issues. 

  • To help your students avoid plagiarism of AI-generated material (as well as other forms of plagiarism), it may be helpful to point them to the Library's guide for citing sources.  
  • If you suspect that a student used AI for an assignment in a way that was not permitted by the course policy, the Toolkit for Addressing AI Plagiarism provides strategies for navigating that situation. 

Additional Resources