University Library, University of Illinois at Urbana-Champaign

Introduction to Generative AI

This library guide is a UIUC campus resource to read and reference for instructional, professional, and personal learning. It is updated each semester. Last Updated: March 2024

Hallucinations

AI hallucinations occur when Generative AI tools produce incorrect, misleading, or fabricated content. Remember that large language models, or LLMs, are trained on massive amounts of data to find patterns; they, in turn, use these patterns to predict the next word and generate new content. The fabricated content is presented as though it were factual, which can make AI hallucinations difficult to identify. A common AI hallucination in higher education happens when users prompt text tools like ChatGPT or Gemini (previously Google Bard) to cite references or peer-reviewed sources. Rather than retrieving real publications, these tools draw on patterns in their training data and generate plausible-looking titles, authors, and content for sources that do not actually exist.
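
To see why a hallucinated citation can read so convincingly, the loop below is a minimal sketch of next-word prediction, assuming the Hugging Face transformers library, the publicly available gpt2 model, and a made-up prompt. The model repeatedly scores every candidate next word and samples a likely one; no step checks whether the resulting text is true.

# Minimal sketch of LLM text generation (assumes the "transformers" and "torch"
# packages and the public "gpt2" model; the prompt is a hypothetical example).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A peer-reviewed article about AI hallucinations is:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(30):
    with torch.no_grad():
        logits = model(input_ids).logits[:, -1, :]       # scores for every candidate next token
    probs = torch.softmax(logits, dim=-1)
    next_id = torch.multinomial(probs, num_samples=1)     # sample a plausible-sounding token
    input_ids = torch.cat([input_ids, next_id], dim=-1)   # append it and keep predicting

print(tokenizer.decode(input_ids[0]))

The printed continuation may name an author, title, and journal that look real but were never published: a hallucinated citation produced purely by pattern prediction.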

Image-based and sound-based AI is also susceptible to hallucination. Instead of stringing together words that do not belong together, an image generator places pixels in ways that may not reflect the object it is trying to depict. This is why image generation tools often add extra fingers to hands: the model can see that fingers follow a particular pattern, but it does not understand the anatomy of a hand. Similarly, sound-based AI may introduce audible noise because many models first generate a spectrogram (an image-like representation of sound) and then translate that visualization back into a waveform, so errors in the generated pixels become artifacts in the audio.
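
The spectrogram round trip described above can be sketched with the librosa audio library; the clip used here, librosa.ex("trumpet"), is just a bundled stand-in, not any particular model's output. The audio is turned into an image-like magnitude spectrogram, and translating that image back into a waveform already loses information, which is why changes a generative model makes to the spectrogram's pixels come out as audible noise.

# Minimal sketch of the spectrogram round trip (assumes the "librosa" and
# "numpy" packages; the trumpet clip is a bundled example, not model output).
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))    # load a short sample clip
magnitude = np.abs(librosa.stft(y))            # image-like view: time vs. frequency "pixels"
y_rebuilt = librosa.griffinlim(magnitude)      # estimate a waveform from the image alone

# y_rebuilt only approximates y, because the image drops phase information that
# must be guessed; a model that also alters the pixels adds further audible noise.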

Examples of Hallucinations