Generative AI relies on pretrained models to reduce the computing power needed to create text, images, and other outputs. While pretraining makes tools faster and easier to use, it means that some tools do not have access to information in real time. For example, the free version of ChatGPT was trained on text written before September 2021 and has no context for events after that date. Even when a model is connected to a search engine, it can only cite sources available on the internet. As any historian will tell you, there are many sources that are not available digitally, and the internet is an expansive, but incomplete, representation of human knowledge and creative works. Searching the library’s catalog offers more robust access to scholarly works.
Likewise, image- and audio-based AI tools reflect the limits of their training sets. An image recognition tool trained on pictures from the early 2000s onward might misidentify objects in photos from the 1890s.