

AI Hallucination

Because generative AI models generate text by predicting which words are statistically likely to come next, rather than by retrieving verified facts, they can produce false information. This is known as AI hallucination.

Some AI tools are more likely to hallucinate than others. For example, GPT-4 (the model behind ChatGPT Plus and Microsoft Copilot) is less prone to hallucination than its predecessor: according to OpenAI, it is "40% more likely to produce factual responses than GPT-3.5 on our internal evaluations."

Models that are grounded in a source of facts, meaning they retrieve and draw on external material such as web pages or documents rather than relying on their training data alone, are also less likely to hallucinate. But they are not perfect, so you still need to evaluate the output.
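
To make grounding concrete, here is a minimal, hypothetical sketch; it is not any vendor's actual pipeline, and the build_grounded_prompt helper and its wording are illustrative assumptions. The idea is that retrieved source text is placed into the prompt so the model answers from that material instead of from memory alone.

```python
# Hypothetical illustration of "grounding": retrieved source passages are
# inserted into the prompt so the model answers from them, not memory alone.


def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Combine retrieved source text with the user's question."""
    sources = "\n\n".join(
        f"[Source {i + 1}] {text}" for i, text in enumerate(passages)
    )
    return (
        "Answer using ONLY the numbered sources below, citing them by number. "
        "If the sources do not contain the answer, say you cannot answer.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    passages = [
        "Generative AI models predict likely word sequences from training data.",
    ]
    print(build_grounded_prompt("How do generative AI models produce text?", passages))
```

Because the model is steered toward the supplied sources, its output can be checked against them, which is why grounded tools can often link to the pages they used.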

Evaluating AI-Generated Content

The following strategies can help you determine the accuracy of AI-generated content.

Compare and verify

Look for other reliable sources to corroborate the AI tool’s claims. Try to find alternative sources that cover the same topic, or even the original context that a claim came from (these are the "Find better coverage" and "Trace claims" moves, the F and T in the SIFT method of evaluating information). If an AI tool is grounded, it may provide links to sources that you can use to verify the claims.

Check citations for hallucinations

You can prompt a generative AI tool to cite its sources, but some AI tools, like ChatGPT, are known to create very convincing fake citations. 

They may even create citations naming real researchers who study the topic related to your prompt. However, the article named in the citation might not exist, or it may not appear in the journal cited. These invented citations are instances of AI hallucination.

Here are some steps you can use to confirm that citations are real:

  1. Enter the title of the publication (e.g., book title or article title) into the U of A Library website or a search engine like Google and search for the item.

  2. If the item does not seem to exist, prompt the AI tool for more details to confirm. For example, “Could you provide an ISBN, ISSN, or DOI for this publication?” These are unique identifiers assigned to publications. (If you receive a DOI, you can also check it automatically; see the sketch after this list.)

  3. If the reference does indeed exist, ensure you consult the source material to verify that the information provided by the AI tool was summarized correctly.
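
If you are comfortable with a little scripting, the identifier check in step 2 can be partly automated. The sketch below is a minimal example, not an official library workflow: it queries the public Crossref REST API (api.crossref.org), which returns a record for registered DOIs. The doi_exists and doi_title helpers and the placeholder DOI are illustrative, and Crossref mainly covers journal articles, so books identified only by ISBN are better checked in a library catalogue.

```python
# Minimal sketch: verify that a DOI supplied by an AI tool actually exists.
# Queries the public Crossref REST API; a 200 response means the DOI is
# registered, while a 404 means Crossref has no record of it.

import requests  # third-party: pip install requests


def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200


def doi_title(doi: str) -> str | None:
    """Fetch the registered title so it can be compared with the AI's citation."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None


if __name__ == "__main__":
    doi = "10.1000/example-doi"  # placeholder: substitute the DOI you were given
    if doi_exists(doi):
        print("DOI resolves. Registered title:", doi_title(doi))
    else:
        print("Crossref has no record of this DOI; the citation may be fabricated.")
```

A DOI that fails to resolve is a strong sign the citation was invented, but a DOI that does resolve still needs step 3: consult the source to confirm it actually says what the AI tool claims.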

Check currency

If you need recent information on a world event or a new development in research, some generative AI tools may not have that information in their training data. In April 2024, when prompted about how recent its training data was, ChatGPT (GPT-3.5) responded that its data extended only to January 2022 and that it could not access more current information.

Using a grounded generative AI tool can help you generate content that reflects current information and events.


Some text on this page was adapted from Evaluating Information Sources: Generative AI and ChatGPT with permission of The University of British Columbia Library.

Some text was also adapted from Fact-Checking is Always Needed by University of Arizona Libraries, which is licensed under CC BY 4.0.