Generative AI uses complex algorithms to produce highly realistic content, which can be difficult to distinguish from content created by a human. Because of this, critical thinking skills are essential to evaluate the authenticity and accuracy of AI-generated content.
The same principles for evaluating information sources apply to generative AI. Methods such as SIFT can help you determine whether the generated information is reliable. However, some of the questions we typically ask about sources are harder to answer when consulting generative AI, because the process these tools use to arrive at their answers is not transparent.
So how can you assess the information generative AI gives you?
Look for other reliable sources to corroborate the AI’s claims. Try to find alternative sources that cover the same topic, or even the original context a claim came from (the F and T steps of SIFT: Find better coverage and Trace claims to their original context).
You can ask a generative AI tool to cite its sources, but these tools are known to create very convincing fake citations.
It can even create citations that have the names of real researchers who study the topic related to your prompt. However, the article named in the citation might not exist or may not be from the journal it cites. These invented citations are referred to as “hallucinations.”
Here are steps you can use to confirm that a citation is real:
1. Enter the title of the publication (e.g., book title, article title) into the U of A Library website or a search engine like Google and search for the item.
2. If the item does not seem to exist, prompt the AI tool for more details, for example: “Could you provide an ISBN, ISSN, or DOI for this publication?” These are unique identifiers assigned to publications, so they are easy to look up.
3. If the reference does indeed exist, consult the source material itself to verify that the AI tool summarized the information correctly.
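If you have many AI-supplied identifiers to check, a quick format check can flag obviously malformed ones before you search for each. The sketch below (function names are illustrative, not from any standard library) validates the ISBN-13 check digit and a common DOI pattern. Note that a well-formed identifier can still be fabricated, so the catalogue or search-engine lookup in step 1 is still essential.

```python
import re

# Format checks only: a syntactically valid identifier can still be
# invented by an AI tool, so always confirm the record at the U of A
# Library, doi.org, or another catalogue.

def looks_like_doi(s: str) -> bool:
    """Rough syntactic check for a DOI, e.g. '10.1234/abc123'."""
    return re.fullmatch(r"10\.\d{4,9}/\S+", s) is not None

def is_valid_isbn13(isbn: str) -> bool:
    """Verify the ISBN-13 check digit (digits weighted 1, 3, 1, 3, ...)."""
    digits = [c for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits))
    return total % 10 == 0  # checksum of all 13 digits must be 0 mod 10

print(looks_like_doi("10.1000/xyz123"))      # True
print(is_valid_isbn13("978-0-306-40615-7"))  # True
print(is_valid_isbn13("978-0-306-40615-8"))  # False (bad check digit)
```

A citation that fails these checks is almost certainly fabricated; one that passes has merely cleared the lowest bar and still needs to be located and read.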
The date when a document was created, edited, updated, or revised is an important factor in evaluating any information source. If you need recent information on a world event or a new development in research, generative AI may not have that information in its training data. In November 2023, when prompted about how recent its training data is, ChatGPT (GPT-3.5) responded that its data extends to January 2022 and that it cannot access more current information.