Student Guide to Artificial Intelligence (AI)

Do I Need to Fact-Check AI Information?

AI's Imperfections: The Challenge of "Hallucination"
Artificial Intelligence (AI) models, like ChatGPT, are powerful tools capable of generating human-quality text. However, they sometimes produce incorrect or misleading information, a phenomenon known as "hallucination." This occurs because these models are probabilistic: they predict which word sequences are most likely to follow a prompt, rather than looking up a verified answer.
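The idea of probabilistic generation can be sketched in a few lines of Python. This is a toy illustration, not a real language model: the word list and probabilities below are made up for the example.

```python
import random

# Hypothetical next-word probabilities for the prompt
# "The capital of Australia is ...". A real model works with far
# larger vocabularies, but the principle is the same: it samples
# from likely continuations instead of retrieving a verified fact.
next_word_probs = {
    "Canberra": 0.6,   # correct answer
    "Sydney": 0.3,     # plausible but wrong: a "hallucination" when sampled
    "Melbourne": 0.1,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample one continuation according to the probabilities.
choice = random.choices(words, weights=weights, k=1)[0]
print(choice)
```

Because the model samples from likely-sounding continuations, a fluent but false answer like "Sydney" can appear with real probability, which is why AI output always needs fact-checking.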


The Pitfalls of Source Citation
One common issue with AI models is their tendency to fabricate sources. When asked to provide citations, they may generate seemingly credible references that do not actually exist. This can be particularly problematic for academic research, where accurate sourcing is crucial.


The Evolving Landscape of AI
To address these limitations, researchers and developers are continually working to improve AI systems. Newer tools, such as Microsoft Copilot and Perplexity AI, draw on live internet search results to ground their responses in real-world information. While this grounding can improve accuracy, the quality of the answer still depends on the reliability of the underlying sources, so fact-checking remains essential.
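The grounding approach described above can be sketched in miniature. This is a simplified illustration of the retrieval pattern, not the API of any real product; the URLs, documents, and function names are invented for the example.

```python
# Toy "grounded" generation: first retrieve documents, then answer
# only from what was retrieved, citing the source. All data here is
# hypothetical.
documents = {
    "https://example.edu/handbook": "The library is open 8am-10pm on weekdays.",
    "https://example.edu/hours": "Weekend hours are 10am-6pm.",
}

def retrieve(query: str) -> list[tuple[str, str]]:
    """Naive keyword match standing in for a real search engine."""
    return [
        (url, text)
        for url, text in documents.items()
        if any(word in text.lower() for word in query.lower().split())
    ]

def grounded_answer(query: str) -> str:
    """Answer only from retrieved text, and cite where it came from."""
    hits = retrieve(query)
    if not hits:
        return "No sources found."
    url, text = hits[0]
    return f"{text} (source: {url})"

print(grounded_answer("weekend hours"))
```

Note the limitation the guide points out: the answer is only as trustworthy as the documents in the search index, so an unreliable source still produces an unreliable, if well-cited, response.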