2026-03-15

What Is AI Hallucination? Why AI Makes Up Facts

AI hallucination occurs when AI models generate confident but false information. Learn what causes it, see famous examples, and find out how to protect yourself.

What Is AI Hallucination?


AI hallucination refers to the phenomenon where large language models (LLMs) like ChatGPT, Claude, or Gemini generate information that is factually incorrect — yet presented with complete confidence. The term "hallucination" comes from the way the AI produces outputs that have no basis in reality, similar to how a human might see things that aren't there.


A 2025 BBC study found that 45% of AI-generated responses contain at least one factual inaccuracy. This isn't a niche bug — it's a fundamental characteristic of how these systems work.


Why Does AI Hallucinate?


Language models predict the next word based on statistical patterns in training data. They don't "know" facts in the way humans do — they approximate what a plausible answer looks like. When a model encounters a question outside its training data, it will often generate a plausible-sounding but fabricated answer rather than saying "I don't know."
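
To make that prediction mechanism concrete, here is a deliberately toy sketch in Python. The tokens and probabilities are invented for illustration and bear no relation to any real model; the point is only that sampling the most statistically plausible continuation is a different operation from checking whether that continuation is true.

```python
import random

# Toy illustration, not a real LLM: a language model assigns a probability to
# each candidate next token and samples one. Nothing in this step consults a
# fact database, so a fluent continuation and a true one look identical here.
next_token_probs = {
    "1969": 0.55,  # the correct year, seen most often in training data
    "1971": 0.25,  # plausible-sounding but false
    "1968": 0.20,  # plausible-sounding but false
}

def sample_next_token(probs):
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The first crewed Moon landing took place in "
print(prompt + sample_next_token(next_token_probs))
# In this toy setup, roughly 45% of runs print a confident but false year,
# with no warning that anything went wrong.
```

A real model performs the same kind of weighted pick over a vocabulary of tens of thousands of tokens, one token at a time, which is why confident-sounding fabrication is a built-in failure mode rather than an occasional glitch.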


Common triggers for hallucination include:

  • Asking about very recent events (past the training cutoff)
  • Requesting specific statistics, citations, or URLs
  • Asking about niche topics with limited training data
  • Framing questions that presuppose a false premise

Famous Examples of AI Hallucination


In 2023, a New York lawyer submitted a legal brief with citations to cases that didn't exist — all generated by ChatGPT. The lawyer faced sanctions for failing to verify the AI's output.


In another widely reported case, a medical professional used AI to draft a patient summary that included fabricated test results and medications the patient never received.


These aren't edge cases. They represent a systemic risk when people treat AI output as ground truth.


How to Protect Yourself


  • Always verify statistics and citations: AI-generated numbers are frequently invented. Cross-reference any specific data point.
  • Ask AI to explain its reasoning: Hallucinations often break down when the model is asked to justify its answer step by step.
  • Use AI tools that cite sources: Platforms like Perplexity AI provide source links. Always check them (a minimal first-pass check is sketched after this list).
  • Take our [AI Reliance Test](/) to see how susceptible you are to over-trusting AI.
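
As one concrete way to act on the "always check them" advice, the sketch below is a minimal first-pass filter, assuming you have copied the cited URLs out of an AI answer (the URLs shown are placeholders). It only confirms that a link resolves at all: a dead link is an immediate red flag for a fabricated citation, while a live link still has to be read to confirm it actually supports the claim.

```python
import urllib.request

# Placeholder URLs standing in for citations copied from an AI-generated answer.
cited_urls = [
    "https://www.example.com/real-report",
    "https://www.example.com/made-up-study-2024",
]

def link_resolves(url, timeout=10):
    """Return True if the URL answers a HEAD request without an error."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (OSError, ValueError):
        # OSError covers URLError, HTTPError, and timeouts; ValueError covers
        # malformed URLs.
        return False

for url in cited_urls:
    flag = "resolves" if link_resolves(url) else "DOES NOT RESOLVE"
    print(f"{url}: {flag}")
```

The same habit applies to statistics: ask the model where a number came from, then locate the number in that source yourself rather than trusting the summary.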

Understanding hallucination is the first step to using AI responsibly. The goal isn't to avoid AI — it's to use it with appropriate skepticism.

Ready to check your own AI trust level?

Take the Free AI Reliance Test →