AI Hallucination

Definition

AI hallucination is the phenomenon in which a large language model generates factually incorrect or fabricated information while presenting it with apparent confidence. The term draws an analogy to human hallucination: perceiving something that has no basis in reality.

AI hallucination occurs because language models are trained to predict statistically plausible text, not to retrieve verified facts. When a model encounters a question outside its training data, it generates a plausible-sounding response rather than acknowledging ignorance.
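This mechanism can be illustrated with a toy next-word predictor. The sketch below trains a tiny bigram model on an invented corpus; the corpus, prompt, and function names are assumptions for illustration, not how production language models work. Asked about Spain, the model confidently completes with the continuation that was statistically most frequent in training, producing a fluent but wrong answer.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it learns which word most often follows
# each word in a tiny, invented training corpus. It has no notion of
# truth, only of statistical frequency.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome . "
    "the capital of france is paris . "
).split()

next_words = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_words[a][b] += 1

def complete(prompt, steps=1):
    """Greedily append the statistically most frequent next word."""
    words = prompt.split()
    for _ in range(steps):
        counts = next_words.get(words[-1])
        if not counts:
            break  # unseen word: this toy model simply stops
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

# "paris" follows "is" twice in training, "madrid" only once, so the
# model completes a question about Spain with the wrong capital.
print(complete("the capital of spain is"))  # the capital of spain is paris
```

The answer is fluent and confidently stated precisely because it mirrors the most common pattern in the training data, which is the failure mode the paragraph above describes.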

Common hallucination patterns include: inventing academic citations, fabricating specific statistics, generating incorrect but plausible historical facts, and producing confident answers to questions with no definitive answer.

A 2025 BBC study found that 45% of AI-generated responses contain at least one factual inaccuracy. Hallucination rates vary by domain; they are highest for specific citations, statistics, and current events.

To protect against hallucination: verify statistics against primary sources, independently locate any citations the AI provides, and prefer AI tools that display their sources.
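The verification steps above can be partly mechanized by flagging the claim types most prone to hallucination. The sketch below uses simple regular expressions to pull citation-like and percentage-like strings out of a model's answer so a human can check them; the patterns, sample text, and function name are illustrative assumptions, not a robust fact-checking method.

```python
import re

# Minimal sketch: flag citation-like references and percentage
# statistics in model output for manual verification against
# primary sources. Patterns here are deliberately simple.
CITATION_RE = re.compile(r"\(([A-Z][a-zA-Z]+(?: et al\.)?),?\s+(\d{4})\)")
STAT_RE = re.compile(r"\b\d+(?:\.\d+)?%")

def flag_claims(text):
    """Return (citations, statistics) found in the text."""
    citations = ["{} {}".format(a, y) for a, y in CITATION_RE.findall(text)]
    stats = STAT_RE.findall(text)
    return citations, stats

# Hypothetical model output containing both claim types.
answer = ("Model accuracy improved by 45% in recent years "
          "(Smith et al., 2023), while costs fell 12%.")
cites, stats = flag_claims(answer)
print(cites)  # ['Smith et al. 2023']
print(stats)  # ['45%', '12%']
```

Each flagged string is a prompt to go find the primary source; the absence of flags does not mean the text is accurate, only that it contains no claims these crude patterns recognize.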