AI Bias

Definition

AI bias refers to systematic errors in an AI system's outputs that reflect, and can amplify, human biases present in the training data, the model design, or the context in which the system is used.

AI bias manifests in multiple ways. Representation bias occurs when training data over- or under-represents certain groups, leading to lower accuracy for the under-represented groups. Measurement bias occurs when the metrics or proxy variables used to train and evaluate a model systematically favor certain outcomes. Historical bias occurs when training data encodes past inequities that the model then perpetuates.
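Representation bias can be made concrete with a toy sketch. The data below are invented for illustration: group "A" dominates the training set, so a naive model that predicts the overall majority label scores well for group A while failing group B entirely.

```python
from collections import Counter

# Toy dataset of (group, label) pairs; group "A" is over-represented
# (8 of 10 rows). These values are illustrative, not real data.
data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 1),
    ("A", 1), ("A", 1), ("A", 0), ("A", 0),
    ("B", 0), ("B", 0),
]

# A trivially "trained" model: always predict the most common label overall.
majority_label = Counter(label for _, label in data).most_common(1)[0][0]

def accuracy(group):
    """Fraction of rows in `group` that the majority-label model gets right."""
    rows = [(g, y) for g, y in data if g == group]
    return sum(1 for _, y in rows if y == majority_label) / len(rows)

print(accuracy("A"))  # 0.75 -- the model mostly works for the dominant group
print(accuracy("B"))  # 0.0  -- group B's pattern is ignored entirely
```

The aggregate accuracy (60%) hides the disparity; only disaggregating by group reveals it.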

Unlike hallucination, AI bias does not mean the model's output is "wrong" in a factual sense; the model may be accurately reflecting patterns in its training data. The problem arises when those patterns encode inequity or cause harm.

Understanding AI bias is essential to AI literacy. When AI informs decisions that affect people (hiring, lending, medical triage), independently evaluate outputs for potential bias. Never assume AI outputs are "objective" simply because they are algorithmic.
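One simple, widely used evaluation is comparing selection rates across groups and flagging a large gap. The sketch below applies the "four-fifths rule" from US employment-selection guidance as a rough screen; the decision data are hypothetical, and a real audit would need far more than this single ratio.

```python
# Hypothetical binary decisions (1 = selected, 0 = rejected) per group.
decisions = {
    "A": [1, 1, 1, 0, 1, 1, 0, 1],
    "B": [1, 0, 0, 0, 1, 0, 0, 0],
}

# Selection rate for each group: fraction of members selected.
rates = {g: sum(d) / len(d) for g, d in decisions.items()}

# Adverse-impact ratio: worst-off group's rate vs. best-off group's rate.
# Below 0.8 is the conventional four-fifths-rule warning threshold.
ratio = min(rates.values()) / max(rates.values())

print(rates)
print(ratio < 0.8)  # True here: this gap warrants closer scrutiny
```

A screen like this catches gross disparities but says nothing about why they occur; it is a starting point for investigation, not a verdict.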