Technology
Hallucination
Hallucination occurs when an LLM confidently outputs false information: fabricated citations, made-up statistics, invented quotes. It is the defining failure mode of LLMs and the primary risk in production deployment.
More detail
Mitigations: (1) RAG, so the LLM grounds answers in your own data; (2) asking for confidence scores plus an explicit abstention option; (3) human-in-the-loop review on customer-facing outputs; (4) structured outputs with verifiable fields; (5) instructing the model to say "I don't know" rather than guess. A minimal sketch combining several of these follows below. Aiprosol's workflows never let AI ship a customer-facing email or filing without human review, precisely because hallucination risk is non-zero.
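To make the combination concrete, here is a minimal, hypothetical Python sketch (not Aiprosol's actual implementation) showing mitigations (2), (3), (4), and (5) working together: the prompt demands structured JSON with a confidence score and an abstention path, and a gating function routes anything low-confidence, ungrounded, or malformed to a human. The names SYSTEM_PROMPT, CONFIDENCE_FLOOR, and needs_human_review are illustrative assumptions.

```python
import json

# Hypothetical system prompt: require grounded, structured answers
# with a confidence score and an explicit "I don't know" abstention.
SYSTEM_PROMPT = """Answer using ONLY the provided context.
Respond in JSON: {"answer": str, "confidence": float 0-1, "sources": [str]}.
If the context does not contain the answer, set "answer" to "I don't know",
"confidence" to 0.0, and "sources" to []."""

CONFIDENCE_FLOOR = 0.7  # assumed threshold; tune per use case


def needs_human_review(raw_response: str) -> bool:
    """Route low-confidence, ungrounded, or malformed answers to a human."""
    try:
        parsed = json.loads(raw_response)
    except json.JSONDecodeError:
        return True  # unparseable output is never shipped automatically
    if parsed.get("answer") == "I don't know":
        return True  # model abstained; a human decides the next step
    if parsed.get("confidence", 0.0) < CONFIDENCE_FLOOR:
        return True  # model is unsure; verify before sending
    if not parsed.get("sources"):
        return True  # no grounding citation means no automatic send
    return False


# A well-formed, grounded answer passes; anything else is escalated.
print(needs_human_review(
    '{"answer": "Q3 revenue was $1.2M", "confidence": 0.9, "sources": ["report.pdf"]}'
))  # False
print(needs_human_review(
    '{"answer": "I don\'t know", "confidence": 0.0, "sources": []}'
))  # True
```

The design point is that the check is deterministic code, not another LLM call: the model's own confidence is a hint, but the hard gate (human review before anything customer-facing ships) sits outside the model.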
