KEY LEARNINGS
  • AI hallucinations differ from standard software bugs because they fail silently: the system presents false information with the same confidence as accurate output, so there is no crash or error message to flag the failure.
  • Large Language Models do not consult a database of facts; they generate text one token at a time by sampling from statistical patterns learned during training (see the sampling sketch after this list).
  • The 'capability-reliability gap' means models can perform complex tasks like legal drafting while failing at basic factual accuracy.
  • Retrieval-Augmented Generation (RAG) reduces hallucination by grounding the AI's responses in retrieved, trusted documents (see the retrieval sketch after this list).
  • Governance requires treating all AI outputs as drafts that must be verified, rather than as authoritative sources of truth (see the review-gate sketch after this list).
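
A minimal sketch of that generation step, in Python: the next token is sampled from a probability distribution over the vocabulary, not looked up in a knowledge base. The vocabulary and logit scores below are toy values invented for illustration.

```python
# Toy illustration of next-token generation: the model scores every
# candidate token, converts scores to probabilities, and samples one.
# Nothing in this step checks the chosen token against ground truth.
import math
import random

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "London", "Rome", "Berlin"]   # illustrative vocabulary
logits = [4.1, 1.3, 0.9, 0.2]                   # invented model scores

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)
# A plausible-but-wrong token can still be sampled; fluency, not truth,
# is what the distribution encodes.
```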
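A minimal sketch of the RAG pattern, assuming a toy keyword-overlap retriever over a two-document corpus; the corpus, scoring function, and prompt wording are illustrative assumptions, not any particular library's API.

```python
# RAG in two steps: (1) retrieve the most relevant trusted passages,
# (2) build a prompt that restricts the model to those passages.
corpus = {
    "policy.txt": "Refunds are available within 30 days of purchase.",
    "faq.txt": "Support hours are 9am to 5pm, Monday through Friday.",
}

def retrieve(query, docs, k=1):
    # Toy relevance score: number of words shared with the query.
    def score(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    ranked = sorted(docs.items(), key=lambda kv: score(kv[1]), reverse=True)
    return ranked[:k]

query = "When can a customer get a refund?"
context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query, corpus))

prompt = (
    "Answer using ONLY the sources below. If they do not contain the "
    "answer, say so.\n\nSources:\n" + context + "\n\nQuestion: " + query
)
print(prompt)  # This grounded prompt is what would be sent to the model.
```

Grounding helps because the model is steered toward restating retrieved text rather than free-associating from its training data, though it reduces hallucination rather than eliminating it.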
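A minimal sketch of a "draft until verified" gate, built around a hypothetical AIDraft record and publish() helper (both invented for illustration): nothing the model produced can be published until a named human reviewer signs off.

```python
# Governance gate: AI output starts life as an unverified draft and a
# named human must approve it before it can be released.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDraft:
    text: str
    verified: bool = False
    reviewer: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        # A named person takes responsibility for the content.
        self.verified = True
        self.reviewer = reviewer

def publish(draft: AIDraft) -> str:
    if not draft.verified:
        raise ValueError("Refusing to publish an unverified AI draft.")
    return draft.text

# Example draft containing an unchecked (possibly invented) citation.
draft = AIDraft(text="The cited case, Smith v. Jones (2019), held that ...")
try:
    publish(draft)  # Raises: no human has checked the citation yet.
except ValueError as err:
    print(err)

draft.approve(reviewer="j.doe")
print(publish(draft))  # Allowed only after human sign-off.
```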