AI Chatbot Glossary
What Is Hallucination?
A hallucination occurs when an AI model generates information that sounds plausible but is factually incorrect. It is a key risk in generative AI chatbots, mitigated by RAG and strict prompts.
Definition
A hallucination is an output in which an AI model generates information that sounds plausible but is factually incorrect or fabricated. Hallucinations are a known challenge in generative AI chatbots and can be mitigated by grounding responses in a verified knowledge base via RAG and by setting strict instructions.
Why Hallucination Matters for AI Chatbots
Hallucinations are the single biggest reputational risk of deploying an AI chatbot: a confidently wrong answer about shipping, pricing, or policy can cost you a customer. The fix is not to "trust the model more" but to force the model to answer only from your real content via RAG, and to hand off to a human when it is unsure.
Related Terms
RAG (Retrieval-Augmented Generation)
RAG is a technique that enhances AI responses by retrieving relevant information from a knowledge base before generating an answer — reducing hallucinations and grounding replies in real data.
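A toy sketch of the RAG flow described here: score documents against the query, take the top matches, and build a prompt that confines the model to that context. The word-overlap scoring and prompt wording are illustrative assumptions; production systems typically use embedding-based similarity search.

```python
# Sample documents standing in for a real knowledge base.
DOCS = [
    "Standard shipping takes 3-5 business days.",
    "Orders over $50 ship free.",
    "Items can be returned within 30 days of delivery.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of lowercase words the query and doc share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(retrieve(query))
    return (
        "Answer ONLY from the context below. If the answer is not there, "
        "say you don't know.\n\nContext:\n" + context + "\n\nQuestion: " + query
    )

print(build_prompt("how much does shipping cost"))
```

Retrieval happens before generation, so the model's reply is constrained to verified content instead of its training data.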
Large Language Model (LLM)
An LLM is an AI model trained on vast amounts of text that can understand and generate human language. Model families like GPT, Claude, and Gemini power modern chatbots.
Prompt Engineering
Prompt engineering is the art of crafting instructions that guide AI models to produce desired outputs — defining chatbot personality, knowledge boundaries, and response rules.
Knowledge Base
A knowledge base is a structured collection of information (documents, FAQs, policies) that a chatbot uses to find accurate answers — searched in real time to ground responses.
Try Chatonbo free
Deploy an AI chatbot on your website in under 5 minutes — no credit card required.
Get Started Free