You may have heard the term "hallucination" in movies, in the news, or in everyday conversation. If you don't know what it means, a hallucination is a sensory experience of hearing, smelling, feeling, tasting, or seeing things that appear real but do not exist in the real world. Sounds like a purely human problem, right? But pause for a moment and ask: can computers hallucinate too? It sounds unreal, but it is true. On the one hand, people rely heavily on AI models for their day-to-day tasks; on the other hand, serious concerns are arising about the authenticity of the information these models produce, because misleading results and false information eventually lead to incorrect decision-making.
Companies like OpenAI, Google, Microsoft, Meta, and DeepSeek have introduced powerful new AI models with built-in reasoning systems, and they are now working hard to cure the disease of AI hallucination, because their chatbots keep generating errors despite intensive training and access to enormous databases. Amr Awadallah, Chief Executive of Vectara, a start-up that builds AI tools for businesses, said, "Despite our best efforts, they will always hallucinate," adding, "That will never go away."
Types of AI Hallucinations
Let's talk about the types of AI hallucination you are most likely to come across.
- Sentence Contradiction: This type of AI hallucination happens when an LLM generates a sentence that contradicts one of its own earlier sentences. For example, if you prompt, "Write a description of a landscape in four-word sentences," it may respond with "The grass was green. The mountains were blue. The river was purple. The grass was brown."
- Prompt Contradiction: This type of AI hallucination occurs when the output contradicts the prompt that was given to the chatbot. Here's an example: suppose you prompted "Write a birthday card for my niece," and the chatbot responded with "Happy anniversary, Mom and Dad!"
- Factual Contradiction: This type occurs when the AI chatbot presents fictitious information as factual data. For example, if you prompt, "Name three cities in the United States," it may hallucinate cities that are not in the US and output "New York, London, Toronto."
- Irrelevant or random hallucinations: This type of AI hallucination shows up when the model generates random responses that have little or no relation to the input. For example, the prompt "Describe London to me" might produce the output "London is a city in England. Cats need to be fed at least once a day."
Reasons behind AI Hallucinations
You may wonder why AI models hallucinate and what the reasons behind it could be. Here are some of the most likely reasons why even these powerful AI systems hallucinate so often.
- Poor Data Quality: The most commonly cited cause of hallucinations is poor training data. LLMs are trained on huge datasets, but those datasets can contain errors, noise, biases, and other inconsistencies, which eventually lead the model to produce incorrect responses.
- Generation Method: The generation and training methods an AI tool uses can introduce bias or faulty decoding, which eventually leads to fabricated responses.
- Input Context: Sometimes the cause is a flawed or erroneous prompt from the user. Many users provide unclear or inconsistent inputs, which can trigger hallucinations in the AI model and result in false or misleading outputs.
Some Examples of AI Hallucinations
- In February 2023, Google's chatbot Bard (since renamed Gemini) generated an incorrect claim about the James Webb Space Telescope (JWST) when a user prompted it with "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?" It replied that the JWST had taken the very first picture of an exoplanet outside our solar system. That information was completely false: the first such photo was actually taken by the European Southern Observatory's Very Large Telescope (VLT) in 2004.
- In late 2022, Meta's open-source LLM, Galactica, did the same thing; it generated inaccurate and biased results. Galactica was trained on millions of pieces of scientific material from textbooks, papers, encyclopedias, websites, and even lecture notes, yet it still failed when asked to summarize a New York University professor's research work. People heavily criticised Galactica for this and raised serious questions about the implications of AI models being used by people who know little about a topic and rely on the chatbot for support.
- Another example is ChatGPT, one of the most famous chatbots in the world. It showed symptoms of AI hallucination in June 2023, when it made false and potentially libelous statements about a Georgia-based radio host. In February 2024, it acted up again, unexpectedly switching languages and producing looping responses.
Detecting AI Hallucinations
Knowing about AI hallucinations is not enough; a user must also be able to detect when an AI is hallucinating. Remember that these systems are LLMs trained to answer every prompt by recognising patterns, not by verifying facts. Fact-checking against a reliable source is therefore the most basic way to detect a hallucination. You can also ask the model to self-check its answer, estimate how likely the answer is to be correct, or highlight the parts it is least sure about.
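Here is a minimal sketch of that self-check idea, assuming the OpenAI Python SDK (openai>=1.0), an API key in the environment, and a chat model such as gpt-4o-mini; any other provider's chat API could be swapped in the same way.

```python
from openai import OpenAI  # assumes openai>=1.0 is installed and OPENAI_API_KEY is set

client = OpenAI()

# An answer we suspect may be a hallucination (the Bard/JWST example from above).
answer = "The James Webb Space Telescope took the very first picture of an exoplanet."

# Ask the model to self-check the statement and rate its own confidence.
check = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a careful fact-checker."},
        {
            "role": "user",
            "content": (
                "Rate from 0 to 1 how confident you are that this statement is "
                "factually correct, and highlight any part that may be wrong:\n\n"
                + answer
            ),
        },
    ],
    temperature=0,  # keep the check itself as deterministic as possible
)

print(check.choices[0].message.content)
```

A self-check like this is only a heuristic: the same model can hallucinate while grading itself, so treat a low confidence score as a signal to verify against an external source rather than as proof either way.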
Preventing AI Hallucinations
Now that you understand what causes AI hallucination and how to detect it, is there any way to prevent it while interacting with AI? Yes, there are a number of things you can do to minimise hallucinations, and the easiest ones are given below.
- Use clear and specific prompts: Guide the model with clear instructions and add extra context, such as specific numbers, relevant data sources, and constraints on the expected output.
- Filtering and ranking strategies: Large language models expose sampling parameters, such as temperature and top-p, that users can adjust to their needs. Lowering them reduces the randomness of the output, which in turn leaves less room for hallucinations.
- Multishot prompting: Provide several examples of the kind of result you want. The examples help the model infer the expected format and content, so it generates more accurate results (see the sketch after this list).
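The sketch below combines all three ideas, again assuming the OpenAI Python SDK and a model name like gpt-4o-mini (both assumptions; the same pattern applies to any chat API): a clear system instruction, a few multishot examples, and lowered sampling parameters.

```python
from openai import OpenAI  # assumes openai>=1.0 is installed and OPENAI_API_KEY is set

client = OpenAI()

# Multishot ("few-shot") examples showing exactly the kind of answer we expect.
few_shot = [
    {"role": "user", "content": "Name three cities in the United States."},
    {"role": "assistant", "content": "New York, Chicago, Houston."},
    {"role": "user", "content": "Name three cities in Japan."},
    {"role": "assistant", "content": "Tokyo, Osaka, Kyoto."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat-capable model works
    messages=[
        # A clear, specific instruction that also tells the model how to handle uncertainty.
        {"role": "system", "content": "Answer only with facts you are sure of. "
                                      "If you are unsure, say 'I don't know'."},
        *few_shot,
        {"role": "user", "content": "Name three cities in Canada."},
    ],
    temperature=0.2,  # lower temperature reduces randomness in the output
    top_p=0.9,        # nucleus sampling cap, another knob that limits unlikely tokens
)

print(response.choices[0].message.content)
```

None of these settings guarantee a correct answer, but together they narrow the model's options and make a confident-sounding fabrication less likely.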
Conclusion
AI hallucination is one of the leading issues users face today, and the companies behind these AI tools are still searching for a practical cure through better training of their LLMs. Hopefully, this article has helped you understand AI hallucinations so you can stay attentive to detecting and preventing them.