March 31, 2025

AI hallucinations: the real reasons explained (in 2025)

Discover what causes AI hallucinations—the phenomenon where AI presents fabricated information as fact—and how to spot and prevent them for more accurate, trustworthy work.
Briana Brownell
In this article
TL;DR: Tips for spotting and preventing AI hallucinations
Real-world impacts of AI hallucinations
What are AI hallucinations?
Advanced technical methods to prevent AI hallucinations
The two types of AI hallucinations explained
Why do AI hallucinations happen? The technical explanation
How to identify and prevent AI hallucinations
Understand AI training data limitations
Use clear prompting
Avoid requests requiring holistic thinking
Identify plausible-sounding AI nonsense
Question suspiciously perfect AI responses
Verify AI-generated summaries
Cross-check key facts
Verify each step in AI reasoning processes
The future of AI hallucinations
FAQs
What can cause AI hallucinations?
How can I mitigate AI hallucinations for crucial tasks like healthcare or legal advice?

Two lawyers walked into a courtroom with a legal brief written by ChatGPT, complete with six court cases supporting their argument. Plot twist: none of the cases ChatGPT gave as examples were real.

This wasn't just an awkward moment—it was a stark reminder of AI's most persistent quirk: hallucinations. Even the most advanced AI tools can confidently make things up, presenting fiction as fact with unwavering certainty. And if you're not careful, these fabrications can slip right into your work, potentially undermining everything you create.

TL;DR: Tips for spotting and preventing AI hallucinations

  • Mind the training set's limitations. Asking about recent events or niche topics often leads to hallucinations.
  • Use clear prompting. Well-formed prompts go a long way.
  • Be careful about asking it to think holistically. LLMs can't look backwards, so they can't think about earlier answers.
  • Watch for "plausible-sounding nonsense." LLMs are very good at sounding convincing, even if it's nonsense.
  • If it's "too good to be true," it probably is. LLMs tend to err on the side of pleasing the user, even if the answer is wrong.
  • Be careful with summaries. Always fact check them using the source material.
  • Cross-check key facts. Precise figures, names, and dates are often fabricated—always double check them.
  • Be cautious with multi-step reasoning. Every step an LLM takes introduces a new chance for error.

But what are hallucinations, exactly, and why do they happen? And how can regular users of AI tools prevent them?

Real-world impacts of AI hallucinations

AI hallucinations can create tangible economic harm, particularly in finance, where inaccurate predictions can mislead investors, resulting in significant financial losses. For example, erroneous financial reporting can destabilize markets, placing both individual and institutional investors at risk [Research on financial impacts]. In the legal sector, incorrect AI-generated contract details or case references can lead to costly disputes and regulatory penalties [AI-induced legal risk study]. Healthcare systems are also susceptible, as faulty AI-driven diagnoses can harm patient outcomes by leading to ineffective treatments [Healthcare caution]. Ultimately, these errors erode trust in AI, prompting more scrutiny for compliance and oversight. By measuring both short-term and long-term damages, organizations can develop more targeted mitigation strategies that preserve accuracy and public confidence.

What are AI hallucinations?

Hallucinations are instances where an AI model generates information that's either factually incorrect or fails to follow your instructions. These AI fabrications can range from minor inaccuracies to completely invented facts presented as truth.

Why do hallucinations happen? Well, most articles on the topic will give you this short answer: LLMs are built to be prediction machines; they're trying to predict what should come next. They might be grammar whizzes, but the inherent randomness baked into them means they sometimes string together plausible-sounding nonsense. According to recent studies, even the most advanced AI models still have hallucination rates of approximately 3-5%, showing this remains a persistent challenge.

That's fine for a TL;DR, but the long answer is much more informative. If you understand it, you'll be better able to spot places where hallucinations are most likely to crop up—and make sure your AI-assisted work doesn't make international news.

Advanced technical methods to prevent AI hallucinations

Strategies like retrieval-augmented generation (RAG) combine text generation with external knowledge bases, ensuring outputs stay grounded in factual data [RAG approach]. Data governance is equally vital, as implementing structured protocols around data sourcing, cleaning, and relevance can limit the emergence of fabricated information [Value of data governance]. These approaches address the root causes of hallucinations by minimizing bias and improving the model’s overall reliability. Additional steps, such as using reflection techniques and prompt engineering, have shown measurable success in managing complex queries without generating false responses [Prompt engineering outcomes]. Organizations adopting these methods see fewer hallucinations and greater user confidence, particularly in high-stakes industries like healthcare and law. As new AI research continues to emerge, experts project ongoing refinements that will further reduce hallucination rates.
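
If you're curious what the RAG pattern looks like in code, here's a minimal sketch in Python. The `llm_complete` stub and the keyword-overlap retriever are stand-ins made up for illustration (real systems use an embedding model and a vector database), but the grounding idea is the same: retrieve relevant text first, then tell the model to answer only from what was retrieved.

```python
# A minimal sketch of retrieval-augmented generation (RAG).
# The retriever and llm_complete() below are illustrative placeholders,
# not a production setup.

def llm_complete(prompt: str) -> str:
    """Stub for an LLM call; swap in your actual model or API client here."""
    raise NotImplementedError("plug in your LLM client")

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def answer_with_rag(query: str, documents: list[str]) -> str:
    """Ground the answer in retrieved context instead of the model's memory."""
    context = "\n\n".join(retrieve(query, documents))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context doesn't contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
    return llm_complete(prompt)
```

The instruction to admit when the context doesn't contain the answer is doing a lot of the work here: it gives the model an explicit alternative to making something up.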

The two types of AI hallucinations explained

It's tempting to think of hallucinations as just incorrect "facts," but researchers have identified two main categories of hallucination problems: factuality and faithfulness.

Factuality issues are, as the name suggests, all about getting the facts wrong, and this can happen in a couple of ways. The first is factual inconsistency, where the LLM provides information that contradicts verifiable facts. For example, the LLM might say that the first person to land on the moon was Yuri Gagarin when it was Neil Armstrong. The second is factual fabrication, where the LLM generates a plausible-sounding output that isn't grounded in facts at all. If asked to summarize a fictional scientific article, a made-up historical event, or even something impossible like the origin of unicorns, it will happily produce plausible-sounding fictions. A real-world example occurred in 2023, when a lawyer used ChatGPT for research and inadvertently cited non-existent legal cases in court documents.

A faithfulness hallucination is where the LLM doesn't carry out your instruction correctly. The most obvious type of faithfulness hallucination is instruction inconsistency. This is when the LLM's response ignores your prompt completely. For example, the prompt "Translate the following English question into Spanish: 'What is the capital of France?'" might cause the LLM to answer "The capital of France is Paris." rather than doing the translation.

It might seem like faithfulness hallucinations are obvious to detect, but they can introduce subtle errors, too. For instance, the LLM might follow an instruction, but disregard the information you provide and add its own, resulting in an error in the response. These context inconsistency hallucinations are especially dangerous when doing summarization—it might add details that are in its knowledge bank and not in the actual document you're trying to summarize. Not great for citing your sources.

The last type of faithfulness hallucination crops up when the LLM is trying to solve logical problems: the logical inconsistency hallucination. These happen when the LLM is reasoning but makes an error at one step in the chain, then carries that error through, resulting in an incorrect response.

Sometimes the LLM can even give you a mix of factual and faithfulness hallucinations. For example, a response to a prompt for "books with really long words in their titles" might include real titles that contain only moderately long words, completely made-up books, and books with long titles rather than long words.

Why do AI hallucinations happen? The technical explanation

Now that you know what kinds of hallucinations can happen, you can find out how they relate to the training and architecture of LLMs—their so-called capability acquisition process.

LLMs typically go through three stages of training, each of which can introduce quirks that lead to hallucinations.

The first stage is pretraining, where the model learns to predict the next token (word or part of a word) in a sequence. This depends on data, and lots of it. Flaws in the depth or breadth of the dataset can mean the model's ability to address certain topics is limited or unstable, leading to plausible fictions. You've probably bumped up against these flaws already: every model has a training cutoff, meaning its knowledge is limited to what existed before a certain date, which can cause it to misreport recent developments.

Data quality issues and biases can cause incorrect information to be encoded in the LLM at this stage too. For instance, the idea that Thomas Edison invented the light bulb is incorrect, but commonly believed (and repeated). Older LLMs might repeat this factual error when prompted.

Plus, we don't actually know exactly how LLMs store factual knowledge. Studies suggest that LLMs don't understand factuality at all—instead, they rely on the structure of the training data. That means, unfortunately, that they might not even use the knowledge they do have effectively when answering prompts.

Another issue comes from how LLMs generate responses. Because AI processes and generates text token by token, in sequence, LLMs don't "understand" the content as a whole. Instead, they predict the next token based on patterns seen during training, generating strictly forward and never going back to revise what they've already written. That can lead to errors, especially in tasks with specific constraints, like requirements on words or letters, or analysis based on them. If you've noticed how hard it is for LLMs to stick to a word count, that's why.
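
To make that forward-only loop concrete, here's a toy sketch. The lookup-table "model" is nothing like a real LLM, which computes next-token probabilities with a neural network over the full preceding context, but the loop is the point: each new token is sampled given what came before, randomness is baked in, and nothing already generated is ever revised.

```python
import random

# Toy illustration of forward-only, token-by-token generation. The lookup
# table below stands in for a real model; the loop structure is the point.

TOY_MODEL = {
    "the": [("cat", 0.6), ("moon", 0.4)],
    "cat": [("sat", 0.7), ("slept", 0.3)],
    "moon": [("landing", 0.8), ("rose", 0.2)],
    "sat": [("<eos>", 1.0)],
    "slept": [("<eos>", 1.0)],
    "landing": [("<eos>", 1.0)],
    "rose": [("<eos>", 1.0)],
}

def generate(prompt: list[str], max_new_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        candidates = TOY_MODEL.get(tokens[-1], [("<eos>", 1.0)])
        words, weights = zip(*candidates)
        # Sampling adds randomness: different runs can produce different text.
        next_token = random.choices(words, weights=weights, k=1)[0]
        if next_token == "<eos>":
            break
        tokens.append(next_token)  # appended and never reconsidered
    return tokens

print(" ".join(generate(["the"])))  # e.g. "the cat sat" or "the moon rose"
```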

Pretraining optimizes the model for text completion, where it continues generating text from an initial seed. But LLMs are now more often used in input-response scenarios, where they're asked to complete tasks or answer questions directly. This is where the last two stages, Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), come into play. These stages help the AI adapt to a back-and-forth response format and generate human-preferred responses.

But SFT doesn't cover every possible prompt, especially poorly formed ones. That boosts the likelihood of hallucinations with unusual inputs. RLHF can cause models to develop sycophancy—a tendency to generate responses that please the user rather than those that are truthful. This is because both humans and preference models often like agreeable answers more than accurate ones.

Each of these stages introduces different kinds of hallucinations, and now that you understand the training process, you'll get better at spotting them when they happen.

How to identify and prevent AI hallucinations

You might be wondering, "Can't we just create a hallucination detector?" If only it were that simple! Unfortunately, there's no foolproof way to automatically catch AI hallucinations. The most reliable method is still good old-fashioned fact-checking against trustworthy external sources—including that supercomputer between your ears. This challenge has significant implications for AI trust and governance, as organizations must implement robust verification systems when deploying AI tools.

I know, I know—AI is supposed to make things faster and easier, and fact-checking everything you get from an LLM isn't exactly that. But don't lose hope! While we can't eliminate hallucinations entirely (experts predict they'll remain a challenge through at least 2025), we can get better at spotting them. Two key strategies are: 1) recognizing situations where an LLM is likely to hallucinate, and 2) using specific prompting techniques to reduce them.

Here are my top tips for staying vigilant against hallucinations:

Understand AI training data limitations

Remember, the LLM doesn't know everything, particularly when it comes to recent developments or niche topics, so when asking for information that's unusual or highly specific—like detailed technical data—be on high alert for hallucinations. If your query involves something that might not be well-represented in the AI's training set, or if it's about events after the model's training cutoff date, there's a higher chance of the AI fabricating information. LLMs can also reflect biases present in their training data, so be cautious of responses that seem to align too closely with common stereotypes or misconceptions.

Use clear prompting

Good prompting doesn't need fancy techniques, but it does need precision. Clear, well-formed prompts help minimize hallucinations. It's tempting to fire off a quick thought, but taking an extra moment to craft your prompt can save you from hallucination headaches later. Using data templates, limiting response length, and providing explicit constraints can all help reduce hallucinations. That said, if you are working in a hallucination-prone area, it might be worthwhile to brush up on advanced prompts for accuracy.
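
As an illustration of what a data template with explicit constraints can look like, here's a sketch. The template text and field names are made up for the example; what matters is that the prompt names the allowed source material, gives the model a safe fallback instead of guessing, and caps the length.

```python
# An illustrative "data template" prompt with explicit constraints.
# The template wording and field names are invented for this example.

PROMPT_TEMPLATE = """You are extracting facts from a customer review.

Rules:
- Use ONLY information stated in the review below.
- If a field isn't mentioned, write "not stated" rather than guessing.
- Keep the summary under {max_words} words.

Review:
{review}

Return exactly these fields:
Product:
Main complaint:
Summary:"""

def build_prompt(review: str, max_words: int = 60) -> str:
    """Fill the template so every request carries the same constraints."""
    return PROMPT_TEMPLATE.format(review=review, max_words=max_words)

print(build_prompt("The SoundPro X earbuds died after two days of light use."))
```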

Avoid requests requiring holistic thinking

Many hallucinations stem from the way AI processes and generates language. It operates on tokens, predicting the next one based on what came before. This forward-looking, token-by-token approach can lead to errors because the AI lacks a holistic understanding of the text. Be especially wary of responses that would normally require the AI to "look back" at what it's already generated—it can't do that, so these responses are more prone to hallucinations.

Identify plausible-sounding AI nonsense

I teach a university AI course and this is where my students often get tricked. The model's training to predict the next most likely token can sometimes result in text that sounds convincing but doesn't actually make sense upon closer inspection. So make sure you read everything the AI generates closely!

Question suspiciously perfect AI responses

Thanks to their RLHF training, LLMs can be real people-pleasers. If an answer seems suspiciously perfect or exactly what you wanted to hear, it might be the AI's "eager to please" behavior kicking in. These sycophantic responses prioritize making you happy over being accurate, so give them an extra dose of skepticism.

Verify AI-generated summaries

Summarization is one of the most helpful features of LLMs, so it's a drag that the only real way to check whether a summary is accurate is to read the (probably lengthy) document it comes from. Anytime you ask for a summary, be extra vigilant about double-checking the information against the original source.
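
One cheap, partial check you can automate: any number the summary mentions should also appear in the source document. A sketch of that idea is below; it won't catch every fabricated detail, but it flags one of the most common ones, invented figures.

```python
import re

# Cheap sanity check for summaries: every number the summary mentions should
# also appear somewhere in the source text.

def unsupported_numbers(summary: str, source: str) -> list[str]:
    """Return numbers that appear in the summary but not in the source."""
    summary_numbers = re.findall(r"\d[\d,.]*", summary)
    return [n for n in summary_numbers if n not in source]

source = "Revenue grew 12% year over year, reaching $4.2 million."
summary = "Revenue grew 15% to $4.2 million."
print(unsupported_numbers(summary, source))  # ['15']
```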

Cross-check key facts

Always verify crucial information provided by the AI. This is especially important for names, dates, statistics, and other specific data points. Unless explicitly provided in the prompt, very precise numbers or statistics are often hallucinated, as the pretraining process doesn't focus on memorizing exact figures. And, to avoid the same issue as our lawyer friend above, if the AI mentions a specific source, take the time to look it up to ensure it exists.
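
If the model does hand you source links, a quick programmatic pass can at least weed out links that don't resolve at all. It can't confirm that a real page actually says what the model claims (that part is still on you), but it catches fully invented URLs. A rough sketch using only the standard library; the example URLs are made up:

```python
import urllib.request

# Rough check that cited URLs at least resolve. A reachable page still needs
# a human to confirm it supports the claim; this only filters out links that
# don't exist at all.

def url_exists(url: str, timeout: float = 5.0) -> bool:
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (OSError, ValueError):  # URLError, HTTPError, timeouts, bad URLs
        return False

cited_sources = [  # made-up examples of model-cited links
    "https://example.com/case-law/smith-v-jones",
    "https://definitely-not-a-real-journal.example/paper123",
]
for url in cited_sources:
    print(url, "->", "reachable" if url_exists(url) else "could not verify")
```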

Verify each step in AI reasoning processes

The pretraining process doesn't explicitly teach logical reasoning. When an LLM performs multi-step reasoning, each step introduces a chance for error. In these cases, you might benefit from using another tool—like asking the LLM to write code rather than answer in text. Make sure to verify each step independently.
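
One way to put the "ask for code rather than text" suggestion into practice: for anything numerical, have the model return a small function you can actually run, then test it on a step you can verify by hand before trusting the longer chain. A sketch (the function below is my illustration, not actual model output):

```python
# Rather than trusting a multi-step arithmetic answer written in prose, ask the
# model for a runnable function and spot-check it on a case you already know.
# (This function is an illustrative stand-in for model-generated code.)

def compound_interest(principal: float, rate: float, years: int) -> float:
    """Final balance with annual compounding."""
    return principal * (1 + rate) ** years

# Step you can verify by hand: $1,000 at 5% for one year is $1,050.
assert round(compound_interest(1000, 0.05, 1), 2) == 1050.00

# Once the easy case checks out, the longer chain is easier to trust:
print(round(compound_interest(1000, 0.05, 10), 2))  # 1628.89
```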

The future of AI hallucinations

Hallucinations are, unfortunately, one of the biggest issues with working with AI. Although we're seeing good progress in minimizing them through techniques like retrieval-augmented generation (RAG) and improved training methods, if you work with AI tools a lot, you're inevitably going to run into some. If hallucinations are still causing you more grief than you'd like, check out my tips on how to reduce hallucinations.

FAQs

What can cause AI hallucinations?

AI hallucinations often arise from limitations in training data or the model’s architecture, which can lead to fabricated outputs [Analysis of hallucinations]. They can also occur when an AI attempts to please the user with agreeable answers instead of focusing on factual accuracy [Human feedback study]. Additionally, poor prompt engineering or queries about niche topics can increase the likelihood of fabricated information. Strict data governance and advanced techniques like retrieval-augmented generation help mitigate these issues. Ongoing research aims to reduce hallucination rates through continual model refinement and more robust training methodologies.

How can I mitigate AI hallucinations for crucial tasks like healthcare or legal advice?

In high-stakes fields, best practices include implementing strict data governance to ensure reliable input sources [Data governance benefits]. Techniques like retrieval-augmented generation enhance factual grounding by cross-checking external databases. Conducting thorough fact-checking and using specialized models or domain experts in the loop further reduces errors. Regular audits and continuous monitoring also help capture emergent issues early. By adopting these strategies, professionals can significantly lower the risk of harmful AI hallucinations.

Briana Brownell
Briana Brownell is a Canadian data scientist and multidisciplinary creator who writes about the intersection of technology and creativity.