Is the AI Telling You the Truth? Discerning the Perils of the Generative Age

The world, it seems, has indeed gone a little wild.

A decade ago, the idea that a machine could conjure up text so sophisticated it could fool a professional into submitting fabricated legal citations to a high court, or convince a reader a nonexistent book was the perfect summer read, would have been pure science fiction.

Today, this is simply the Tuesday morning news cycle. Large Language Models (LLMs) like ChatGPT, Gemini, and the AI summaries popping up on our favourite search engines are incredible tools, delivering information and solving problems at lightning speed.

But with this incredible power comes a critical, almost existential question that we, as savvy tech users and information consumers, must ask: Do you trust the output from large language models?

The increasing frequency of AI “hallucinations” (those confident, plausible-sounding falsehoods) is a profound technological challenge and a serious test of our collective information literacy. The examples are piling up, and they are stark.


The Hallucination Effect: Case Files of Fictional Facts

An LLM’s primary function is not to be a repository of truth but to be a plausibility engine. It generates text by statistically predicting the next most likely word in a sequence based on the vast data it was trained on. It doesn’t “know” a fact; it merely calculates the most convincing way to present a response. When it doesn’t have the correct answer, it often doesn’t hesitate—it improvises, and the result is a sophisticated lie.
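
To make the “plausibility engine” idea concrete, here is a deliberately toy sketch in Python. The bigram table is invented for illustration; a production LLM learns billions of weights rather than a hand-written dictionary, but the core loop is the same in spirit: pick the statistically likeliest continuation, with no truth check anywhere.

```python
# Toy "plausibility engine". The bigram probabilities below are invented
# for illustration; a real LLM learns its statistics from vast training data.
NEXT_WORD_PROBS = {
    "the":  {"case": 0.5, "court": 0.3, "book": 0.2},
    "case": {"of": 0.6, "law": 0.4},
    "of":   {"Mata": 0.7, "the": 0.3},
    "Mata": {"v.": 1.0},
    "v.":   {"Avianca": 0.6, "Holdings": 0.4},  # both "sound" right; the model cannot tell
}

def generate(start: str, max_words: int = 8) -> str:
    """Greedily extend the text with the likeliest next word.
    Notice that no step ever asks whether the output is TRUE."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # -> "the case of Mata v. Avianca"
```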

The real-world consequences are moving beyond novelty and into professional and public life with alarming speed.

  1. The Phantom Book List

Imagine the scene: an enthusiastic patron walks into a library asking for a hot new novel on a local summer reading list. The librarian searches, but the book is nowhere to be found. Why? Because the list, which appeared in an advertorial supplement to the Chicago Sun-Times, contained titles and summaries dreamed up by artificial intelligence [1]. The list, created by a freelancer who admitted to using AI and failing to fact-check the output, included at least ten non-existent books attributed to real authors like Isabel Allende and Andy Weir [2].

Librarians across the US have since reported fielding requests from patrons for these and other AI-hallucinated books. These professionals now find themselves having to act as fact-checkers against AI’s fictional canon, sometimes encountering patrons who are defensive or skeptical when told their AI-recommended title is not real.

  2. The Courtroom Conundrum

Perhaps the most high-stakes examples are occurring in the legal world. Courts in both the US and the UK have been forced to intervene after lawyers submitted briefs citing nonexistent case law fabricated by generative AI [3].

The most widely reported case is Mata v. Avianca, Inc., where a US lawyer submitted a court filing that included at least six fabricated case citations generated by ChatGPT [4]. The lawyer and his firm were subsequently sanctioned with a $5,000 fine by a federal judge for citing the bogus cases and then attempting to cover up their use of AI [5]. Similarly, in the UK, a High Court judge warned lawyers about the misuse of AI after multiple cases were blighted by fake case-law citations, including one case where 18 out of 45 citations were fabricated [6].

  3. The Invisible Button Dilemma

On a more relatable, everyday level, many of us who use AI for troubleshooting have encountered a subtler form of hallucination. You ask the AI to help you fix a software problem, and it provides a step-by-step guide: “Click the ‘Advanced Settings’ tab, then click the ‘Optimize Performance’ button.” You follow the instructions religiously, but that button? It’s not there. This is a clear-cut case of an AI generating a plausible-sounding instruction based on general knowledge or out-of-date information, but the specific detail is a fiction. It didn’t lie out of malice, but out of a statistical imperative to complete the sentence with the most probable-sounding word combination.


The Illusion of Authority: Why We’re Prone to Trust

Why are so many people, particularly younger generations, quick to trust these outputs, even over human expertise?

The first reason is the Fluency Bias. LLMs are masters of language. Their outputs are grammatically perfect, logically structured, and delivered with unwavering confidence. This high-quality presentation tricks our brains into equating polished prose with factual accuracy. It sounds like an expert wrote it, so we believe it.

Secondly, there is the All-Knowing Oracle Effect. AI has been positioned as an all-powerful digital oracle. The sheer volume of training data suggests it holds all the world’s knowledge. This perception leads to a diminished sense of critical evaluation, especially when a person is looking for a quick, definitive answer. The AI is fast, comprehensive, and doesn’t make you feel stupid for asking.

Finally, the Blurring of Information Boundaries is a serious factor. When a search engine integrates an AI-generated summary right at the top of the results page, the lines between verified, citable source material and a synthesized, statistically probable answer become dangerously blurred. We see the summary as an ultimate answer, not a generated hypothesis.


A Professional’s Guide to Responsible AI Use

We can’t—and shouldn’t—stop using these powerful tools. They are transforming productivity. But we must become more skeptical, more rigorous, and more information-literate than ever before. Our professional reputation, and frankly, the coherence of public knowledge, depends on it.

  1. Maintain the “Verify Everything” Protocol

You are the final, essential link in the quality-control chain. For facts (whether the AI is giving you a definition of encryption, a historical date, or a scientific claim), always cross-reference the information with two or three independent, verified sources: academic journals, official government or organizational websites, or established news publications. Most critically, never include an AI-generated citation (legal, academic, or otherwise) without manually locating and reading the original source document. As the UK High Court ruling warned, AI’s responses “may make confident assertions that are simply untrue. They may cite sources that do not exist.” [7]
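
You can even automate the cheapest slice of this protocol. The Python sketch below (standard library only; the URLs are hypothetical examples) checks whether each AI-supplied link resolves at all. A link that resolves can still be misquoted, so treat this as a pre-filter, never as a substitute for reading the source.

```python
import urllib.request
from urllib.error import HTTPError, URLError

def first_pass_check(urls: list[str]) -> None:
    """Cheapest sanity check on AI-supplied citations: does the link resolve?
    A page that exists can still be misrepresented, so always read it too."""
    for url in urls:
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-checker/0.1"}
        )
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                print(f"{resp.status}  {url}")
        except (HTTPError, URLError) as err:
            print(f"FAILED ({err})  {url}  <- treat as fabricated until proven real")

# Hypothetical AI-provided citations, for illustration:
first_pass_check([
    "https://en.wikipedia.org/wiki/Mata_v._Avianca,_Inc.",
    "https://example.com/this-citation-does-not-exist",
])
```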

  2. Master the Prompt, Demand the Source

Your prompts should be designed to push back against hallucination. Be a demanding boss, not a passive listener. Insist on grounding by adding instructions like “Only use information from sources verifiable by a link” or “If you cannot find a verified source, state that the information is hypothetical.” Furthermore, ask for the Chain of Reasoning, instructing the AI to “show me the logical steps you took to arrive at this answer” to make its internal ‘thinking’ transparent and easier to scrutinize for logical leaps or fabricated data points.
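
If you find yourself retyping those guardrails, wrap them once. Here is a minimal, library-free Python sketch of such a prompt wrapper; the wording of the rules is just this article’s suggestion, and no instruction eliminates hallucination outright, so the verification protocol above still applies.

```python
GROUNDING_RULES = (
    "Only use information from sources verifiable by a link. "
    "If you cannot find a verified source, state that the information is hypothetical. "
    "Show me the logical steps you took to arrive at this answer."
)

def grounded_prompt(question: str) -> str:
    """Prepend the anti-hallucination instructions to any question before
    pasting it into (or sending it to) your chat model of choice."""
    return f"{GROUNDING_RULES}\n\nQuestion: {question}"

print(grounded_prompt("Summarize the sanctions in Mata v. Avianca, Inc."))
```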

  3. Contextualize Troubleshooting

When using AI to debug or troubleshoot (e.g., finding that missing button), be overly specific about your environment. Instead of a vague query like “How do I change my settings?”, ask “How do I change my settings in [Software Name] version 7.5 on a Mac running Sonoma 14.2?” The more context you provide, the less room the model has to fill in the blanks with hallucinations.
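
Python’s standard platform module can gather those environment details automatically so they are never forgotten. In the sketch below, the software name and version (“ExampleApp 7.5”) are hypothetical placeholders you would supply yourself.

```python
import platform

def troubleshooting_prompt(software: str, version: str, problem: str) -> str:
    """Bundle machine-specific context with the question so the model has
    less blank space to fill with plausible fiction. The OS details are
    read from the machine; software name and version come from you."""
    context = (
        f"Environment: {software} {version} on "
        f"{platform.system()} {platform.release()} ({platform.machine()})"
    )
    return f"{context}\n\nProblem: {problem}"

# "ExampleApp 7.5" is a hypothetical placeholder, not a real product:
print(troubleshooting_prompt(
    "ExampleApp", "7.5",
    "The 'Optimize Performance' button described in the docs is missing. Where is it?",
))
```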


The Verdict: Trust, But Verify—Vigorously

LLMs are extraordinary calculators of language probability, but they are not truth-tellers. They don’t have intent, they don’t have a conscience, and they don’t fear a fine from a high court. They will confidently and coherently lie to you if it makes for a more plausible-sounding sentence.

The lesson of the hallucinated books and the fabricated case-law is that the burden of truth falls back on the human user. We must embrace an attitude of vigilant skepticism toward every AI summary, every troubleshooting step, and every “fact.” This new technology doesn’t diminish the need for our human skills—it amplifies the importance of critical thinking, expert knowledge, and rigorous verification.

So, do I trust the output from LLMs? I trust their ability to generate incredibly useful, well-written text. But I don’t trust their factual claims until I’ve verified them myself. That skepticism isn’t a limitation; it’s our ultimate professional safeguard in the age of generative AI.


References

  1. Chicago newspaper prints a summer reading list. The problem? The books don’t exist – CBC News
  2. Chicago Sun-Times confirms AI was used to create reading list of books that don’t exist – The Guardian
  3. High court tells UK lawyers to stop misuse of AI after fake case-law citations – The Guardian
  4. Fake Cases, Real Consequences: Misuse of ChatGPT Leads to Sanctions – Christopher F. Lyon
  5. Mata v. Avianca, Inc. – Wikipedia
  6. High Court warns lawyers over AI misuse after fake case-law citations
  7. High court tells UK lawyers to stop misuse of AI after fake case-law citations – The Guardian


ABOUT THE AUTHOR

Austin Zhao, FRSA – Founder & CEO of NorTech Innovations & Solutions

Meet Austin Zhao, the mind behind NorTech Innovations & Solutions and your guide to mastering the digital world. As Founder and CEO, Austin is on a mission to cut through the tech jargon and deliver practical, impactful insights. Drawing on his academic foundation in Communication & Media Studies from York University (Dean’s Honour Roll), he explores the most pressing tech topics in his weekly blogs – from decoding the mysteries of AI and quantum computing to equipping you with strategies for ironclad cybersecurity and a calmer digital existence. Beyond the tech, Austin is an accomplished visual artist and photographer, recognized with a Fellowship of the Royal Society of Arts (FRSA), a testament to the creative problem-solving he brings to every technological challenge.

