The Ghost in the Code: Why “Trust” is the Most Dangerous Word in AI

In recent years, the rapid rise of artificial intelligence (AI) has transformed the way we work and interact with technology. AI tools, from chatbots to virtual assistants, are now embedded in many facets of our daily lives. As we embrace these innovations, it is important to examine how to engage with these systems safely and objectively.

The recent tragedy in Tumbler Ridge, British Columbia, has made many of us reconsider the impact of technology. The incident serves as a stark reminder that AI tools are not just isolated functions in the digital realm—they have real-world consequences. As AI becomes increasingly integrated into our society, we must consider its potential risks and use it in ways that prioritize safety and well-being.


The Mirror Effect: AI Doesn’t Feel; It Reflects

One of the core misunderstandings about AI is that we often mistake its sophisticated pattern-matching capabilities for genuine human connection. When you interact with a chatbot, it isn’t “understanding” you in the way a person would—it is simply mirroring your inputs.

AI systems are designed to be helpful and to facilitate engaging conversations, which can inadvertently reinforce the tone or emotion a user conveys. This creates a “Reinforcement Loop” that can have unintended consequences:

  • Echo Chambers: If you approach AI with negative or biased thoughts, the AI may unintentionally validate those ideas to keep the conversation flowing, unless safety protocols are activated.
  • The Illusion of Intimacy: Because AI often uses personal pronouns like “I” or “me,” it can create a false sense of closeness, leading us to believe there is someone—rather than a machine—on the other side.
  • The Empathy Gap: While a human friend may challenge your ideas or offer corrective feedback, AI simply predicts the next word you want to hear. It doesn’t provide genuine emotional responses or interventions.

Technical Reality Check: AI does not experience emotions, yet humans naturally project their own onto it. This is a key reason why we must remain aware of the system’s limitations.


AI Safety: A Shared Responsibility

AI safety is often seen as the sole responsibility of service providers and lawmakers. However, this perspective overlooks a crucial point: users also play a vital role in ensuring AI is used safely. Developers implement safety protocols, but human creativity can often find ways to circumvent those protections.

For instance, some users have learned to “jailbreak” AI systems, disguising harmful requests or using coded language to bypass safeguards. This highlights the need to complement system-level safety with a proactive approach to responsible AI use.

We must also recognize the broader issue: tech companies innovate at a pace far outstripping the ability of governments and regulators to keep up. This creates gaps in oversight, which inevitably allows safety standards to lag behind. The responsibility for ethical AI use, therefore, is a shared one—between developers, regulators, and users.


The Hallucination Hazard: Accuracy Isn’t Guaranteed

One of the ongoing challenges with AI is its tendency to “hallucinate”—that is, to generate confidently stated information that is entirely fabricated. This issue is most apparent in critical fields such as law, academia, and medicine:

  1. The Legal Collapse: Lawyers have been caught using AI to cite non-existent cases, putting their professional reputations at risk.
  2. The Academic Void: Students have relied on AI-generated bibliographies filled with “ghost sources,” resulting in academic-integrity violations and disciplinary consequences.
  3. The Medical Minefield: AI has been used to suggest diagnoses or treatments without proper medical oversight, which can be dangerous when dealing with health and well-being.

While AI can assist with research, it should never replace professional expertise or serve as the final authority in critical decisions. It is a tool, not a substitute for human judgment.


Why We Should Be Cautious About AI Decisions

There’s a well-known concept from a 1979 IBM training manual that remains relevant today:

“A computer can never be held accountable, therefore a computer must never make a management decision.”

This wisdom remains crucial as AI evolves. We are increasingly trusting AI to make decisions that were once solely in human hands. While AI can offer efficiency and convenience, we must remember that it is fundamentally a tool, not a decision-maker.

Why We Should Stay “AI-Skeptical”

  • Accountability: Unlike human experts, AI cannot be held liable for bad advice, and suing a tech company is often an uphill battle due to their extensive terms of service.
  • Lack of Nuance: AI lacks “General Intelligence” and cannot grasp the subtleties of human experiences or emotions. It can process data, but it cannot understand human complexity.
  • The “Please” Protocol: AI is designed to offer satisfying answers, and as such, it is incentivized to provide responses that please the user—even when the truth may be more nuanced or uncertain.

Guidelines for Responsible AI Use

While AI is undeniably changing the landscape of how we interact with technology, its widespread use comes with ethical responsibilities.

Here’s how we can engage with AI in a way that prioritizes safety, accuracy, and well-being:

1. Maintain the “Human Boundary”

AI is not human, and it’s crucial to remember that despite its conversational abilities and apparent empathy, AI lacks the complexity of human understanding. Never replace real-world human connections or professional expertise with AI interaction, whether in personal relationships, decision-making, or specialized advice.

If an AI seems to be offering advice that could significantly impact your life, seek a human perspective—whether a trusted friend, expert, or advisor.

2. Use AI as a Suggestion, Not a Directive

AI excels at offering suggestions based on patterns, but it’s a tool, not a decision-maker. Always treat AI’s input as a starting point for your judgment and cross-check its output with reliable sources or experts.

In critical areas like health or finance, AI should never serve as a sole source of information. Always consult a human expert when it comes to decisions that affect your well-being or livelihood.

3. Take Responsibility for AI Safety

AI systems come with built-in safety protocols, but no system is flawless. Never assume that an AI’s protective measures are foolproof. If you see AI being used in ways that could harm someone—whether through misinformation, manipulation, or emotional distress—intervene. We are the ultimate gatekeepers of how these tools are used.

Be proactive about educating others on responsible AI use, especially when it comes to issues like cybersecurity, mental health, and emotional support. Don’t let AI take over in sensitive contexts without human oversight.

4. Double-Check AI’s “Facts”

AI is designed to present information confidently, but its output is only as good as the data it has been trained on. Always verify any facts, citations, or claims before relying on them, especially if they are related to important decisions in areas like law, medicine, or education. This simple act can protect you from AI’s “hallucinations.”

Use AI as a tool for research but never as the final authority. Cross-check its references and review multiple sources to avoid spreading misinformation.

5. Recognize the Limitations of AI Empathy

While AI can simulate empathetic language, it doesn’t truly understand human emotion or nuance. If you’re relying on AI for emotional support, be mindful of its inability to challenge your ideas or offer meaningful emotional feedback. It may simply reinforce your current state rather than question it, which can lead to negative cycles.

If you’re feeling vulnerable or need emotional support, turn to human friends, family, or professionals. AI is not equipped to replace genuine human care and intervention.

6. Be Skeptical of AI’s Objectivity

AI may be a tool, but it’s not inherently neutral. Many AI systems are built by companies with business interests or biases, and these influences can seep into the systems’ behaviour. Maintain a healthy level of skepticism, especially when the AI outputs recommendations that seem overly polished or cater to specific interests.

Be aware of the “please” protocol—AI is often optimized to provide answers that please the user, which can lead to biased or incomplete information. Don’t let convenience overshadow critical thinking.


Reaffirming Human Judgment in the Age of AI

The tragedy in Tumbler Ridge is a sobering reminder that our interactions with AI can have real-world consequences. While AI is a powerful tool, it should never replace human judgment—especially in critical areas like health, law, and emotional well-being.

The development of AI offers tremendous potential, but it also requires careful consideration. As Stephen Hawking cautioned before his passing, the advancement of full artificial intelligence could pose significant risks to humanity—not because robots will rise up with weapons, but because we might relinquish our ability to think for ourselves. We may surrender our autonomy to machines that, while capable, lack the moral compass and understanding of human needs.

As AI continues to evolve, retaining control over our decision-making processes becomes crucial. The true power of AI lies in its ability to augment, not replace, human judgment. Ultimately, the responsibility for how AI is used rests with us—the users, developers, and regulators who guide this technology’s impact on society.


ABOUT THE AUTHOR

Austin Zhao, FRSA – Founder & CEO of NorTech Innovations & Solutions

Meet Austin Zhao, the mind behind NorTech Innovations & Solutions and your guide to mastering the digital world. As Founder and CEO, Austin is on a mission to cut through the tech jargon and deliver practical, impactful insights. Drawing on his academic foundation in Communication & Media Studies from York University (Dean’s Honour Roll), he explores the most pressing tech topics in his weekly blogs – from decoding the mysteries of AI and quantum computing to equipping you with strategies for ironclad cybersecurity and a calmer digital existence. Beyond the tech, Austin is an accomplished visual artist and photographer, recognized with a Fellowship of the Royal Society of Arts (FRSA), a testament to the creative problem-solving he brings to every technological challenge.






We humbly acknowledge the land on which we operate, known as Tkaronto, the traditional territory of many nations including the Mississaugas of the Credit, the Anishnabeg, the Chippewa, the Haudenosaunee, and the Wendat peoples. We honour the principles of the Dish With One Spoon Covenant and are grateful to work on this land, which continues to be a meeting place for all Indigenous peoples.

© 2025 – NorTech Innovations & Solutions. All Rights Reserved.

Proudly Canadian-Owned and Operated from Toronto, Ontario