AI Isn’t Always Right: Why You Must Be the Last to Verify
Prof. Moustapha Diack: A critical message for faculty, administrators, and staff!
Introduction
As AI tools become more deeply embedded in higher education, from course design and grading to administrative planning and student services, we must never lose sight of one essential truth:
AI systems can make mistakes. And often, they do—confidently.
In our push to integrate AI into education, administration, and daily life, we must also advocate for responsible use. The video below is a powerful illustration of how artificial intelligence, despite its impressive capabilities, can **mislead**, **hallucinate**, and produce **false content**, often with high confidence.
In the context of artificial intelligence, particularly generative AI like ChatGPT, hallucination refers to:
The generation of false, misleading, or fabricated content that appears plausible or authoritative, but is not based on real data or facts.
🧠 Examples of AI Hallucination:
Inventing sources or citations that do not exist.
Misquoting facts or events, even when prompted correctly.
Producing convincing but incorrect summaries of academic papers or news articles.
Creating fictional characters, locations, or scenarios when asked for factual information.
⚠️ Why It Matters
Hallucinations can be dangerous in:
Education – misleading students or researchers.
Healthcare – generating inaccurate medical information.
Business/legal settings – offering fabricated policies, contracts, or regulations.
✅ How to Mitigate It
Verify AI outputs with trusted human sources (a short verification sketch follows this list).
Use citation-based tools (e.g., Perplexity.ai, Scite.ai, Consensus).
Avoid blind trust in long, confident responses from AI.
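For readers who want a concrete feel for verification, here is a minimal sketch, assuming Python 3.9+ and the public CrossRef API (api.crossref.org); the helper name `check_citation` and the sample title are hypothetical, for illustration only. The idea: take a reference an AI tool produced and see whether anything close actually exists in the scholarly record.

```python
# A minimal sketch, for illustration only: checking whether an
# AI-generated reference exists in the CrossRef index.
# The helper name (check_citation) and the sample title are hypothetical.
import json
import urllib.parse
import urllib.request


def check_citation(title: str) -> list[str]:
    """Ask the public CrossRef API for works whose titles best match `title`."""
    url = (
        "https://api.crossref.org/works?rows=3&query.bibliographic="
        + urllib.parse.quote(title)
    )
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Return the titles CrossRef ranks as the closest matches.
    return [item["title"][0] for item in data["message"]["items"] if item.get("title")]


# Example: a title an AI assistant claims to have cited.
claimed = "Deep Learning Approaches to Student Retention in Higher Education"
matches = check_citation(claimed)
if claimed.lower() not in (m.lower() for m in matches):
    print("No exact match in CrossRef; verify this citation by hand.")
for m in matches:
    print(" -", m)
```

If the query returns nothing close to the claimed title, that is a strong signal the citation was hallucinated and needs human review.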
Use Case: Google AI Studio
Before turning to the case study featured in this publication, we first showcase Google AI Studio, a powerful and user-friendly platform that exemplifies the next generation of AI interfaces.
The screenshot below illustrates the clean, intuitive environment of Google AI Studio, highlighting its advanced features such as:
Gemini 2.5 Pro integration
Real-time audio-to-audio dialog
Native speech and image generation
URL context tools for grounded responses
Adjustable “thinking mode” and code execution
These functionalities make Google AI Studio one of the most accessible and powerful tools for educators, researchers, and administrators exploring AI integration in teaching, learning, and decision-making.
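For those who want to move from the AI Studio interface to their own scripts, the sketch below shows what a single Gemini call could look like, assuming the google-genai Python SDK (`pip install google-genai`) and an API key from AI Studio; the model identifier is an example and may differ from what your account exposes.

```python
# A minimal sketch using the google-genai SDK (pip install google-genai).
# Assumes GEMINI_API_KEY is set in your environment; "gemini-2.5-pro" is
# an example model name and may change over time.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="List three ways AI hallucinations can mislead students.",
)
print(response.text)  # The model's reply: still verify it yourself!
```

The same caveat from the previous section applies here: the output is fluent, not necessarily true.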
The Case: AI Isn’t Always Right
In the video below, you’ll see a striking example of how even the most advanced AI models can hallucinate information—fabricating facts, misinterpreting data, or producing output that seems correct but is completely false.
🔁 Watch carefully, and keep in mind: **humans must remain the final checkpoint.**
Takeaways
Key Messages:
✅ AI can assist, but it’s not always correct.
⚠️ AI sometimes hallucinates — inventing citations, facts, and logic that don’t exist.
👁️🗨️ You must verify everything it outputs.
🧑💼 Educators, administrators, and students must be trained to critically evaluate AI-generated content.
AI Readiness & Adoption Poll
As we continue to explore the benefits and risks of AI in higher education, we’d love to hear from you. Please respond to the quick poll below and let us know where you stand.
Professor Moustapha Diack
Professor, SMED Doctoral Program – Southern University, Baton Rouge
Executive Director – Digital Education Consulting (DEC)
He leads global initiatives in AI for education, focusing on AI literacy, immersive learning, and justice-aligned innovation.
🔗 Learn more: diackresearch.net/interests