Lunchtime BABLing with Dr. Shea Brown

BABL AI, Jeffery Recker, Shea Brown

Available Episodes

Showing 5 of 61 episodes
  • The Importance of AI Governance
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with BABL AI Chief Sales Officer Bryan Ilg to explore why AI governance is becoming critical for businesses of all sizes. Bryan shares insights from a recent speech he gave to a nonprofit in Richmond, Virginia, highlighting the real business value of strong AI governance practices, not just for ethical reasons but as a competitive advantage. They dive into key topics like the importance of early planning (with a great rocket ship analogy!), how AI governance ties into business success, practical steps organizations can take to get started, and why AI governance is not just about risk mitigation but about driving real business outcomes. Shea and Bryan also discuss trends in AI governance roles, challenges organizations face, and BABL AI's new Foundations of AI Governance for Business Professionals certification program designed to equip non-technical leaders with essential AI governance skills. If you're interested in responsible AI, business strategy, or understanding how to make AI work for your organization, this episode is packed with actionable insights!
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    --------  
    40:39
  • Ensuring LLM Safety
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)? With new regulations like the EU AI Act, Colorado's AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything. 🎯 What you'll learn:
      - Why evaluations are essential for mitigating risk and supporting compliance
      - How to adopt a socio-technical mindset and think in terms of parameter spaces
      - What auditors (like BABL AI) look for when assessing LLM-powered systems
      - A practical, first-principles approach to building and documenting LLM test suites
      - How to connect risk assessments to specific LLM behaviors and evaluations
      - The importance of contextualizing evaluations to your use case, not just relying on generic benchmarks
    Shea also introduces BABL AI's CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage. Whether you're an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now. 📌 Don't wait for a perfect standard to tell you what to do: learn how to build a solid, use-case-driven evaluation strategy today.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    --------  
    27:58
  • Explainability of AI
    What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do, and should the average person even care? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability, and why it matters more than you might think. From recommender systems to large language models, we explore: 🔍
      - The difference between explainability and interpretability
      - Why even humans struggle to explain their decisions
      - What should be considered a "good enough" explanation
      - The importance of stakeholder context in defining "useful" explanations
      - Why AI literacy and trust go hand-in-hand
      - How concepts from cybersecurity, like zero trust, could inform responsible AI oversight
    Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users. Mentioned in this episode:
      🔗 BABL AI's article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/
      🔗 "Putting Explainable AI to the Test" paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&utm_medium=referral&utm_campaign=ai-week-in-review-3-8-25
      🔗 BABL AI's "The Algorithm Audit" paper: https://babl.ai/algorithm-auditing-framework/
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    --------  
    34:19
  • AI’s Impact on Democracy
    In this thought-provoking episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Jeffery Recker and Bryan Ilg to unpack one of the most pressing topics of our time: AI's impact on democracy. From algorithm-driven echo chambers and misinformation to the role of social media in shaping political discourse, the trio explores how AI is quietly (and sometimes loudly) reshaping our democratic systems.
      - What happens when personalized content becomes political propaganda?
      - Is YouTube the new social media without us realizing it?
      - Can regulations keep up with AI's accelerating influence?
      - And are we already too far gone, or is there still time to rethink, regulate, and reclaim our democratic integrity?
    This episode dives into:
      - The unintended consequences of algorithmic curation
      - The collapse of objective reality in the digital age
      - AI-driven misinformation in elections
      - The tension between regulation and free speech
      - Global responses, from Finland's education system to the EU AI Act
      - What society can (and should) do to fight back
    Whether you're in tech, policy, or just trying to make sense of the chaos online, this is a conversation you won't want to miss. 🔗 Jeffery's free course, Intro to the EU AI Act, is available now! Get your Credly badge and learn how to start your compliance journey → https://babl.ai/introduction-to-the-eu-ai-act/
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    --------  
    45:51
  • AI Literacy
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to discuss the growing importance of AI literacy: what it means, why it matters, and how individuals and businesses can stay ahead in an AI-driven world. Topics covered:
      - The evolution of AI education and BABL AI's new subscription model for training & certifications
      - Why AI auditing skills are becoming essential for professionals across industries
      - How AI governance roles will shape the future of business leadership
      - The impact of AI on workforce transition and how individuals can future-proof their careers
      - The EU AI Act's new AI literacy requirements and what they mean for organizations
    Want to level up your AI knowledge? Check out BABL AI's courses & certifications! 🚀 Subscribe to our courses: https://courses.babl.ai/p/the-algorithmic-bias-lab-membership 👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    --------  
    20:55

About Lunchtime BABLing with Dr. Shea Brown

Presented by BABL AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems.