PodcastsBusinessLunchtime BABLing with Dr. Shea Brown

Lunchtime BABLing with Dr. Shea Brown

Babl AI, Jeffery Recker, Shea Brown
Lunchtime BABLing with Dr. Shea Brown
Latest episode

72 episodes

  • Lunchtime BABLing with Dr. Shea Brown

    Data Poisoning to Hallucinations: The Many Risks of AI Part 1

    09/03/2026 | 34 mins.
    Data Poisoning to Hallucinations: The Many Risks of AI | Part 1

    In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker for a fast-paced, unscripted deep dive into the real risks behind today’s AI systems.

    From data poisoning and model inversion to prompt injection, membership inference, and AI hallucinations, this lightning-round conversation breaks down the security, governance, and reliability challenges organizations must understand before deploying AI at scale.

    But this episode doesn’t stop at definitions.

    Shea and Jeffery also explore:

    - The difference between direct vs. indirect prompt injection
    - Whether AI hallucinations can ever truly be “solved”
    - Why AI isn’t a truth machine
    - Whether we’re using AI the wrong way
    - What responsible validation should look like in enterprise AI deployment

    As AI systems move from experimentation into real-world decision-making, understanding these risks isn’t optional — it’s foundational.

    If you're working in AI governance, assurance, compliance, risk, or deploying AI inside your organization, this conversation will help you think more critically about how these systems actually behave.

    🎯 Take the FREE assessment here: https://shea-1mb3pmep.scoreapp.com/ Check out the babl.ai website for more stuff on AI Governance and
    Responsible AI!
  • Lunchtime BABLing with Dr. Shea Brown

    AI Test, Evaluation, & Red Teaming Specialist Bootcamp

    23/02/2026 | 28 mins.
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown introduces the new AI Test, Evaluation, & Red Teaming Specialist Bootcamp—a hands-on, technical program designed to train the next generation of AI assurance professionals.

    Drawing directly from BABL AI’s internal methodologies used to audit and evaluate high-risk AI systems across industries, this bootcamp addresses one of the most critical gaps in the AI ecosystem: the lack of practical training in how to design, execute, and interpret rigorous AI testing and red teaming in real-world contexts.

    Dr. Brown explains:

    -Why AI testing, evaluation, and red teaming are essential for high-risk AI systems

    -How BABL AI developed its internal, risk-driven testing and assurance frameworks

    -The difference between auditing AI systems and directly evaluating and validating them

    -What participants will learn during the five-week, hands-on bootcamp

    -The prerequisites, structure, and technical depth of the program

    -How this bootcamp will evolve into BABL’s new AI Test, Evaluation, & Red Teaming Specialist Certification

    -This exclusive early adopter cohort is limited to approximately 30 participants and is designed for professionals with foundational knowledge in AI auditing, governance, or assurance who want to develop practical technical capabilities in AI evaluation and red teaming.

    -Participants will learn how to move systematically from an AI use case to defensible test results—building real test plans, executing evaluations, and developing assurance-relevant conclusions using BABL’s proven frameworks.

    Take the test to see if you are a good candidate for the AI Test, Evaluation, & Red Teaming Specialist Bootcamp: https://zfrmz.eu/RBroC4VLZ9I41ihKl1XV

    Learn more about BABL AI Certifications: www.babl.ai

    About Lunchtime BABLing:

    Lunchtime BABLing is hosted by Dr. Shea Brown, CEO of BABL AI, an independent AI assurance firm that audits algorithms for bias, risk, and governance. The podcast explores AI auditing, governance, regulation, and technical assurance practices shaping the future of trustworthy AI. Check out the babl.ai website for more stuff on AI Governance and
    Responsible AI!
  • Lunchtime BABLing with Dr. Shea Brown

    An Interview with Mert Çuhadaroğlu

    22/12/2025 | 34 mins.
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Mert Çuhadaroğlu, Program Manager of BABL AI’s AI & Algorithm Auditor Certification Program, for an in-depth conversation about careers in AI governance, responsible AI, and what it really takes to become an AI auditor.
    Mert shares his unique professional journey — from banking and finance, to career coaching and publishing, to becoming a leading figure in AI ethics and auditing. Now based in Istanbul, Mert plays a critical role in guiding and evaluating BABL AI certification students, including reviewing capstone projects and supporting professionals from a wide range of backgrounds.
    Together, Shea and Mert discuss:
    What makes BABL AI’s AI & Algorithm Auditor Certification different from other AI governance programs
    Whether you need a technical background to succeed in AI auditing
    The real-world demand for AI auditors and AI governance professionals
    Common career paths for certification graduates
    What students actually do in the capstone project (including LLM and generative AI use cases)
    How BABL AI’s certifications compare to other industry credentials
    An overview of BABL AI’s additional certification programs, including EU AI Act Quality Management Systems, AI Governance for Business Professionals, and AI for Legal Professionals
    This episode is both a behind-the-scenes look at BABL AI’s training philosophy and a practical guide for anyone considering a career in AI assurance, audit, or governance. Check out the babl.ai website for more stuff on AI Governance and
    Responsible AI!
  • Lunchtime BABLing with Dr. Shea Brown

    Diving into the AI Compliance Officer

    08/12/2025 | 42 mins.
    What does a Chief AI Compliance Officer actually do—and does your organization secretly need one already? 🤔

    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by co-hosts Jeffery Recker and Bryan Ilg to unpack what it really takes to own AI risk, compliance, and governance inside a modern organization. Drawing on BABL AI’s AI Compliance Officer Program and years of audit work, they break down the real pain points leaders are facing and how to move from confusion to a concrete plan.
    Whether you’ve just been handed “AI compliance” on top of your day job, or you’re building AI products and worried about regulations, this one’s for you.

    In this episode, they discuss:

    What a Chief AI Compliance Officer role looks like in practice
    – Why it often lands on general counsel, chief compliance officers, or chief AI officers
    – Why this work can’t be owned by one person alone

    The 3-part structure of BABL AI’s AI Compliance Officer Program

    AI foundations – Governance, AI management systems, policies, procedures, and documentation

    Fractional AI Compliance Officer support – Access to BABL’s research and audit team on an ongoing basis

    Continuous monitoring & measurement – Keeping up with self-learning, changing AI systems over time

    How to build an AI system inventory and triage risk

    – Simple rubric for identifying high, medium, and low-risk AI systems
    – When to treat a system as “high risk” by default
    – Why simplicity is the antidote to feeling overwhelmed

    Key AI risks every organization should know about

    – Data poisoning and how malicious instructions can sneak into your systems
    – Shadow AI (employees using unapproved tools like personal ChatGPT accounts)
    – Model & data drift and why “it worked when we launched it” isn’t good enough
    – How these risks connect to reputation, regulatory exposure, and business strategy

    Why governance, risk & compliance (GRC) is not a “brake” on innovation

    – How good governance actually lets you move faster and more confidently
    – The value of a “SWAT team” style AI compliance function vs. going it alone

    Who should watch/listen?

    General counsel, chief compliance officers, chief risk officers
    Chief AI / data / technology leaders

    Product owners building AI-powered tools

    Anyone who’s just been told: “You’re now responsible for AI compliance.” 🫠 Check out the babl.ai website for more stuff on AI Governance and
    Responsible AI!
  • Lunchtime BABLing with Dr. Shea Brown

    Implementing AI into Your Career

    24/11/2025 | 47 mins.
    In this follow-up to our episode on AI, training, and the job market, BABL AI CEO Dr. Shea Brown is joined again by COO Jeffery Recker and Chief of Staff Emily Brown to get practical about one big question:

    How do you actually implement AI into your career… without losing yourself (or your job) in the process?
    Whether you’re secure in your role, worried about layoffs, or actively changing careers, this episode focuses on tactical, realistic steps you can start taking this week.

    🎧 In this episode, we cover:

    How to start using large language models (LLMs) and agents in your day-to-day work
    Concrete examples for roles like lawyers, accountants, marketers, operations, HR, teachers, and journalists
    What to do if your manager or organization is afraid of AI (data leaks, reputation risk, etc.)
    How to avoid “AI slop” and become the person who provides clear, minimal, high-value outputs
    A practical plan if you’ve been laid off or see layoffs coming: dual-track job search + AI pivot
    Using AI ethically for resumes, ATS filters, and video interviews—without fabricating experience
    Why you should make an “AI inventory” of tools already in your life (spoiler: it’s more than you think)
    How to set boundaries with AI so it augments your work, not your identity or mental health
    Mindset shifts for people who don’t feel “technical” but still need to adapt Check out the babl.ai website for more stuff on AI Governance and
    Responsible AI!

More Business podcasts

About Lunchtime BABLing with Dr. Shea Brown

Presented by Babl AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems.
Podcast website

Listen to Lunchtime BABLing with Dr. Shea Brown, Founders and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Stations and podcasts to bookmark
  • Stream via Wi-Fi or Bluetooth
  • Supports Carplay & Android Auto
  • Many other app features
Social
v8.7.2 | © 2007-2026 radio.de GmbH
Generated: 3/14/2026 - 12:43:15 AM