
The Chief AI Officer Show

Front Lines

33 episodes

  • ACC’s Dr. Ami Bhatt: AI Pilots Fail Without Implementation Planning

    18/12/2025 | 44 mins.

    Dr. Ami Bhatt's team at the American College of Cardiology found that most FDA-approved cardiovascular AI tools sit unused within three years. The barrier isn't regulatory approval or technical accuracy. It's implementation infrastructure. Without deployment workflows, communication campaigns, and technical integration planning, even validated tools fail at scale. Bhatt distinguishes "collaborative intelligence" from "augmented intelligence" because collaboration acknowledges that physicians must co-design algorithms, determine deployment contexts, and iterate on outputs that won't be 100% correct. Augmentation falsely suggests AI works flawlessly out of the box, setting unrealistic expectations that kill adoption when tools underperform in production.

    Her risk stratification approach prioritizes low-risk patients with high population impact over complex diagnostics; a minimal sketch of that prioritization logic follows the topic list below. Newly diagnosed hypertension patients are clinically low-risk today but drive massive long-term costs if untreated (hypertension affects roughly 1 in 2 people, and 60% of cases go undiagnosed). These populations deliver better ROI than edge cases, but they require moving from episodic hospital care to continuous monitoring infrastructure that most health systems lack.

    Topics discussed:
    • Risk stratification methodology prioritizing low-risk, high-impact patient populations
    • Infrastructure gaps between FDA approval and scaled deployment
    • Real-world evidence approaches for AI validation in lower-risk categories
    • Synthetic data sets from cardiovascular registries for external company testing
    • Administrative workflow automation through voice-to-text and prior authorization tools
    • Apple Watch data integration protocols solving wearable ingestion problems
    • Three-part startup evaluation: domain expertise, technical iteration capacity, implementation planning
    • Real-time triage systems reordering diagnostic queues by urgency
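    The prioritization Bhatt describes can be illustrated with a small sketch: rank patient cohorts by expected population-level impact (prevalence, undiagnosed share, and long-term cost combined) rather than by individual acuity. Only the hypertension prevalence and undiagnosed figures come from the episode summary; the cohort list, the per-1,000-lives normalization, and all cost figures below are placeholder assumptions for illustration.

```python
# Hypothetical sketch of population-impact prioritization: rank cohorts by
# expected avoidable cost at the population level, not by individual risk.
from dataclasses import dataclass

@dataclass
class Cohort:
    name: str
    prevalence: float        # fraction of the served population affected
    undiagnosed_rate: float  # fraction not yet identified
    long_term_cost: float    # illustrative per-patient cost if untreated ($)
    individual_risk: str     # clinical risk today: "low" / "medium" / "high"

def impact_score(c: Cohort) -> float:
    """Expected avoidable cost per 1,000 covered lives (illustrative)."""
    return 1000 * c.prevalence * c.undiagnosed_rate * c.long_term_cost

cohorts = [
    # Prevalence and undiagnosed rate for hypertension follow the episode
    # summary (1 in 2 people, 60% undiagnosed); all other numbers are made up.
    Cohort("newly diagnosed hypertension", 0.50, 0.60, 12_000, "low"),
    Cohort("complex structural heart disease", 0.002, 0.10, 90_000, "high"),
]

for c in sorted(cohorts, key=impact_score, reverse=True):
    print(f"{c.name}: ${impact_score(c):,.0f} avoidable cost per 1,000 lives "
          f"(individual risk today: {c.individual_risk})")
```

    Under these placeholder numbers the low-risk, high-prevalence cohort dominates the ranking, which is the point of the framework: population impact, not per-patient acuity, drives deployment priority.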

  • UserTesting's Michael Domanic: Hallucination Fears Mean You're Building Assistants, Not Thought Partners

    04/12/2025 | 39 mins.

    UserTesting deployed 700+ custom GPTs across 800 employees, but Michael Domanic's core insight cuts against conventional wisdom: organizations fixated on hallucination risks are solving the wrong problem. That concern reveals they're building assistants for summarization, when the transformational value lies in using AI as a strategic thought partner. This reframe shifts evaluation criteria entirely. Domanic connects today's moment to 2015's Facebook Messenger bot collapse, when Wit.ai integration promised conversational commerce that fell flat. The inversion matters: that cycle failed because NLP couldn't meet expectations shaped by decades of sci-fi, whereas today foundation models outpace organizational capacity to deploy them responsibly, creating an obligation to guide employees through transformation rather than just chase efficiency.

    His vendor evaluation cuts through conference-floor noise. When teams pitch solutions, the first question is: can we build this with a custom GPT in 20 minutes? Most pitches are wrappers that don't justify a $40K spend. For legitimate orchestration needs, security standards and low-code accessibility matter more than demos.

    Topics discussed:
    • Using AI as a thought partner for strategic problem-solving versus summarization and content generation tasks
    • Deploying custom GPTs at scale through OKR-building tools that demonstrated broad organizational application
    • Treating conscientious objectors as essential partners in responsible deployment rather than adoption blockers
    • Filtering vendor pitches by testing whether custom GPT builds deliver equivalent functionality first
    • Prioritizing previously impossible work over operational efficiency when setting transformation strategy
    • Building agent chains for customer churn signal monitoring while maintaining human decision authority
    • Implementing security-first evaluation for enterprise orchestration platforms with low-code requirements
    • Creating automated AI news digests using agent workflows and NotebookLM audio synthesis

  • Extreme's Markus Nispel On Agent Governance: 3 Controls For Production Autonomy

    06/11/2025 | 42 mins.

    Extreme Networks architected their AI platform around a fundamental tension: deploying non-deterministic generative models to manage deterministic network infrastructure where reliability is non-negotiable. Markus Nispel, CTO EMEA and Head of AI Engineering, details their evolution from 2018 AIOps implementations to production multi-agent systems that analyze event correlations impossible for human operators and automatically generate support tickets. Their ARC framework (Acceleration, Replacement, Creation) separates mandatory automation from competitive differentiation by isolating the truly differentiating use cases in the "creation" category, where ROI discussions become simpler and competitive positioning strengthens.

    The governance architecture solves the trust problem for autonomous systems in production environments. Agents inherit user permissions, with three-layer controls: deployment scope (infrastructure boundaries), action scope (operation restrictions), and autonomy level (human-in-the-loop requirements); a minimal sketch of such a policy check follows the topic list below. Exposing the full reasoning and planning chain before execution creates audit trails while building operator confidence. Their organizational shift from centralized AI teams to an "AI mesh" structure pushes domain ownership to business units while maintaining a unified data architecture, enabling agent systems that can draw on diverse data sources across operational, support, supply chain, and contract domains.

    Topics discussed:
    • ARC framework categorizing use cases by Acceleration, Replacement, and Creation to focus resources on differentiation
    • Three-dimension agent governance: deployment scope, action scope, and autonomy levels with inherited user permissions
    • Exposing agent reasoning, planning, and execution chains for production transparency and audit requirements
    • AI mesh organizational model distributing domain ownership while maintaining centralized data architecture
    • Pre-production SME validation versus post-deployment behavioral analytics for accuracy measurement
    • 90% reduction in time-to-knowledge through RAG systems accessing tens of thousands of documentation pages
    • Build-versus-buy decisions anchored to competitive differentiation and willingness to rebuild every six months
    • Strategic data architecture enabling cross-domain agent capabilities combining operational, support, and business data
    • Agent interoperability protocols including MCP and A2A for cross-enterprise collaboration
    • Production metrics tracking user rephrasing patterns, sentiment analysis, and intent understanding for accuracy
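    The three controls Nispel outlines map naturally onto a policy object checked before any agent action executes, with the exposed reasoning chain retained for the audit trail. The sketch below is illustrative only: the class names, fields, and approval flow are assumptions, not Extreme Networks' implementation.

```python
# Illustrative three-layer governance check: deployment scope, action scope,
# and autonomy level, with permissions inherited from the invoking user.
# All names and the approval flow are assumptions, not Extreme's actual API.
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    SUGGEST_ONLY = 1      # agent proposes, a human executes
    HUMAN_APPROVAL = 2    # agent executes only after explicit approval
    FULLY_AUTONOMOUS = 3  # agent executes and reports afterwards

@dataclass
class GovernancePolicy:
    deployment_scope: set[str]  # infrastructure boundaries, e.g. {"site:emea-lab"}
    action_scope: set[str]      # permitted operations, e.g. {"read", "open_ticket"}
    autonomy: Autonomy

@dataclass
class ProposedAction:
    target: str                 # e.g. "site:emea-lab/switch-42"
    operation: str              # e.g. "open_ticket"
    reasoning_chain: list[str]  # exposed plan, kept for the audit trail

def authorize(policy: GovernancePolicy, action: ProposedAction,
              human_approved: bool = False) -> bool:
    """Allow the action only if all three control layers pass."""
    in_scope = any(action.target.startswith(b) for b in policy.deployment_scope)
    allowed_op = action.operation in policy.action_scope
    if not (in_scope and allowed_op):
        return False
    if policy.autonomy is Autonomy.FULLY_AUTONOMOUS:
        return True
    if policy.autonomy is Autonomy.HUMAN_APPROVAL:
        return human_approved
    return False  # SUGGEST_ONLY never executes directly

policy = GovernancePolicy({"site:emea-lab"}, {"read", "open_ticket"},
                          Autonomy.HUMAN_APPROVAL)
ticket = ProposedAction("site:emea-lab/switch-42", "open_ticket",
                        ["correlate link-flap events", "draft support ticket"])
print(authorize(policy, ticket))                       # False: approval pending
print(authorize(policy, ticket, human_approved=True))  # True
```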

  • Edge AI Foundation's Pete Bernard on an Edge-First Framework: Eliminate Cloud Tax Running AI On Site

    16/10/2025 | 43 mins.

    Pete Bernard, CEO of the Edge AI Foundation, breaks down why enterprises should default to running AI at the edge rather than in the cloud, citing real deployments: quick-service restaurant (QSR) systems that count parking-lot cars to auto-trigger french fry production, and medical implants that autonomously adjust deep brain stimulation for Parkinson's patients. He also shares contrarian views on IoT's past failures and how they shaped today's cloud-native approach to managing edge devices.

    Topics discussed:
    • Edge-first architectural decision framework: run AI where data is created to eliminate cloud costs (ingress, egress, connectivity, latency); a cost-comparison sketch follows this list
    • Market growth projections reaching $80 billion annually by 2030 for edge AI deployments across industries
    • Hardware constraints driving deployment decisions: fanless systems for dusty environments, intrinsically safe devices for hazardous locations
    • Self-tuning deep brain stimulation implants measuring electrical signals and adjusting treatment autonomously, powered for decades without external intervention
    • Why Bernard considers Amazon Alexa "the single worst thing to ever happen to IoT" for creating widespread skepticism
    • Solar-powered edge cameras reducing pedestrian fatalities in San Jose and Colorado without infrastructure teardown
    • Generative AI interpreting sensor fusion data, enabling natural language queries of hospital telemetry and industrial equipment health
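    Bernard's edge-first default can be read as a recurring-cost comparison: if the monthly cost of moving raw data to and from the cloud exceeds the amortized cost of local inference hardware, the workload belongs at the edge. The cost categories follow the episode summary (ingress, egress, connectivity, latency); every number and the break-even rule below are placeholder assumptions.

```python
# Illustrative edge-vs-cloud break-even sketch. Cost categories mirror the
# episode summary; all figures are placeholder assumptions, not benchmarks.

def monthly_cloud_cost(gb_per_month: float, ingress_per_gb: float,
                       egress_per_gb: float, connectivity_flat: float,
                       latency_penalty: float) -> float:
    """Recurring cost of shipping raw data to the cloud and back."""
    return (gb_per_month * (ingress_per_gb + egress_per_gb)
            + connectivity_flat + latency_penalty)

def monthly_edge_cost(hardware_cost: float, amortization_months: int,
                      maintenance: float) -> float:
    """Amortized cost of running inference on site."""
    return hardware_cost / amortization_months + maintenance

cloud = monthly_cloud_cost(gb_per_month=2_000, ingress_per_gb=0.05,
                           egress_per_gb=0.09, connectivity_flat=300.0,
                           latency_penalty=150.0)
edge = monthly_edge_cost(hardware_cost=6_000, amortization_months=36,
                         maintenance=40.0)

verdict = "run at the edge" if edge < cloud else "keep in the cloud"
print(f"cloud: ${cloud:,.0f}/month  edge: ${edge:,.0f}/month  -> {verdict}")
```

    With these placeholder figures the edge side wins comfortably; the useful part is the shape of the comparison, not the numbers.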

  • PATH's Bilal Mateen on the Measurement Problem Stalling Healthcare AI

    02/10/2025 | 37 mins.

    PATH's Chief AI Officer Bilal Mateen reveals how a computer vision tool that digitizes lab documents cut processing time from 90 days to 1 day in Kenya, yet vendors keep pitching clinical decision support systems instead of the operational solutions that actually move the needle. Thirty years elapsed between FDA approval of breast cancer AI diagnostics and the first randomized controlled trial proving patient benefit; Mateen argues we've been measuring the wrong things: diagnostic accuracy instead of downstream health outcomes. His team's Kenya pilot with Penda Health demonstrated cash-releasing ROI through an LLM co-pilot that prevented inappropriate prescriptions, saving patients and insurers $50,000 in unnecessary antibiotics and steroids. What looks like lost revenue to the clinic represents system-wide healthcare savings.

    Topics discussed:
    • The 90-day to 1-day document digitization transformation in Kenya
    • Research showing only 1 in 20 improved diagnostic tests benefit patients
    • Cash-releasing versus non-cash-releasing efficiency gains framework
    • The 30-year gap between FDA approval and proven patient outcomes
    • Why digital infrastructure investment beats diagnostic AI development
    • Hidden costs of scaling pilots across entire health systems
    • How inappropriate prescription prevention creates system-wide savings
    • Why operational AI beats clinical decision support in resource-constrained settings


About The Chief AI Officer Show

The Chief AI Officer Show bridges the gap between enterprise buyers and AI innovators. Through candid conversations with leading Chief AI Officers and startup founders, we unpack the real stories behind AI deployment and sales. Get practical insights from those pioneering AI adoption and building tomorrow’s breakthrough solutions.


