
The Chief AI Officer Show

Front Lines

36 episodes

  • How incident.io built AI agents that draft code fixes within 3 minutes of an alert

    29/1/2026 | 44 mins.
    Lawrence Jones, product engineer at incident.io, describes how their AI incident response system evolved from basic log summaries to agents that analyze thousands of GitHub PRs and Slack messages to draft remediation pull requests within three minutes of an alert firing. The system doesn't pursue full automation because the real value lies elsewhere: eliminating the diagnostic work that consumes the first 30-60 minutes of incident response, and filtering out the false positives that wake engineers unnecessarily at 3am.
    The core architectural decision treats each organization's incident history as a unique immune system rather than fitting generic playbooks. By pre-processing and indexing how a specific company has resolved incidents across dimensions like affected teams, error patterns, and system dependencies, incident.io generates ephemeral runbooks that surface the 3-4 commands that actually worked last time this type of failure occurred. This approach emerged from recognizing that cross-customer meta-models fail because incident response is fundamentally organization-specific: one company's SEV-0 is an airline bankruptcy, another's is a stolen laptop.
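    The episode notes don't spell out the index shape, but the idea is concrete enough to sketch. Below is a minimal, hypothetical Go version of the lookup: pre-processed incidents keyed by team and error pattern, filtered by what the organization later learned not to repeat, and flattened into an ephemeral runbook. Every type and field name here is invented for illustration, not incident.io's actual schema.

```go
package runbook

import "strings"

// ResolvedIncident is a hypothetical record distilled from a past
// incident's PRs, Slack threads, and transcripts during pre-processing.
type ResolvedIncident struct {
	Team         string   // owning team at resolution time
	ErrorPattern string   // normalized error signature, e.g. "pg: too many connections"
	Dependencies []string // systems implicated in the incident
	Commands     []string // the handful of commands that actually fixed it
	Deprecated   bool     // true if a later incident showed this fix was wrong
}

// EphemeralRunbook collects the commands that worked for similar past
// incidents, skipping fixes the organization later learned not to repeat.
func EphemeralRunbook(history []ResolvedIncident, team, pattern string) []string {
	var commands []string
	for _, inc := range history {
		if inc.Deprecated {
			continue // the org learned NOT to do this in a subsequent incident
		}
		if inc.Team == team && strings.Contains(inc.ErrorPattern, pattern) {
			commands = append(commands, inc.Commands...)
		}
	}
	return commands
}
```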
    The engineering challenge centers on building trust with deeply skeptical SRE teams who view AI as non-deterministic chaos in their deterministic infrastructure. Lawrence's team addresses this through custom Go tooling that enables backtest-driven development: they rerun thousands of historical investigations with different model configurations and prompt changes, then use precision-focused scorecards to prove improvements objectively before deploying. This workflow revealed that traditional product engineers struggle with AI's slow evaluation cycles, while the team succeeded by hiring for methodical ownership over velocity.
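    The backtest workflow is the most transferable idea in the episode, so here is a minimal sketch of what such a harness could look like in Go. incident.io's real tooling is private; Investigation, Config, and the run callback are stand-ins. The shape is what matters: replay labeled historical cases under a candidate configuration and compare precision-focused scorecards before shipping a prompt change.

```go
package backtest

// Investigation is a hypothetical historical case with a known-good answer.
type Investigation struct {
	AlertID       string
	ExpectedCause string
}

// Config stands in for a model + prompt variant under test.
type Config struct {
	Model  string
	Prompt string
}

// Scorecard tracks precision: of the causes the agent asserted, how many
// were right. Precision is favored because a confident wrong diagnosis
// costs SRE trust more than an abstention does.
type Scorecard struct {
	Asserted, Correct int
}

func (s Scorecard) Precision() float64 {
	if s.Asserted == 0 {
		return 0
	}
	return float64(s.Correct) / float64(s.Asserted)
}

// Rerun replays every historical investigation under cfg. The run
// callback is the agent pipeline; it returns the asserted root cause,
// or "" when the agent declines to answer.
func Rerun(cases []Investigation, cfg Config, run func(Config, Investigation) string) Scorecard {
	var s Scorecard
	for _, c := range cases {
		got := run(cfg, c)
		if got == "" {
			continue // abstaining doesn't hurt precision
		}
		s.Asserted++
		if got == c.ExpectedCause {
			s.Correct++
		}
	}
	return s
}
```

    Comparing Rerun(cases, baseline, run).Precision() against Rerun(cases, candidate, run).Precision() is the objective proof-of-improvement step the episode describes.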
    Topics discussed:
    Balancing precision versus recall in agent outputs to earn trust from SRE teams who are "hardcore AI holdouts"

    Pre-processing incident artifacts (PRs, Slack threads, transcripts) into queryable indexes that cross-reference team ownership, system dependencies, and historical resolution patterns

    Model selection strategy: GPT-4.1 for cost-effective daily operations, Claude Sonnet for superior code analysis and agentic planning loops

    Backtest infrastructure that reruns thousands of past investigations with modified prompts to objectively validate changes through scorecard comparisons

    Building ephemeral runbooks by extracting which historical commands and fixes worked for similar incidents, filtered by what the organization learned NOT to do in subsequent incidents

    Prioritizing alert noise reduction over autonomous remediation because the false positive problem has clearer ROI and lower risk

    Why AI engineering teams fail when staffed with traditional engineers optimized for fast feedback loops rather than tolerance for non-deterministic iteration

    Building entirely custom tooling in Go without vendor frameworks due to early ecosystem constraints and desire for native product integration

    The evaluation problem where only engineers who invested hundreds of hours building a system can predict how prompt changes cascade through multi-step agentic workflows
  • Building AI agents for infrastructure where one mistake makes Wall Street Journal headlines

    16/1/2026 | 47 mins.
Alexander Page transitioned from sales engineer to engineering director by prototyping LLM applications after ChatGPT's launch, moving from initial prototype to customer GA in under four months. At BigPanda, he's building Biggie, an AIOps co-pilot where reliability isn't negotiable: a wrong automation execution at a major bank could make headlines.

BigPanda's core platform correlates alerts from 10-50 monitoring tools per customer into unified incidents. Biggie operates at L2/L3 escalation: investigating root causes through live system queries, surfacing remediation options from Ansible playbooks, and managing incident workflows. The architecture challenge is building agents that traverse ServiceNow, Dynatrace, New Relic, and other APIs while maintaining human approval gates for any write operations in production environments.
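    The human-approval-gate pattern described above is simple to make concrete. Here is a minimal Go sketch with hypothetical names (Biggie's internals aren't public): reads execute immediately, while any mutating action in production blocks on an explicit human decision.

```go
package gates

import "errors"

// Action is anything an agent wants to execute against a monitored system.
type Action struct {
	Tool    string // e.g. "ansible", "servicenow"
	Command string
	Mutates bool // true for any write operation
}

// Approver is whatever human-in-the-loop channel a deployment uses
// (a Slack approval, a ticket, a console click).
type Approver interface {
	Approve(a Action) (bool, error)
}

// Execute runs reads immediately but refuses to perform any write in
// production until a human has explicitly approved it.
func Execute(a Action, prod bool, approver Approver, run func(Action) error) error {
	if a.Mutates && prod {
		ok, err := approver.Approve(a)
		if err != nil {
			return err
		}
		if !ok {
			return errors.New("write action rejected by human approver")
		}
	}
	return run(a)
}
```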

    Page's team invested months building a dedicated multi-agent system (15-20 steps with nested agent teams) solely for knowledge graph operations. The insertion pipeline transforms unstructured data like Slack threads, call transcripts, and technical PDFs with images into graph representations, validating against existing state before committing changes. This architectural discipline makes retrieval straightforward and enables users to correct outdated context directly, updating graph relationships in real-time. Where vector search finds similar past incidents, the knowledge graph traces server dependencies to surface common root causes across connected infrastructure.
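    The validate-before-commit discipline in that insertion pipeline can be sketched in a few lines. The Go below is a toy, not BigPanda's system: extracted triples are checked against existing graph state, and contradictions are surfaced for review instead of silently overwriting what may be the correct, current fact.

```go
package kg

import "fmt"

// Triple is a single graph assertion extracted from unstructured input
// (a Slack thread, a call transcript, a technical PDF).
type Triple struct {
	Subject, Predicate, Object string
}

// Graph is a toy store keyed by subject+predicate; a real deployment
// would sit on a graph database.
type Graph struct {
	edges map[string]string
}

func NewGraph() *Graph { return &Graph{edges: make(map[string]string)} }

func key(t Triple) string { return t.Subject + "|" + t.Predicate }

// Insert validates each extracted triple against existing state before
// committing: contradictions are reported rather than silently
// overwritten, which is what keeps later retrieval trustworthy.
func (g *Graph) Insert(extracted []Triple) []error {
	var conflicts []error
	for _, t := range extracted {
		if old, ok := g.edges[key(t)]; ok && old != t.Object {
			conflicts = append(conflicts, fmt.Errorf(
				"%s %s: have %q, extraction says %q; needs review",
				t.Subject, t.Predicate, old, t.Object))
			continue
		}
		g.edges[key(t)] = t.Object
	}
	return conflicts
}
```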

    Topics discussed:

Moving LLM prototypes to production in months during the GPT-3.5 era by focusing on customer design partnerships

    Evaluating agentic systems by validating execution paths rather than response outputs in non-deterministic environments

    Building tool-specific agents for monitoring platforms lacking native MCP implementations

    Architecting multi-agent knowledge graph insertion systems that validate state before write operations

    Implementing approval workflows for automation execution in high-consequence infrastructure environments

Designing RAG retrieval using fusion techniques, hypothetical document embeddings, and re-representation at indexing (see the rank-fusion sketch after this list)

    Scaling design partnerships as extended product development without losing broader market applicability

    Separating read-only investigation agents from write-capable automation agents based on failure consequence modeling
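    Of the retrieval techniques listed above, rank fusion has a standard closed form worth showing. Below is a minimal reciprocal rank fusion (RRF) sketch in Go, independent of anything BigPanda ships: it merges the ranked outputs of, say, vector and keyword search, scoring each document by the sum of 1/(k + rank) across lists, with k conventionally set to 60.

```go
package rag

import "sort"

// FuseRRF merges several ranked result lists (for example, vector search
// and keyword search over the same corpus) with reciprocal rank fusion:
// score(doc) = sum over lists of 1 / (k + rank(doc)).
func FuseRRF(rankings [][]string, k float64) []string {
	scores := make(map[string]float64)
	for _, list := range rankings {
		for i, doc := range list {
			scores[doc] += 1.0 / (k + float64(i+1)) // ranks are 1-based
		}
	}
	fused := make([]string, 0, len(scores))
	for doc := range scores {
		fused = append(fused, doc)
	}
	// Highest combined score first.
	sort.Slice(fused, func(a, b int) bool { return scores[fused[a]] > scores[fused[b]] })
	return fused
}
```

    A call like FuseRRF([][]string{vectorHits, keywordHits}, 60) rewards documents that rank well in both lists without requiring their raw scores to be comparable.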
  • ACC’s Dr. Ami Bhatt: AI Pilots Fail Without Implementation Planning

    18/12/2025 | 44 mins.
Dr. Ami Bhatt's team at the American College of Cardiology found that most FDA-approved cardiovascular AI tools sit unused within three years of approval. The barrier isn't regulatory approval or technical accuracy. It's implementation infrastructure. Without deployment workflows, communication campaigns, and technical integration planning, even validated tools fail at scale.

    Bhatt distinguishes "collaborative intelligence" from "augmented intelligence" because collaboration acknowledges that physicians must co-design algorithms, determine deployment contexts, and iterate on outputs that won't be 100% correct. Augmentation falsely suggests AI works flawlessly out of the box, setting unrealistic expectations that kill adoption when tools underperform in production.

    Her risk stratification approach prioritizes low-risk patients with high population impact over complex diagnostics. Newly diagnosed hypertension patients (affecting 1 in 2 people, 60% undiagnosed) are clinically low-risk today but drive massive long-term costs if untreated. These populations deliver better ROI than edge cases but require moving from episodic hospital care to continuous monitoring infrastructure that most health systems lack.

    Topics discussed:

    Risk stratification methodology prioritizing low-risk, high-impact patient populations

    Infrastructure gaps between FDA approval and scaled deployment

    Real-world evidence approaches for AI validation in lower-risk categories

    Synthetic data sets from cardiovascular registries for external company testing

    Administrative workflow automation through voice-to-text and prior authorization tools

    Apple Watch data integration protocols solving wearable ingestion problems

    Three-part startup evaluation: domain expertise, technical iteration capacity, implementation planning

    Real-time triage systems reordering diagnostic queues by urgency
  • UserTesting's Michael Domanic: Hallucination Fears Mean You're Building Assistants, Not Thought Partners

    04/12/2025 | 39 mins.
UserTesting deployed 700+ custom GPTs across 800 employees, but Michael Domanic's core insight cuts against conventional wisdom: organizations fixated on hallucination risks are solving the wrong problem. That concern reveals they're building assistants for summarization, when the transformational value lies in using AI as a strategic thought partner. This reframe shifts the evaluation criteria entirely.

Michael connects today's moment to 2015's Facebook Messenger bot collapse, when Wit.ai integration promised conversational commerce that fell flat. The inversion matters: that cycle failed because NLP couldn't meet expectations shaped by decades of sci-fi. Today, foundation models outpace organizational capacity to deploy them responsibly, creating an obligation to guide employees through the transformation rather than just chase efficiency.

His vendor evaluation cuts through conference-floor noise. When teams pitch solutions, his first question is: can we build this with a custom GPT in 20 minutes? Most pitches are wrappers that don't justify a $40K spend. For legitimate orchestration needs, security standards and low-code accessibility matter more than demos.

    Topics discussed:

    Using AI as thought partner for strategic problem-solving versus summarization and content generation tasks

    Deploying custom GPTs at scale through OKR-building tools that demonstrated broad organizational application

    Treating conscientious objectors as essential partners in responsible deployment rather than adoption blockers

    Filtering vendor pitches by testing whether custom GPT builds deliver equivalent functionality first

    Prioritizing previously impossible work over operational efficiency when setting transformation strategy

    Building agent chains for customer churn signal monitoring while maintaining human decision authority

    Implementing security-first evaluation for enterprise orchestration platforms with low-code requirements

    Creating automated AI news digests using agent workflows and Notebook LM audio synthesis
  • Christian Napier on Government AI Deployment: Why Productivity Tools Worked but Chatbots Didn't

    20/11/2025 | 45 mins.
    Utah's tax chatbot pilot exposed the non-deterministic problem every enterprise faces: initial LLM accuracy hit 65-70% when judged by expert panels, with another 20-25% partially correct. After months of iteration, three of four vendors delivered strong enough results for Utah to make a vendor selection and begin production deployment. Christian Napier, Director of AI for Utah's Division of Technology Services, explains why the gap between proof of concept and production is where AI budgets and timelines collapse.
    His team deployed Gemini across state agencies with over 9,000 active users collectively saving nearly 12,000 hours per week. Meanwhile, agency-specific knowledge chatbots struggle with optional adoption, competing against decades of human expertise.
    The bigger constraint isn't technical. Vendor quotes for the same citizen-facing solution dropped from eight figures to five during negotiations as pricing models shifted. When procurement cycles run 18 months and foundation models deprecate quarterly, traditional budgeting breaks.
    Topics discussed:
    Expert panel evaluation methodology for testing LLM accuracy in regulated tax advice scenarios

    Low-code AI platforms reaching capability limits on complex use cases requiring pro-code solutions

    Avoiding $5 million in potential annual licensing costs through Google Workspace AI integration timing

    Tracking self-reported productivity gains of 12,000 hours weekly across 9,000 active users

    AI Factory process requiring privacy impact assessments and security reviews before any pilots

    Vendor pricing dropping from eight-figure to five-figure quotes as commercial models evolved

    Forcing adoption through infrastructure replacement when legacy HR platform went read-only

    Separating automation opportunities from optional tools competing with existing workflows

    Digital identity requirements for future agent-to-government transactions and authorization

    Regulatory relief exploration for AI applications in licensed professions like mental health


About The Chief AI Officer Show

The Chief AI Officer Show bridges the gap between enterprise buyers and AI innovators. Through candid conversations with leading Chief AI Officers and startup founders, we unpack the real stories behind AI deployment and sales. Get practical insights from those pioneering AI adoption and building tomorrow’s breakthrough solutions.