
The Chief AI Officer Show

Front Lines

41 episodes

  • The Chief AI Officer Show

    Why AI won't save media without fixing the infrastructure underneath

    09/04/2026 | 48 mins.
    What happens when a journalist turned Amazon product manager becomes the Chief AI Officer of one of the world's largest international broadcasters? You get someone who sees the AI threat to media not just as a distribution problem, but as a full production chain crisis that requires a fundamentally different organizational architecture.
    Marie Kilg, Chief AI Officer at Deutsche Welle, makes the case that legacy media's survival depends on something most AI transformation conversations ignore: data interoperability across systems that were never designed to talk to each other. With 32 languages, siloed editorial teams, and decades of layered organizational structure, Deutsche Welle's path to an AI-powered content flywheel starts at the infrastructure layer, not the model layer.
Topics discussed:
    Why AI threatens the full media production chain, not just distribution

    The flywheel model: feeding audience data back into editorial decisions

    Data interoperability as the core prerequisite for AI at scale in media

    Why "push a button and AI does it" expectations are damaging real implementation

    How metadata automation surfaces hidden infrastructure debt

    Organizational change mechanisms vs. culture change in large public broadcasters

    Tech companies underestimating journalism as a discipline
  • The Chief AI Officer Show

    AI Won't Break Your Security Program. Your Gaps Will.

    26/03/2026 | 45 mins.
    Most security leaders treat AI as a new threat category requiring new defenses. Rohit Parchuri, SVP and Chief Information Security Officer at Yext, pushes back hard on that. His argument: if your foundational controls are solid, AI does not require you to rebuild anything. What it does is amplify whatever you already have, gaps included, which makes the real question not "what new controls do we need?" but "how well are we actually executing on what we already built?"
    Rohit walks host Ben Gibert through how Yext is operationalizing this at scale: threat-modeling AI as just another system with inputs, processing, and outputs; building AI security testing directly into the existing CI/CD pipeline rather than standing it up as a separate track; investing heavily in data classification and taxonomy to solve DLP before deploying any AI tool internally; and establishing an AI Excellence Committee with cross-functional representation to run a single governance funnel across every AI request in the company. He also makes the case that the CISO who earns a seat at the AI strategy table is the one who deeply understands the business value chain, not just the threat landscape.
    Topics discussed:
    Threat-modeling AI as a system instead of a threat category

    Why existing security controls are sufficient for AI today

    Integrating AI security testing into CI/CD without adding process overhead

    Data classification and taxonomy as prerequisites for safe internal AI adoption

    Using an AI Bill of Materials as a transparency mechanism

    How Yext's AI Excellence Committee runs a single governance funnel

    Build vs. buy decision-making for AI security tooling

    What separates strategic CISOs from tactical operators in the age of AI

    The CISO's role in enabling AI adoption rather than blocking it
  • The Chief AI Officer Show

    Building AI agents that fix production incidents before engineers wake up

    12/03/2026 | 42 mins.
Diamond Bishop spent 15 years building AI systems at Microsoft (Cortana), Amazon (Alexa), and Facebook (PyTorch) before founding an AI DevOps startup that Datadog later acquired. Now running Datadog's AI Skunk Works, a deliberately small interdisciplinary team modeled on Lockheed's original, he's focused on a question most enterprise AI teams aren't asking yet: what does your product look like if humans are no longer the primary customer?
    That question drives everything from Bits AI, their production SRE and security agent, to a set of longer-range bets organized around three pillars: personalized agent learning, enterprise agent infrastructure, and eval. Diamond breaks down how he structures each one, why the demo-to-production gap comes down to data and eval rather than model capability, and where the real unsolved problems in agent development still sit.
    Topics discussed:
    Bits AI's capabilities in production across SRE incident response, security analysis and code generation

    Three-pillar agent development framework: personalized learning, enterprise infrastructure and eval

    LoRA-style adapter architecture for layering custom per-user agents on top of first-party agents

    Why SRE agent startups without proprietary observability data face a structural disadvantage at production scale

    Service graph and entity relationship context as a structured alternative to RAG for DevOps agents

    Skunk Works team design: staying small and interdisciplinary to move like a startup inside a public company

    The shift from human-operated cloud services to ambient AI-native services built to run with fewer humans over time

    Crawl-walk-run path for enterprise agent adoption: from LangGraph-based Python agents to continuously learning systems

    Why concentrating AI research investment in transformer scaling creates long-term architectural risk

    Building agent-native tooling rather than repurposing interfaces designed for humans
  • The Chief AI Officer Show

    How Xoriant ties compensation to AI metrics: The revenue, margin, and brand multiple framework

    26/02/2026 | 46 mins.
    Most enterprise AI initiatives die in pilot purgatory because organizations chase peripheral use cases instead of embedding AI into core business processes. Vineet Moroney, Chief Transformation Officer at Xoriant, a 6,000-person engineering services firm, has built a measurement system that eliminates this problem: tie AI directly to three financial metrics (revenue, margin, brand multiple) and make 50% of performance bonuses dependent on them.
    His framework separates AI revenue into two categories: "with AI" (AI-led service transformation like platform modernization) and "for AI" (building AI capabilities on customer platforms). AI margin captures efficiency gains from tool usage that improve project delivery economics. AI multiple quantifies brand value and downstream revenue from innovative deployments. This structure forces teams to distinguish between projects that matter and expensive experiments.
    When Xoriant's CFO wanted to reduce Days Sales Outstanding, Vineet built an invoice payment prediction model at 87% accuracy that eliminated a five-person AR team and cut DSO by two days. The solution required no expensive models, just strategic business case selection. For manufacturing clients, he's deploying edge AI on legacy sensor infrastructure for predictive maintenance without sensor replacement, creating new service revenue streams from installed equipment bases.
Topics discussed:
Three-part AI revenue model distinguishing "with AI" service transformation from "for AI" capability building on customer platforms

Compensation structure allocating 50% of performance bonuses across AI revenue generation, margin improvement, and brand multiple

The EXB framework quantifying AI returns through efficiency gains, experience improvements via customer lifetime value, and business impact from downstream revenue

Two-week POC to 90-day production methodology with AI assurance testing protocols for non-deterministic system validation

Five prerequisite elements for POC survival: strategic alignment, C-suite sponsorship, urgent business need, allocated budget, and core process focus

Edge AI monetization on legacy sensor infrastructure for predictive maintenance and service offering creation without hardware replacement

Invoice payment prediction at 87% accuracy reducing five-person AR teams to single-person operations while cutting DSO by two days

Why golden dataset POCs fail at scale due to latency, inconsistency, and infrastructure readiness gaps

Sales approach for skeptical executives: lead with customer pain points, prove with similar completed work, commit to rapid production timelines

Middle management resistance as the primary adoption barrier despite CEO enthusiasm and junior staff willingness to adopt AI tools
  • The Chief AI Officer Show

    The infrastructure mistake that kills AI pilots: Why sandboxes can't reach enterprise data centers

    12/02/2026 | 43 mins.
    Lenovo cut parts planning from six hours to 90 seconds by treating infrastructure architecture as a first-class constraint, not an afterthought. Linda Yao, VP and GM of Hybrid Cloud and AI Solutions, has deployed AI across manufacturing, healthcare diagnostics, and enterprise operations. Her core thesis: most organizations fail at scale not because of use cases or data quality, but because they architect pilots in sandboxes that can't translate to production enterprise data centers.
Through Lenovo's internal deployments and customer implementations, Yao has built a systematic approach to moving past experimentation. Her team developed what they call an AI library of battle-tested use cases with proven deployment architectures, from computer vision systems that augment special education therapists to diagnostic tools preventing blindness in underserved regions. The methodology centers on a critical insight: ongoing monitoring and model management represent the capability gap that causes implementations to plateau after initial deployment.
    Topics discussed:
    Five-stage methodology where ongoing monitoring of drift, model updates, and agent evolution separates successful deployments from stalled pilots

    Infrastructure architecture coherence requirement between pilot and production environments to enable actual scaling

    Enterprise planning agents orchestrating across personal wellness, workload management, and digital employee experience using full device stack ownership

AI factory model for rapid diagnostic tool development and field distribution in resource-constrained healthcare settings

Hybrid deployment trend reversing the decade-long cloud-first mentality due to data governance and compliance requirements

Four-pillar readiness assessment covering security, data quality, people capability, and technology infrastructure before deployment

Build-leverage-partner philosophy for full-stack integration with pre-tested component validation and reference architectures

    Liquid cooling technology deployment addressing GPU energy consumption and data center sustainability constraints at scale

About The Chief AI Officer Show

The Chief AI Officer Show bridges the gap between enterprise buyers and AI innovators. Through candid conversations with leading Chief AI Officers and startup founders, we unpack the real stories behind AI deployment and sales. Get practical insights from those pioneering AI adoption and building tomorrow’s breakthrough solutions.