
The Road to Accountable AI

Kevin Werbach

65 episodes

  • Katie Fowler (Thomson Reuters Foundation): How 3,000 Companies Approach AI Governance

    30/04/2026 | 37 mins.
    Good data about how companies are implementing AI governance programs is essential both for organizations to benchmark their efforts, and for observers to understand the state of development. In this episode, Katie Fowler, Director of Responsible Business at the Thomson Reuters Foundation, joins Kevin Werbach to discuss the findings of Responsible AI in Practice, a new report drawing on a global dataset of roughly 3,000 companies across 13 sectors.

    Fowler unpacks the report's central finding: an enormous gap between corporate AI ambition and operational governance, with 44 percent of companies reporting an AI strategy but only 13 percent publicly committing to a formal governance framework. She argues that the gap is structural rather than just a disclosure failure, noting that AI expertise often sits deep within technical teams rather than at the leadership levels responsible for organization-wide rollout. She points to striking regional variation in workforce protections and to the EU AI Act's emergence as a de facto global reference framework even outside Europe, and pushes back on the narrative that regulation stifles innovation. Looking forward, she discusses how investors are using transparency as a proxy for risk management in the absence of mature responsible AI metrics, and outlines the long-term vision of building a dataset robust enough to support a responsible AI index tied to financial materiality.
    Katie Fowler is Director of Responsible Business at the Thomson Reuters Foundation, the independent charity affiliated with Thomson Reuters. She leads initiatives including the Workforce Disclosure Initiative (a global platform collecting survey data on how companies treat workers across their direct operations and supply chains) and the AI Company Data Initiative, launched in partnership with UNESCO. Before joining the Foundation, Fowler held leadership roles at The Social Innovation Partnership and Chance for Childhood. 
    Transcript


    Responsible AI in Practice: 2025 Global Insights from the AI Company Data Initiative
    Why a Companywide Effort Is Key to Responsible and Trustworthy AI Adoption (Katie Fowler, techUK guest blog, 2025)
  • Henry Ajder, Latent Space Advisory: Deepfakes and the Crisis of Digital Trust

    23/04/2026 | 38 mins.
    AI-generated deepfakes are exploding in volume and quality, posing frightening challenges for public discourse, security, safety, and more. My guest, Henry Ajder, has been mapping the deepfake landscape since before most people had heard the term. In this conversation, he describes the dramatic changes in realism, efficiency, accessibility, and functionality of synthetic media tools since he published the first comprehensive census of deepfakes in 2019. Ajder describes the current moment as one of "epistemic nihilism," where people cannot reliably distinguish real from synthetic content and the available technological responses are not yet at a level of categorical trust. He introduces a framework of "deception, doubt, and degradation" for understanding deepfake harms, and draws a distinction between the clearly malicious, the clearly beneficial, and a vast unsettling middle ground of uses that society has not yet figured out how to evaluate.
    On the response side, Ajder warns that media literacy advice is not just outdated but actively harmful, because it gives people false confidence in their ability to spot fakes. Detection tools, watermarking, and content provenance standards like C2PA, while valuable, each have real limitations. Ajder's practical advice for organizations centers on red-teaming, understanding what your tool is actually for and who it serves, and recognizing that authenticity is a strategic asset in a synthetic age.
    Henry Ajder is the founder of Latent Space Advisory and one of the world's foremost experts on deepfakes and generative AI. He authored the landmark 2019 State of Deepfakes report, and has since advised organizations including Meta, Adobe, the UK Government, the EU Commission, the US FTC, and the World Economic Forum. He co-leads the University of Cambridge's Generative AI in Business programme, and sits on Meta's Reality Labs Advisory Council.
    Transcript

    Latent Space Advisory
    The State of Deepfakes: Landscape, Threats, and Impact (2019)
    The Future Will Be Synthesised (BBC Radio 4 Documentary Series, 2022)
  • Phil Dawson, Armilla AI: Insurance for AI Risks

    16/04/2026 | 30 mins.
    Could a private insurance market play a significant role in compensating for AI-related harms and incentivizing companies to engage in more effective AI governance?

    Phil Dawson of Armilla AI explains why AI insurance is emerging as a distinct product category, why traditional policies aren't effective at addressing AI risks, and what AI insurance actually covers. Dawson details Armilla's journey from AI testing and assurance provider to managing general agent for AI insurance policies, arguing that the company's AI audit experience gave it the risk data and evaluation capabilities needed to underwrite AI systems. A key turning point, he says, was realizing that as companies received reports showing how their models performed or underperformed, they became more concerned about risk, and insurance emerged as the next logical step to build trust.
    Dawson identifies the absence of claims data as the central challenge for AI underwriting, which forces insurers to rely on proxy signals. He argues that policymakers can help by incentivizing transparency, disclosure, and third-party assessment. Drawing on lessons from cyber insurance, Dawson contends that risk-based pricing must be grounded in system-level governance evaluation. He also describes Armilla's partnership program, which connects insured companies with AI governance platforms, auditing firms, and certification bodies, ultimately driving improved AI governance maturity across the sector.
    Philip Dawson is Head of AI Policy and Partnerships at Armilla AI, an MGA and Lloyd's cover holder that provides dedicated AI insurance products. A lawyer and public policy adviser, he has spent nearly a decade working on AI governance, including early involvement in the drafting of the OECD AI Principles and roles at Element AI, the United Nations, and the Harvard Kennedy School's Carr Center for Human Rights Policy.

    Transcript


    Ready or Not: The Impact of Artificial Intelligence on Insurance Risks (Armilla AI and Lockton, February 2026)
    Armilla AI Raises Lloyd's-Backed Coverage to $25M as Traditional Insurers Retreat from AI Risk (Fintech Finance News, January 22, 2026) 
    Gen AI Risks for Businesses: Exploring the Role for Insurance (Geneva Association, October 2, 2025)
  • Walter Haydock, StackAware: In Search of AI Governance Certification

    09/04/2026 | 32 mins.
    Walter Haydock draws a direct line from military risk management to the enterprise AI challenge. He argues that organizations need to stop doing "math with colors" and move toward quantitative assessment that assigns dollar values to potential AI failures. Much of the conversation in this episode focuses on ISO 42001, the global standard for AI management systems, which Haydock has championed and which his own firm has itself gone through. He draws a three-part taxonomy of AI governance frameworks: legislation you either comply with or don't, voluntary self-attestable frameworks like the NIST AI RMF, and externally certifiable standards like ISO 42001 that bring independent verification.

    Haydock outlines a forward-looking vision in which certification, insurance, and legal safe harbors reinforce one another. Machine-readable audit data will eventually allow insurers to make informed underwriting decisions about AI risk, reducing uncertainty for both enterprises and their customers. As he acknowledges, however, we are still far from that environment, with AI audits today remaining roughly 90 percent manual.
    Walter Haydock is the founder of StackAware, which helps AI-powered companies manage security, compliance, and privacy risk. Before entering the private sector, he served as a reconnaissance and intelligence officer in the U.S. Marine Corps, as a professional staff member for the Homeland Security Committee of the U.S. House of Representatives, and as an analyst at the National Counterterrorism Center. He is a graduate of the United States Naval Academy, Georgetown University's School of Foreign Service, and Harvard Business School.
    Transcript


    Deploy Securely (Haydock's Substack)
  • Richa Kaul, Complyance: Asking the Right Questions

    02/04/2026 | 33 mins.
    Richa Kaul breaks down the AI risk landscape for enterprises and argues that the key to managing those risks is resisting the urge to sensationalize. Kaul offers a candid assessment of where enterprise AI governance committees are falling short, noting that many lack the technical fluency to ask vendors the right questions, such as where customer data goes, whether it trains other clients' models, and what specific steps reduce hallucination. She suggests that market-driven security standards like SOC 2 and ISO 27001 often matter more in practice than government regulation, creating a "beautiful ecosystem" where risk management runs ahead of the law. Looking forward, she addresses the growing challenge of agentic AI systems that make decisions autonomously, offering a deceptively simple prescription: map every action an agent can take, know where your highest risk sits, identify the critical decision points, and demand human sign-off at each one.
    Richa Kaul is the founder and CEO of Complyance, an AI-native enterprise governance, risk, and compliance (GRC) platform. Before founding Complyance, she was Chief Strategy Officer at ContractPodAi, a legal technology company, and previously served as Managing Director at the Virginia Economic Development Partnership and as a management consultant at McKinsey.
    Transcript


    Complyance Raises $20M to Help Companies Manage Risk and Compliance (TechCrunch, February 11, 2026)


About The Road to Accountable AI

Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and in 2016 created one of the first business school courses on the legal and ethical considerations of AI. He interviews the experts and executives building accountable AI systems in the real world, today.