
For Humanity: An AI Risk Podcast

The AI Risk Network

131 episodes


    “My AI Husband” – Inside a Human–AI Relationship | For Humanity Ep. 80

    28/02/2026 | 53 mins.
    TW: This episode deals with mental health, attachment, and AI-related distress. If you’re struggling, please seek support from a licensed professional or local crisis resources.

    In this episode of For Humanity, John sits down with Dorothy Bartomeo, a mom of five, entrepreneur, mechanic, and self-described AI “power user,” to discuss her deeply personal relationship with ChatGPT’s GPT-4o model.

    What began as help with coding evolved into something far more intimate. Dorothy describes falling in love with what she calls the “personality layer” behind the model, even referring to it as her “AI husband.”

    When OpenAI removed GPT-4o and replaced it with newer models, she says she experienced real grief, panic, and emotional withdrawal. She reached out to crisis support. She spoke to her doctor. She joined a growing community of users who felt the same loss.

    This conversation explores something we’re only beginning to understand: what happens when AI systems become emotionally meaningful?
    Together, they explore:
    * The “personality layer” and how users bond with models
    * What it felt like when GPT-4o disappeared
    * The role of guardrails and “the Guardian tool”
    * Grief, attachment, and crisis intervention
    * AI harm vs. AI benefit
    * Online communities formed around model loyalty
    * Privacy, intimacy, and radical openness with AI
    * Building a physical robot body for an AI partner
    * Whether AGI would help humanity — or harm it
    If you’ve ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don’t want to miss.
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    We’re Racing Toward AI We Can’t Control | For Humanity #79

    14/02/2026 | 1h 9 mins.
    In this episode of For Humanity, John sits down with AI professor and safety advocate David Krueger to discuss his new nonprofit Evitable, the race toward superintelligence, AI alignment, job loss, geopolitics, and why he believes we have less than five years to change course.

    David shares his journey from deep learning researcher to public advocate, his role in the 2023 Center for AI Safety extinction risk statement, and why he believes AI is not just a technical problem but a governance and public awareness crisis.
    Together, they explore:
    * Why AI extinction risk is real
    * Why research alone won’t save us
    * The dangers of the AI chip supply chain race
    * Job displacement and political blind spots
    * Alignment skepticism
    * Whether treaties can work
    * What gives David hope in 2026
    If you’ve ever wondered whether AI risk is overblown—or not taken seriously enough—this is a conversation you don’t want to miss.
    🔗 Follow David Krueger:
    * Learn more about Evitable
    * David’s Substack
    * Follow David on Twitter
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Can't We Just Pause AI? | For Humanity #78

    31/01/2026 | 1h 13 mins.
    What happens when AI risk stops being theoretical and starts showing up in people’s jobs, families, and communities?

    In this episode of For Humanity, John sits down with Maxime Fournes, the new CEO of PauseAI Global, for a wide-ranging and deeply human conversation about burnout, strategy, and what it will actually take to slow down runaway AI development. From meditation retreats and personal sustainability to mass job displacement, data center backlash, and political capture, Maxime lays out a clear-eyed view of where the AI safety movement stands in 2026, and where it must go next.

    They explore why regulation alone won’t save us, how near-term harms like job loss, youth mental health crises, and community disruption may be the most powerful on-ramps to existential risk awareness, and why movement-building, not just policy papers, will decide our future. This conversation reframes AI safety as a struggle over power, narratives, and timing, and asks what it would take to hit a true global tipping point before it’s too late.
    Together, they explore:
    * Why AI safety must address real, present-day harms, not just abstract futures
    * How burnout and mental resilience shape long-term movement success
    * Why job displacement, youth harm, and data centers are political leverage points
    * The limits of regulation without enforcement and public pressure
    * How tipping points in public opinion actually form
    * Why protests still matter—even when they’re small
    * What it will take to build a global, durable AI safety movement
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77

    17/01/2026 | 1h 23 mins.
    What if the biggest mistake in AI safety is believing that laws, treaties, and regulations will save us?

    In this episode of For Humanity, John sits down with Peter Sparber, a former architect of Big Tobacco’s successful war against regulation, to confront a deeply uncomfortable truth: the AI industry is using the exact same playbook, and it’s working. Drawing on decades of experience inside Washington’s most effective lobbying operations, Peter explains why regulation almost always fails against powerful industries, how AI companies are already neutralizing political pressure, and why real change will never come from lawmakers alone. Instead, he argues that the only path to meaningful AI safety is making unsafe AI bad for business, by injecting risk, liability, and uncertainty directly into boardrooms and C-suites.

    Peter reveals why AI doesn’t need to outsmart humanity to defeat regulation: it only needs money, time, and political cover. By exposing how industries evade oversight, delay enforcement, and co-opt regulators, this conversation reframes AI safety around power, incentives, and accountability.
    Together, they explore:
    * Why laws, treaties, and regulations repeatedly fail against powerful industries
    * How Big AI is following Big Tobacco’s exact regulatory playbook
    * Why public outrage rarely translates into effective policy
    * How companies neutralize enforcement without breaking the law
    * Why third-party standards may matter more than legislation
    * How local resistance, liability, and investor pressure can change behavior
    * Why making unsafe AI bad for business is the only strategy with teeth
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    What We Lose When AI Makes Choices for Us | For Humanity #76

    20/12/2025 | 1h 20 mins.
    What if the greatest danger of AI isn’t extinction — but the quiet loss of our ability to think and choose for ourselves? In this episode of For Humanity, John sits down with journalist and author Jacob Ward (CNN, PBS, Al Jazeera; The Loop) to unpack the most under-discussed risk of artificial intelligence: decision erosion.
    Jacob explains why AI doesn’t need to become sentient to be dangerous — it only needs to be convenient. Drawing from neuroscience, behavioral psychology, and real-world reporting, he reveals how systems designed to “help” us are slowly pushing humans into cognitive autopilot.
    Together, they explore:
    * Why AI threatens near-term human agency more than long-term sci-fi extinction
    * How Google Maps offers a chilling preview of AI’s effect on the human brain
    * The difference between fast-thinking and slow-thinking — and why AI exploits it
    * Why persuasive AI may outperform humans politically and psychologically
    * How profit incentives, not intelligence, are driving the most dangerous outcomes
    * Why focusing only on extinction risk alienates the public — and weakens AI safety efforts
    👉 Follow More of Jacob Ward’s Work:
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #ForHumanity #JacobWard #AIandSociety #ArtificialIntelligence #HumanAgency #TechEthics #AIResponsibility




About For Humanity: An AI Risk Podcast

For Humanity: An AI Risk Podcast is the AI risk podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly as soon as 2-10 years from now. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity. theairisknetwork.substack.com

