2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57
What will 2025 bring? Sam Altman says AGI is coming in 2025. Agents will arrive for sure. Military use will expand greatly. Will we get a warning shot? Will we survive the year? In Episode #57, host John Sherman interviews AI Safety Research Engineer Max Winga about the latest in AI advances and risks and the year to come.
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
Anthropic Alignment Faking Video: https://www.youtube.com/watch?v=9eXV64O2Xp8&t=1s
Neil deGrasse Tyson Video: https://www.youtube.com/watch?v=JRQDc55Aido&t=579s
Max Winga's Amazing Speech: https://www.youtube.com/watch?v=kDcPW5WtD58
Get Involved!
EMAIL JOHN: [email protected]
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
Check out our partner channel: Lethal Intelligence AI
https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
--------
1:40:10
AGI Goes To Washington | For Humanity: An AI Risk Podcast | Episode #56
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
In Episode #56, host John Sherman travels to Washington DC to lobby House and Senate staffers for AI regulation along with Felix De Simone and Louis Berman of Pause AI. We unpack what we saw and heard as we presented AI risk to the people who have the power to make real change.
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
EMAIL JOHN: [email protected]
Check out our partner channel: Lethal Intelligence AI
https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
--------
1:14:21
AI Risk Special | "Near Midnight in Suicide City" | Episode #55
In a special episode of For Humanity: An AI Risk Podcast, host John Sherman travels to San Francisco. Episode #55, "Near Midnight in Suicide City," is a set of short pieces from our trip out west, where we met with Pause AI, Stop AI, and Liron Shapira, and stopped by OpenAI, among other stops.
Big, huge, massive thanks to Beau Kershaw, Director of Photography, and my biz partner and best friend, who made this journey with me through the work side and the emotional side of it. The work is beautiful and the days were wet and long and heavy. Thank you, Beau.
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
EMAIL JOHN: [email protected]
Check out our partner channel: Lethal Intelligence AI
https://lethalintelligence.ai
https://www.youtube.com/@lethal-intelligence-clips
--------
1:31:34
Connor Leahy Interview | Helping People Understand AI Risk | Episode #54
Nov 19, 2024
In Episode #54, John Sherman interviews Connor Leahy, CEO of Conjecture.
(FULL INTERVIEW STARTS AT 00:06:46)
DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
EMAIL JOHN: [email protected]
Check out Lethal Intelligence AI:
https://lethalintelligence.ai
https://www.youtube.com/@lethal-intelligence-clips
--------
2:24:58
Human Augmentation Incoming | The Coming Age Of Humachines | Episode #53
In Episode #53, John Sherman interviews Michael DB Harvey, author of The Age of Humachines. The discussion covers the coming spectre of humans putting digital implants inside themselves to try to compete with AI.
DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly as soon as 2-10 years from now. This podcast is solely about the threat of human extinction from AGI. We'll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.