Podcast episode for The Big Nonprofits Post 2025.
* 00:00:00 - Introduction
* 00:01:46 - Table of Contents
* 00:08:23 - A Word of Warning
* 00:09:40 - A Note To Charities
* 00:10:44 - Use Your Personal Theory of Impact
* 00:12:34 - Use Your Local Knowledge
* 00:13:39 - Unconditional Grants to Worthy Individuals Are Great
* 00:16:11 - Do Not Think Only On the Margin, and Also Use Decision Theory
* 00:17:15 - Compare Notes With Those Individuals You Trust
* 00:17:47 - Beware Becoming a Fundraising Target
* 00:18:13 - And the Nominees Are
* 00:22:03 - Organizations that Are Literally Me
* 00:22:15 - Balsa Research
* 00:25:08 - Don’t Worry About the Vase
* 00:26:41 - Organizations Focusing On AI Non-Technical Research and Education
* 00:27:13 - Lightcone Infrastructure
* 00:29:49 - The AI Futures Project
* 00:31:31 - Effective Institutions Project (EIP) (For Their Flagship Initiatives)
* 00:33:13 - Artificial Intelligence Policy Institute (AIPI)
* 00:34:50 - AI Lab Watch
* 00:35:55 - Palisade Research
* 00:37:02 - CivAI
* 00:37:50 - AI Safety Info (Robert Miles)
* 00:38:31 - Intelligence Rising
* 00:39:18 - Convergence Analysis
* 00:40:12 - IASEAI (International Association for Safe and Ethical Artificial Intelligence)
* 00:40:53 - The AI Whistleblower Initiative
* 00:41:33 - Organizations Related To Potentially Pausing AI Or Otherwise Having A Strong International AI Treaty
* 00:41:41 - Pause AI and Pause AI Global
* 00:43:03 - MIRI
* 00:44:19 - Existential Risk Observatory
* 00:45:16 - Organizations Focusing Primarily On AI Policy and Diplomacy
* 00:45:55 - Center for AI Safety and the CAIS Action Fund
* 00:47:31 - Foundation for American Innovation (FAI)
* 00:50:29 - Encode AI (Formerly Encode Justice)
* 00:51:31 - The Future Society
* 00:52:23 - Safer AI
* 00:52:59 - Institute for AI Policy and Strategy (IAPS)
* 00:54:08 - AI Standards Lab (Holtman Research)
* 00:55:14 - Safe AI Forum
* 00:55:49 - Center For Long Term Resilience
* 00:57:33 - Simon Institute for Longterm Governance
* 00:58:30 - Legal Advocacy for Safe Science and Technology
* 00:59:42 - Institute for Law and AI
* 01:00:21 - Macrostrategy Research Institute
* 01:00:51 - Secure AI Project
* 01:01:29 - Organizations Doing ML Alignment Research
* 01:02:49 - Model Evaluation and Threat Research (METR)
* 01:04:13 - Alignment Research Center (ARC)
* 01:04:51 - Apollo Research
* 01:05:43 - Cybersecurity Lab at University of Louisville
* 01:06:22 - Timaeus
* 01:07:25 - Simplex
* 01:07:54 - Far AI
* 01:08:28 - Alignment in Complex Systems Research Group
* 01:09:10 - Apart Research
* 01:10:15 - Transluce
* 01:11:21 - Organizations Doing Other Technical Work
* 01:11:24 - AI Analysts at RAND
* 01:12:17 - Organizations Doing Math, Decision Theory and Agent Foundations
* 01:13:39 - Orthogonal
* 01:14:28 - Topos Institute
* 01:15:24 - Eisenstat Research
* 01:16:02 - AFFINE Algorithm Design
* 01:16:25 - CORAL (Computational Rational Agents Laboratory)
* 01:17:15 - Mathematical Metaphysics Institute
* 01:18:21 - Focal at CMU
* 01:19:41 - Organizations Doing Cool Other Stuff Including Tech
* 01:19:50 - ALLFED
* 01:21:33 - Good Ancestor Foundation
* 01:22:56 - Charter Cities Institute
* 01:23:45 - Carbon Copies for Independent Minds
* 01:24:24 - Organizations Focused Primarily on Bio Risk
* 01:24:27 - Secure DNA
* 01:25:21 - Blueprint Biosecurity
* 01:26:06 - Pour Domain
* 01:26:53 - ALTER Israel
* 01:27:25 - Organizations That Can Advise You Further
* 01:28:03 - Effective Institutions Project (EIP) (As A Donation Advisor)
* 01:29:08 - Longview Philanthropy
* 01:30:44 - Organizations That Then Regrant to Fund Other Organizations
* 01:32:00 - SFF Itself (!)
* 01:33:33 - Manifund
* 01:35:33 - AI Risk Mitigation Fund
* 01:36:18 - Long Term Future Fund
* 01:38:27 - Foresight
* 01:39:14 - Centre for Enabling Effective Altruism Learning & Research (CEELAR)
* 01:40:08 - Organizations That are Essentially Talent Funnels
* 01:42:08 - AI Safety Camp
* 01:42:48 - Center for Law and AI Risk
* 01:43:52 - Speculative Technologies
* 01:44:44 - Talos Network
* 01:45:28 - MATS Research
* 01:46:12 - Epistea
* 01:47:18 - Emergent Ventures
* 01:49:02 - AI Safety Cape Town
* 01:49:33 - ILINA Program
* 01:49:55 - Impact Academy Limited
* 01:50:28 - Atlas Computing
* 01:51:08 - Principles of Intelligence (Formerly PIBBSS)
* 01:52:00 - Tarbell Center
* 01:53:15 - Catalyze Impact
* 01:54:15 - CeSIA within EffiSciences
* 01:55:04 - Stanford Existential Risk Initiative (SERI)
* 01:55:49 - Non-Trivial
* 01:56:19 - CFAR
* 01:57:25 - The Bramble Center
* 01:58:20 - Final Reminders
The Don’t Worry About the Vase Podcast is a listener-supported podcast. To receive new posts and support the cost of creation, consider becoming a free or paid subscriber.
https://open.substack.com/pub/thezvi/p/the-big-nonprofits-post-2025?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Get full access to DWAtV Podcast at dwatvpodcast.substack.com/subscribe
--------
1:59:20
--------
ChatGPT 5.1 Codex Max
Podcast episode for ChatGPT 5.1 Codex Max.
* 00:00 - Introduction
* 01:09 - The Famous METR Graph
* 03:07 - The System Card
* 04:04 - Basic Disallowed Content
* 05:07 - Sandbox
* 06:25 - Mitigations For Harmful Tasks and Prompt Injections
* 07:06 - Preparedness Framework
* 07:28 - Biological and Chemical
* 08:40 - Cybersecurity
* 14:24 - AI Self-Improvement
* 17:17 - Reactions
https://open.substack.com/pub/thezvi/p/chatgpt-51-codex-max?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
--------
18:39
--------
Gemini 3 Pro Is a Vast Intelligence With No Spine
Podcast episode for Gemini 3 Pro Is a Vast Intelligence With No Spine.
* 00:00:00 - Introduction
* 00:01:53 - There Is A Catch
* 00:03:35 - Andrej Karpathy Cautions Us
* 00:04:59 - On Your Marks
* 00:17:01 - Defying Gravity
* 00:18:32 - The Efficient Market Hypothesis Is False
* 00:21:12 - The Product Of A Deranged Imagination
* 00:25:44 - Google Employee Hype
* 00:29:24 - Matt Shumer Is A Big Fan
* 00:30:23 - Roon Eventually Gains Access
* 00:30:43 - The Every Vibecheck
* 00:32:07 - Positive Reactions
* 00:38:09 - Embedding The App
* 00:38:26 - The Good, The Bad and The Unwillingness To Be Ugly
* 00:42:12 - Genuine People Personalities
* 00:43:52 - Game Recognize Game
* 00:45:41 - Negative Reactions
* 00:49:31 - Code Fails
* 00:50:29 - Hallucinations
* 00:57:44 - Early Janusworld Reports
* 01:02:26 - Where Do We Go From Here
https://open.substack.com/pub/thezvi/p/gemini-3-pro-is-a-vast-intelligence?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
--------
1:03:57
--------
Gemini 3: Model Card and Safety Framework Report
Podcast episode for Gemini 3: Model Card and Safety Framework Report.
* 00:00 - Introduction
* 01:31 - Gemini 3 Facts
* 02:38 - On Your Marks
* 03:54 - Safety Third
* 06:05 - Frontier Safety Framework
* 06:57 - CBRN
* 10:13 - Cybersecurity
* 11:51 - Manipulation
* 17:42 - Machine Learning R&D
* 20:14 - Misalignment
* 23:07 - Chain of Thought Legibility
* 23:26 - Safety Mitigations
* 26:02 - They Close On This Not Troubling At All Note
* 27:03 - So, Is It Safe?
https://open.substack.com/pub/thezvi/p/gemini-3-model-card-and-safety-framework?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
--------
29:33
--------
AI #143: Everything, Everywhere, All At Once
Podcast episode for AI #143: Everything, Everywhere, All At Once.
* 00:00:00 - Introduction
* 00:01:47 - Table of Contents
* 00:06:02 - Language Models Offer Mundane Utility
* 00:06:58 - Tool, Mind and Weapon
* 00:11:16 - Choose Your Fighter
* 00:11:39 - Language Models Don’t Offer Mundane Utility
* 00:16:07 - First Things First
* 00:16:44 - Grok 4.1
* 00:20:37 - Misaligned?
* 00:24:10 - Codex Of Ultimate Coding
* 00:26:41 - Huh, Upgrades
* 00:27:07 - On Your Marks
* 00:28:15 - Paper Tigers
* 00:33:13 - Overcoming Bias
* 00:38:40 - Deepfaketown and Botpocalypse Soon
* 00:39:15 - Fun With Media Generation
* 00:41:28 - A Young Lady’s Illustrated Primer
* 00:46:40 - They Took Our Jobs
* 00:53:01 - On Not Writing
* 00:53:27 - Get Involved
* 00:54:20 - Introducing
* 00:57:09 - In Other AI News
* 01:01:22 - Anthropic Completes The Trifecta
* 01:03:07 - We Must Protect This House
* 01:08:14 - AI Spy Versus AI Spy
* 01:14:32 - Show Me the Money
* 01:17:30 - Bubble, Bubble, Toil and Trouble
* 01:21:13 - Quiet Speculations
* 01:22:15 - The Amazing Race
* 01:27:39 - Of Course You Realize This Means War (One)
* 01:31:33 - The Quest for Sane Regulations
* 01:34:10 - Chip City
* 01:35:03 - Of Course You Realize This Means War (Two)
* 01:40:50 - Samuel Hammond on Preemption
* 01:47:33 - Of Course You Realize This Means War (Three)
* 01:55:53 - The Week in Audio
* 01:56:51 - It Takes A Village
* 01:57:34 - Rhetorical Innovation
* 02:00:49 - Varieties of Doom
* 02:01:57 - The Pope Offers Wisdom
* 02:03:42 - Aligning a Smarter Than Human Intelligence is Difficult
* 02:06:04 - Messages From Janusworld
* 02:13:43 - The Lighter Side
https://open.substack.com/pub/thezvi/p/ai-143-everything-everywhere-all?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false