
The Jim Rutt Show


Available Episodes

Showing 5 of 442 episodes
  • EP 327 Nate Soares on Why Superhuman AI Would Kill Us All
    Jim talks with Nate Soares about the ideas in his and Eliezer Yudkowsky's book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. They discuss the book's claim that mitigating existential AI risk should be a top global priority, the idea that LLMs are grown, the opacity of deep learning networks, the Golden Gate activation vector, whether our understanding of deep learning networks might improve enough to prevent catastrophe, goodness as a narrow target, the alignment problem, the problem of pointing minds, whether LLMs are just stochastic parrots, why predicting a corpus often requires more mental machinery than creating a corpus, depth & generalization of skills, wanting as an effective strategy, goal orientation, limitations of training goal pursuit, transient limitations of current AI, protein folding and AlphaFold, the riskiness of automating alignment research, the correlation between capability and more coherent drives, why the authors anchored their argument on transformers & LLMs, the inversion of Moravec's paradox, the geopolitical multipolar trap, making world leaders aware of the issues, a treaty to ban the race to superintelligence, the specific terms of the proposed treaty, a comparison with banning uranium enrichment, why Jim tentatively thinks this proposal is a mistake, a priesthood of the power supply, whether attention is a zero-sum game, and much more. Episode Transcript If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares "Psyop or Insanity or ...? Peter Thiel, the Antichrist, and Our Collapsing Epistemic Commons," by Jim Rutt "On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback," by Marcus Williams et al. "Attention Sinks and Compression Valleys in LLMs are Two Sides of the Same Coin," by Enrique Queipo-de-Llano et al. JRS EP 217 - Ben Goertzel on a New Framework for AGI "A Tentative Draft of a Treaty, With Annotations" Nate Soares is the President of the Machine Intelligence Research Institute. He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.
    --------  
    1:37:07
  • EP 326 Alex Ebert on New Age, Manifestation, and Collective Hallucination
    Jim talks with Alex Ebert about the ideas in his Substack essay "New Age and the Religion of Self: The Anatomy of a Rebellion Against Reality." They discuss the meanings of New Age and religion, the New Thought movement, the law of attraction, manifesting, Trump's artifacts of manifestation, the unmooring from concrete artifacts, individual and collective hallucinations, intersubjective verification of the interobjective, the subjective-first perspective, epistemic asymmetry as the cool, New Ageism's constant reference to quantum physics, manifesting as a way to negate social responsibility, the odd coincidence of leaving the gold standard and New Ageism, spiritual bypassing, a global derealization, new retribalized collective delusions, the Faustian bargain of AI, rationality as a virus, the noble lie, indeterminacy as a sign of emergence, nostalgia as a sales pitch, regaining the sense of hypocrisy, localized retribalizations, GameB as a series of membranes, and much more. Episode Transcript "New Age and the Religion of Self: The Anatomy of a Rebellion Against Reality," by Alex Ebert Bad Guru (Alex's Substack) Jim Rutt's Substack "Unclear Thinking About Philosophical Zombies and Quantum Measurement," by Jim Rutt The Century of the Self (documentary by Adam Curtis) Alex Ebert is a platinum-selling musician (Edward Sharpe and The Magnetic Zeros), Golden Globe-winning film composer, cultural critic and philosopher living in New Orleans. His philosophical project, FreQ Theory, as well as his cultural analyses, can be followed on his Substack.
    --------  
    1:13:47
  • EP 325 Joe Edelman on Full-Stack AI Alignment
    Jim talks with Joe Edelman about the ideas in the Meaning Alignment Institute's recent paper "Full Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value." They discuss pluralism as a core principle in designing social systems, the informational basis for alignment, how preferential models fail to capture what people truly care about, the limitations of markets and voting as preference-based systems, critiques of text-based approaches in LLMs, thick models of value, values as attentional policies, AI assistants as potential vectors for manipulation, the need for reputation systems and factual grounding, the "super negotiator" project for better contract negotiation, multipolar traps, moral graph elicitation, starting with membranes, Moloch-free zones, unintended consequences and lessons from early Internet optimism, concentration of power as a key danger, co-optation risks, and much more. Episode Transcript "A Minimum Viable Metaphysics," by Jim Rutt (Substack) Jim's Substack JRS Currents 080: Joe Edelman and Ellie Hain on Rebuilding Meaning Meaning Alignment Institute If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares "Full Stack Alignment: Co-aligning AI and Institutions with Thick Models of Value," by Joe Edelman et al. "What Are Human Values and How Do We Align AI to Them?" by Oliver Klingefjord, Ryan Lowe, and Joe Edelman Joe Edelman has spent much of his life trying to understand how ML systems and markets could change, retaining their many benefits while avoiding their characteristic problems of atomization and of serving shallow desires over deeper needs. Along the way, this led him to formulate theories of human meaning and values (https://arxiv.org/abs/2404.10636), study models of societal transformation (https://www.full-stack-alignment.ai/paper), invent the meaning-based metrics used at CouchSurfing, Facebook, and Apple, co-found the Center for Humane Technology and the Meaning Alignment Institute, and invent new democratic systems (https://arxiv.org/abs/2404.10636). He's currently one of the PIs leading the Full-Stack Alignment program at the Meaning Alignment Institute, with a network of more than 50 researchers at universities and corporate labs working on these issues.
    --------  
    1:12:12
  • EP 324 John Preston on 40 Flushes to Grow Your Business
    Jim talks with John Preston about his book 40 Flushes to Grow Your Business: The World's #2 Business Series, which is designed to be read during bathroom breaks. They discuss breaking free from being a one-person show, hiring self-guided employees, the importance of business owner support networks, clarity on business goals & personal objectives, the five-gear growth machine business metrics model, marketing fundamentals & investment levels, understanding the customer journey, social media pitfalls, customer inquiry response strategies, complaint management, CEO time management & delegation, working capital needs, lifestyle creep, measuring business metrics, gross profit vs net profit, building high-trust company cultures, transparency with employees, marketing strategies & customer acquisition, hiring & retention strategies, and much more. Episode Transcript 40 Flushes to Grow Your Business: The World's #2 Business Series, by John Preston Start with Why: How Great Leaders Inspire Everyone to Take Action, by Simon Sinek The JP Business Academy John Preston is a Hall of Fame sales and business coach who transforms complex concepts into actionable insights for entrepreneurs and sales teams, drawing from his 22+ years as a television news reporter and producer. As the creator of JP Business Academy, he specializes in making business education accessible through live, engaging training sessions and online teaching. He can be reached by email at john@thejpbusinessacademy.
    --------  
    1:34:05
  • EP 323 Pablos Holman on Deep Tech
    Jim talks with Pablos Holman about the ideas in his new book Deep Future: Creating Technology That Matters. They discuss deep tech versus shallow tech, computational modeling and simulation for real-world problems, the hacker mindset, the role of inventors, nuclear power and renewable energy solutions, population growth, development challenges, space-based solar power, the likelihood of fusion power, mistakes in German energy policy, energy storage limitations, the transformation of the apparel industry through automation, and much more. Episode Transcript Deep Future: Creating Technology That Matters, by Pablos Holman Deep Future (company) Intellectual Ventures Lab Pablos is a hacker, inventor, and bestselling author of Deep Future: Creating Technology that Matters, the indispensable guide to deep tech. Now Managing Partner at Deep Future, investing in technologies to solve the world’s biggest problems. Previously, Pablos worked on spaceships at Blue Origin and helped build The Intellectual Ventures Lab to invent a wide variety of breakthroughs including a brain surgery tool, a machine to suppress hurricanes, 3D food printers, and a laser that can shoot down mosquitos—part of an impact invention effort to eradicate malaria with Bill Gates. Pablos hosts the Deep Future Podcast and is a top public speaker—his talks have over 30 million views.
    --------  
    1:27:45


About The Jim Rutt Show

Crisp conversations with critical thinkers at the leading edge of science, technology, politics, and social systems.
