
The AI Fundamentalists

Dr. Andrew Clark & Dr. Sid Mangalik

46 episodes

  • The AI Fundamentalists

    AI and the lost art of reading

    03/03/2026 | 46 mins.
    As information sources have become abundant and attention spans have shortened in the age of AI, we take on the lost art of reading. Join us to explore why reading rates are falling, how that shift affects judgment and opportunity, and how interdisciplinary books help us see patterns across history, economics, and technology. 
    To help us, Alisa Rusanoff, CEO of Eltech AI, joins us to share her perspective on reading, debate volume versus depth, and offer practical ways to reclaim attention and read with intention.
    Evidence on declining reading rates among adults, teens and children
    Noise versus signal in the attention economy
    Mental models and interdisciplinary synthesis for better decisions
    AI’s limits and why human integration still matters
    Cycles in debt, trade, demography, and geopolitics
    Fiction as a cultural sensor for lived experience
    Wealth gaps, polarization and the need for critical thinking
    Practical habits to train feeds and protect reading time
    Challenge to read, reflect, and apply insights
    For anyone wondering whether they read enough:
    Reading just 1 book a year puts you in the top 60% of readers
    Reading 4 books a year puts you in the top 50%
    Reading 10 books a year puts you in the top 20%
    Reaching the top 5% takes at least 50 books a year
    This episode is full of research and fun connections that are sure to make you think positively about your commitment to reading. At the time of this episode, it's not too late to join the top 20% in 2026!

    What did you think? Let us know.
    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
    LinkedIn - Episode summaries, shares of cited articles, and more.
    YouTube - Was it something that we said? Good. Share your favorite quotes.
    Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
  • The AI Fundamentalists

    Metaphysics and modern AI: What is causality?

    27/01/2026 | 36 mins.
    In this episode of our series about Metaphysics and modern AI, we break causality down to first principles and explain how to tell genuine causal mechanisms from convincing correlations. From gold-standard randomized controlled trials (RCTs) to natural experiments and counterfactuals, we map the tools that build trustworthy models and safer AI.
    Defining causes, effects, and common causal structures
    Gestalt theory: Why correlation misleads and how pattern-seeking tricks us
    Statistical association vs causal explanation
    RCTs and why randomization matters
    Natural experiments as ethical, scalable alternatives
    Judea Pearl’s do-calculus, counterfactuals, and first-principles models
    Limits of causality, sample size, and inference
    Building resilient AI with causal grounding and governance
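The association-versus-causation distinction in the notes above can be shown in a few lines. This is a minimal illustrative simulation (ours, not code from the episode): a confounder Z drives both X and Y, producing a strong correlation even though X has no causal effect on Y, and adjusting for Z makes the association vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder Z drives both X and Y; X has NO causal effect on Y.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)

# Marginal correlation is strong despite zero causation
# (theoretical value here is 0.5).
print(np.corrcoef(x, y)[0, 1])

# Adjusting for Z (here: removing its contribution, which has
# coefficient 1 by construction) leaves only independent noise,
# so the correlation collapses to ~0.
rx = x - z
ry = y - z
print(np.corrcoef(rx, ry)[0, 1])
```

An RCT achieves the same adjustment physically: randomizing X severs the Z→X arrow, so any remaining X–Y association must be causal.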

    This is the fourth episode in our metaphysics series. Each topic in the series is leading to the fundamental question, "Should AI try to think?"
    Check out previous episodes:
    Series Intro
    What is reality?
    What is space and time?
    If conversations like this sharpen your curiosity and help you think more clearly about complex systems, then step away from your keyboard and enjoy this journey with us.

  • The AI Fundamentalists

    Why validity beats scale when building multi‑step AI systems

    06/01/2026 | 40 mins.
    In this episode, Dr. Sebastian (Seb) Benthall joins us to discuss his and Andrew's paper, “Validity Is What You Need,” on building agentic AI that actually works in the real world.
    Our discussion connects systems engineering, mechanism design, and requirements engineering to multi‑step AI that delivers measurable enterprise outcomes.
    Defining agentic AI beyond LLM hype
    Limits of scale and the need for multi‑step control
    Tool use, compounding errors, and guardrails
    Systems engineering patterns for AI reliability
    Principal–agent framing for governance
    Mechanism design for multi‑stakeholder alignment
    Requirements engineering as the crux of validity
    Hybrid stacks: LLM interface, deterministic solvers
    Regression testing through model swaps and drift
    Moving from universal copilots to fit‑for‑purpose agents
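The compounding-error point in the list above is easy to quantify. A minimal sketch (our illustration, not code from the paper): if each step of a multi-step agent succeeds independently with probability p, end-to-end reliability is p**n and decays geometrically with chain length.

```python
# Illustrative model: independent per-step success probability p,
# so an n-step pipeline succeeds end to end with probability p**n.
def end_to_end_success(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

# Even a 95%-reliable step collapses over a long chain.
for n in (1, 5, 10, 20):
    print(f"{n:>2} steps -> {end_to_end_success(0.95, n):.2f}")
# prints 0.95, 0.77, 0.60, 0.36
```

Ten steps at 95% per-step reliability already land near 60% end to end, which is one motivation for the guardrails and deterministic solvers listed above.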
    You can also catch more of Seb's research on our podcast. Tune in to Contextual integrity and differential privacy: Theory versus application.

  • The AI Fundamentalists

    2025 AI review: Why LLMs stalled and the outlook for 2026

    22/12/2025 | 42 mins.
    Here it is! We review the year in which scaling large AI models hit its ceiling, Google reclaimed momentum with efficient vertical integration, and the market shifted from hype to viability.
    Join us as we talk about why human-in-the-loop is failing, why having generative AI agents validate other agents compounds errors, and how small expert data quietly beat the big models.

    • Google’s resurgence with Gemini 3.0 and TPU-driven efficiency
    • Monetization pressures and ads in co-pilot assistants
    • Diminishing returns from LLM scaling
    • Human-in-the-loop pitfalls and incentives
    • Agents validating agents and compounding error
    • Small, high-quality data outperforming synthetic data
    • Expert systems, causality, and interpretability
    • Research trends returning toward statistical rigor
    • 2026 outlook for ROI, governance, and trust

    We remain focused on the responsible use of AI. And while the market continues to adjust expectations for return on investment from AI, we're excited to see companies exploring "return on purpose" as a new path toward transformative AI systems for their business.

    What are you excited about for AI in 2026? 

  • The AI Fundamentalists

    Big data, small data, and AI oversight with David Sandberg

    09/12/2025 | 49 mins.
    In this episode, we look at the actuarial principles that make models safer: parallel modeling, small data with provenance, and real-time human supervision. To help us, long-time insurtech and startup advisor David Sandberg, FSA, MAAA, CERA, joins us to share more about his actuarial expertise in data management and AI.

    We also challenge the hype around AI by reframing it as a prediction machine and putting human judgment at the beginning, middle, and end. By the end, you might think about “human-in-the-loop” in a whole new way.

    • Actuarial valuation debates and why parallel models win
    • AI’s real value: enhance and accelerate the growth of human capital
    • Transparency, accountability, and enforceable standards
    • Prediction versus decision and learning from actual-to-expected
    • Small data as interpretable, traceable fuel for insight
    • Drift, regime shifts, and limits of regression and LLMs
    • Mapping decisions, setting risk appetite, and enterprise risk management (ERM) for AI
    • Where humans belong: the beginning, middle, and end of the system
    • Agentic AI complexity versus validated end-to-end systems
    • Training judgment with tools that force critique and citation

    Cultural references:
    Foundation, AppleTV
    The Feeling of Power, Isaac Asimov
    Player Piano, Kurt Vonnegut
    For more information, see Actuarial and data science: Bridging the gap.


About The AI Fundamentalists

A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.