
Unsupervised Learning with Jacob Effron

by Redpoint Ventures

93 episodes

  • Ep 86: Yann LeCun on Leaving Meta, Breaking The LLM Paradigm, & Why Hinton is Wrong

    15/05/2026 | 1h 21 mins.
    Yann LeCun, Turing Award winner and former Chief AI Scientist at Meta, joins Jacob Effron. The conversation centers on Yann's contrarian thesis that LLMs, despite being useful products, are a dead end on the path to human-level intelligence, because they can't predict the consequences of their actions, can't plan, and fundamentally can't model the messy, high-dimensional real world. He unpacks his alternative architecture, JEPA (Joint Embedding Predictive Architecture), which learns abstract representations rather than generating pixel-level predictions, and explains why this approach is essential for robotics, industrial applications, and any system that needs to operate beyond the substrate of language. Yann also reveals the real story behind his departure from Meta (he had zero technical influence on Llama, contrary to the public narrative), the genesis of his Tapestry project for sovereign open-source AI, why he believes LLMs are intrinsically unsafe, where he diverges from his fellow Turing laureates Hinton and Bengio, and why he predicts the industry will recognize the paradigm shift by early 2027. Throughout, he offers candid reflections on the tension between research and product at major labs, and why he intentionally headquartered AMI Labs in Paris with zero Silicon Valley VC money.

     

    (0:00) Introduction 

    (01:45) Why LLMs Aren't the Path to Intelligence 

    (07:51) AMI and World Models 

    (12:07) The JEPA Architecture Explained 

    (15:55) Problems with Robotics Models Today 

    (20:37) Silicon Valley Herd Behavior 

    (28:18) Tapestry: Sovereign AI for the Rest of the World 

    (35:49) OpenAI Is the Next Sun Microsystems 

    (40:51) Why Yann's Views Diverged from Hinton & Bengio 

    (44:32) LLMs Are Intrinsically Unsafe 

    (58:00) Why Yann Left Meta 

    (1:00:26) Reflections on FAIR 

    (1:12:11) Advice for PhD Students

     

    LeWorldModel Paper: https://arxiv.org/abs/2603.19312

     

    With your host: 

    @jacobeffron 

    - Partner at Redpoint
  • Ep 85: Has AI Infra Stabilized, FM Vibe Shift, & What's Next for Coding Agents

    23/04/2026 | 54 mins.
    This episode is a wide-ranging conversation between Jacob and Swyx (Shawn Wang), an AI engineer, podcaster, and now operator at Cognition, who sits at a uniquely informed intersection of builder, investor, and community organizer in the AI world. The two cover the current state of the AI engineering zeitgeist: from the stabilization of agent infrastructure and the surprising stickiness of Claude Code, to the competitive dynamics of the AI coding wars, the rise of open models, the threat to traditional SaaS, and the frontier questions around world models, memory, and what it actually means for AI to "understand" something. The episode is grounded in practitioner-level candor, with Swyx offering real takes from running AIE conferences, working inside Cognition, and thinking deeply about what the next wave of AI-native software development looks like.

     

    (0:00) Intro

    (1:17) What the Top AI Engineers Are Thinking About

    (2:13) Has AI Infra Finally Stabilized?

    (6:39) When Does Doing RL In-House Make Sense?

    (11:26) Why Selling Dev Tools to Agents is Different

    (17:18) AI Coding Wars

    (29:04) Consumer AI Plateau

    (30:22) Codex vs Claude Code

    (44:52) Future of Open Models

     

    With your co-hosts: 

    @jacobeffron 

    - Partner at Redpoint, Former PM Flatiron Health 

    @patrickachase 

    - Partner at Redpoint, Former ML Engineer LinkedIn 

    @ericabrescia 

    - Former COO GitHub, Founder Bitnami (acquired by VMware) 

    @jordan_segall 

    - Partner at Redpoint
  • Ep 84: OpenAI's Chief Scientist on Continual Learning Hype, RL Beyond Code, & Future Alignment Directions

    09/04/2026 | 58 mins.
    Jakub Pachocki, OpenAI's Chief Scientist, sits down with Jacob to cover the full arc of where AI research stands today and where it's headed. The conversation spans the explosive growth of coding agents and what it signals about near-term AI capability, the use of math and physics benchmarks as proxies for general intelligence, how reinforcement learning is being extended beyond easily verified domains toward longer-horizon tasks, and what it means to run a research organization at the precise moment the models themselves are starting to accelerate the research. Jakub shares a candid take on the competitive landscape, why chain-of-thought monitoring is one of the most promising tools in the alignment toolkit, and, with unusual directness, why the concentration of power enabled by highly automated AI organizations is a societal problem that doesn't yet have an obvious solution.

     

    (0:00) Intro

    (1:53) Research Intern Capability Timelines

    (4:59) Math Breakthroughs

    (7:59) RL Beyond Verifiable Tasks

    (12:32) RL vs In-Context

    (19:01) Allocating Compute Internally

    (28:18) AI for Science

    (31:40) Pattern Matching

    (33:23) Solving the Hardest Math Problems

    (37:40) Chain of Thought Monitoring

    (44:33) Generalization and Value Alignment in Models

    (47:57) Inside OpenAI

    (51:55) Quickfire

     

  • Ep 83: Owning the System of Record, AI-Native Org Charts, & Why ITSM is The Most Vulnerable Legacy Category

    02/04/2026 | 54 mins.
    Serval is one of the fastest-growing AI-native enterprise software companies right now, and this episode is a rare inside look at the deliberate architectural, go-to-market, and talent decisions behind that growth. Jake Stauch breaks down why he made the contrarian bet to build a full system of record rather than layer on top of existing tools, why ITSM is more vulnerable to AI disruption than CRM, ERP, or HRIS, and how Serval is winning Fortune 500 deals against a $14B incumbent with a fraction of the resources. Beyond the product, Jake gets into the organizational decisions that underpin Serval's velocity: why recruiting is the #1 job of every employee, how to prevent talent bar decay as you scale from 8 to 200 people, and how the role of the manager is shifting as ICs own more scope than ever. Threading it all together is a founder's honest account of what it means to build a horizontal software company when the models are improving, the infrastructure is shifting, and the window to displace a legacy incumbent is open but won't stay open forever.

     

  • Ep 82: Behind Legora's $550M Raise, Model Competition, Doubling Revenue Every Quarter, & US Expansion

    11/03/2026 | 54 mins.
    Max Jungestål, CEO of Legora, joins Jacob Effron and Logan Bartlett to discuss the company's $550M Series D and share a candid account of what building an AI-native company at speed actually looks like from the inside.

    Max argues that the AI application layer requires a fundamentally different operating model than traditional SaaS, one built on low ego, constant reinvention, and a willingness to watch nine months of work get washed away by a model update. He walks through how step-function improvements in the underlying models, particularly Opus 4.5 and 4.6, have repeatedly forced Legora to rebuild core product features from scratch, and why he sees that as a feature, not a bug.

    On the legal industry, Max offers a ground-level view of how AI is actually diffusing through law firms, less through top-down mandates and more through competitive pressure between firms and, increasingly, from enterprise clients demanding efficiency from their outside counsel. He pushes back on the viability of AI-native law firms, dismisses outcome-based pricing as harder than it looks, and makes the case for why foundation model competition creates tailwinds rather than threats for a company with Legora's depth.

    The episode closes with a detailed look at the US expansion strategy, including the deliberate cultural decisions, like flying all New York hires to Stockholm for onboarding, that Max believes are the real source of Legora's compounding advantage.

     

    (0:00) Intro

    (1:16) Legora's Series D Story

    (3:24) Why You Need Low Ego to Build in AI

    (5:58) From 60% to 100% Accuracy in One Summer

    (7:04) Law Firm Economics Shift

    (14:09) Pricing Seats Vs Outcomes

    (18:31) Why Foundation Models Entering Legal Helps Legora

    (30:10) Convincing a 75-Year-Old Partner to Go All In

    (33:02) Hiring Legal Engineers

    (34:32) Running an AI-Native Company

    (35:57) The Opus 4.5 Christmas Breakthrough

    (40:02) Building With Customers

    (44:01) All In On US Expansion

    (51:22) Stockholm Startup DNA

     

About Unsupervised Learning with Jacob Effron
We probe the sharpest minds in AI in search of the truth about what's real today, what will be real in the future, and what it all means for businesses and the world. If you're a builder, researcher, or investor navigating the AI world, this podcast will help you deconstruct and understand the most important breakthroughs and see a clearer picture of reality. Follow this show and consider enabling notifications to stay up to date on our latest episodes. Unsupervised Learning is a podcast by Redpoint Ventures, an early-stage venture capital fund that has invested in companies like Snowflake, Stripe, and Mistral. Hosted by Redpoint investor Jacob Effron alongside Patrick Chase, Jordan Segall, and Erica Brescia.