
Chain of Thought

Conor Bronsdon

52 episodes

  • Chain of Thought

    I Built an AI Coworker That Runs 90% of My Day

    04/03/2026 | 1h 2 mins.
Sterling Chin stopped thinking of AI as a tool and started treating it like a junior employee. He onboarded it with context, corrected its mistakes, and gave it writing rules.
    Forty days later, MARVIN was handling 90% of his workday.
    In this episode of Chain of Thought, Sterling (Applied AI Engineer and Senior Developer Advocate at Postman) walks through live demos of MARVIN, his personal AI assistant built on Claude Code. From pulling meeting transcripts and updating Jira tickets to drafting blog posts and managing his calendar, MARVIN runs as a full-time AI chief of staff.
We cover:
- How MARVIN bookends Sterling's workday from first login to the end of the day
- Personality, sub-agents, and writing rules that make MARVIN an effective co-worker
- Automating meeting notes to Jira tickets
- Why DIY assistants outperform big-tech alternatives
- How Sterling onboarded 12+ colleagues at Postman, including non-technical knowledge workers
- What the compute crunch means for open-source AI
    Connect with Sterling:
    LinkedIn: https://www.linkedin.com/in/sterlingchin/
    Twitter/X: https://x.com/SilverJaw82
    MARVIN Template: https://github.com/SterlingChin/marvin-template

    Connect with Conor:
Newsletter: https://conorbronsdon.substack.com/
Twitter/X: https://x.com/ConorBronsdon
LinkedIn: https://www.linkedin.com/in/conorbronsdon
YouTube: https://www.youtube.com/@ConorBronsdon

🔗 More episodes: https://chainofthought.show

    Timestamps:
    (0:00) Intro
    (0:28) Meet Sterling Chin and the MARVIN AI Assistant
    (9:10) Live Demo: How MARVIN Bookends Your Workday
    (16:04) Personality, Sub-Agents, and Writing Rules
    (22:00) Automating Meeting Notes to Jira Tickets
    (29:30) Why DIY AI Assistants Outperform Big Tech
    (40:55) Treat Your AI Like a Junior Employee
    (46:41) How to Get Started with MARVIN
    (55:36) The Compute Crunch and Open Source Future

    Thanks to Galileo — download their free 165-page guide to mastering multi-agent systems at galileo.ai/mastering-multi-agent-systems
  • Chain of Thought

    How Intercom Cut $250K/Month by Ditching GPT for Qwen

    26/02/2026 | 53 mins.
    Intercom was spending $250K/month on a single summarization task using GPT. Then they replaced it with a fine-tuned 14B parameter Qwen model and saved almost all of it. In this episode, Intercom's Chief AI Officer, Fergal Reid, walks through exactly how they made that call, where their approach has changed over time, and how all of their efforts built their Fin customer service agent.
    Fergal breaks down how Fin went from 30% to nearly 70% resolution rate and why most of those gains came from surrounding systems (custom re-rankers, retrieval models, query canonicalization), not the core frontier LLM. He explains why higher latency counterintuitively increases resolution rates, how they built a custom re-ranker that outperformed Cohere using ModernBERT, and why he believes vertically integrated AI products will win in the long term.
    If you're deciding between fine-tuning open-weight models and using frontier APIs in production, you won't find a more detailed decision process walkthrough.
🔗 Connect with Fergal:
    Twitter/X: https://x.com/fergal_reid

    LinkedIn: https://www.linkedin.com/in/fergalreid/

    Fin: https://fin.ai/

🔗 Connect with Conor:
    YouTube: https://www.youtube.com/@ConorBronsdon

    Newsletter: https://conorbronsdon.substack.com/

    Twitter/X: https://x.com/ConorBronsdon

    LinkedIn: https://www.linkedin.com/in/conorbronsdon/

🔗 More episodes: https://chainofthought.show

Chapters:
    0:00 Intro
    0:46 Why Intercom Completely Reversed Their Fine-Tuning Position
    8:00 The $250K/Month Summarization Task (Query Canonicalization)
    11:25 Training Infrastructure: H200s, LoRA to Full SFT, and GRPO
    14:09 Why Qwen Models Specifically Work for Production
    18:03 Goodhart's Law: When Benchmarks Lie
    19:47 A/B Testing AI in Production: Soft vs. Hard Resolutions
    25:09 The Latency Paradox: Why Slower Responses Get More Resolutions
    26:33 Why Per-Customer Prompt Branching Is Technical Debt
    28:51 Sponsor: Galileo
    29:36 Hiring Scientists, Not Just Engineers
    32:15 Context Engineering: Intercom's Full RAG Pipeline
    35:35 Customer Agent, Voice, and What's Next for Fin
    39:30 Vertical Integration: Can App Companies Outrun the Labs?
    47:45 When Engineers Laughed at Claude Code
    52:23 Closing Thoughts
Tags: Fergal Reid, Intercom, Fin AI agent, open-weight models, Qwen models, fine-tuning LLMs, post-training, RAG pipeline, customer service AI, GRPO reinforcement learning, A/B testing AI, Claude Code, vertical AI integration, inference cost optimization, context engineering, AI agents, ModernBERT reranker, scaling AI teams, Conor Bronsdon, Chain of Thought
  • Chain of Thought

    How Block Deployed AI Agents to 12,000 Employees in 8 Weeks w/ MCP | Angie Jones

    21/01/2026 | 50 mins.
    How do you deploy AI agents to 12,000 employees in just 8 weeks? How do you do it safely? Angie Jones, VP of Engineering for AI Tools and Enablement at Block, joins the show to share exactly how her team pulled it off.

    Block (the company behind Square and Cash App) became an early adopter of Model Context Protocol (MCP) and built Goose, their open-source AI agent that's now a reference implementation for the Agentic AI Foundation. Angie shares the challenges they faced, the security guardrails they built, and why letting employees choose their own models was critical to adoption.

    We also dive into vibe coding (including Angie's experience watching Jack Dorsey vibe code a feature in 2 hours), how non-engineers are building their own tools, and what MCP unlocks when you connect multiple systems together.

    Chapters:
    00:00 Introduction
    02:02 How Block deployed AI agents to 12,000 employees
    05:04 Challenges with MCP adoption and security at scale
    07:10 Why Block supports multiple AI models (Claude, GPT, Gemini)
    08:40 Open source models and local LLM usage
    09:58 Measuring velocity gains across the organization
    10:49 Vibe coding: Benefits, risks & Jack Dorsey's 2-hour feature build
    13:46 Block's contributions to the MCP protocol
    14:38 MCP in action: Incident management + GitHub workflow demo
    15:52 Addressing MCP criticism and security concerns
    18:41 The Agentic AI Foundation announcement (Block, Anthropic, OpenAI, Google, Microsoft)
    21:46 AI democratization: Non-engineers building MCP servers
    24:11 How to get started with MCP and prompting tips
    25:42 Security guardrails for enterprise AI deployment
    29:25 Tool annotations and human-in-the-loop controls
    30:22 OAuth and authentication in Goose
    32:11 Use cases: Engineering, data analysis, fraud detection
    35:22 Goose in Slack: Bug detection and PR creation in 5 minutes
    38:05 Goose vs Claude Code: Open source, model-agnostic philosophy
    38:17 Live Demo: Council of Minds MCP server (9-persona debate)
    45:52 What's next for Goose: IDE support, ACP, and the $100K contributor grant
    47:57 Where to get started with Goose

    Connect with Angie on LinkedIn: https://www.linkedin.com/in/angiejones/
    Angie's Website: https://angiejones.tech/
    Follow Angie on X: https://x.com/techgirl1908
    Goose GitHub: https://github.com/block/goose

    Connect with Conor on LinkedIn: https://www.linkedin.com/in/conorbronsdon/
    Follow Conor on X: https://x.com/conorbronsdon
    Modular: https://www.modular.com/

    Presented By: Galileo AI
    Download Galileo's Mastering Multi-Agent Systems for free here: https://galileo.ai/mastering-multi-agent-systems

    Topics Covered:
    - How Block deployed Goose to all 12,000 employees
    - Building enterprise security guardrails for AI agents
    - Model Context Protocol (MCP) deep dive
    - Vibe coding benefits and risks
    - The Agentic AI Foundation (Block, Anthropic, OpenAI, Google, Microsoft, AWS)
    - MCP sampling and the Council of Minds demo
    - OAuth authentication for MCP servers
    - Goose vs Claude Code and other AI coding tools
    - Non-engineers building AI tools
    - Fraud detection with AI agents
    - Goose in Slack for real-time bug fixing
  • Chain of Thought

    Gemini 3 & Robot Dogs: Inside Google DeepMind's AI Experiments | Paige Bailey

    14/01/2026 | 50 mins.
    Google DeepMind is reshaping the AI landscape with an unprecedented wave of releases—from Gemini 3 to robotics and even data centers in space.
    Paige Bailey, AI Developer Relations Lead at Google DeepMind, joins us to break down the full Google AI ecosystem. From her unique journey as a geophysicist-turned-AI-leader who helped ship GitHub Copilot, to now running developer experience for DeepMind's entire platform, Paige offers an insider's view of how Google is thinking about the future of AI.
    The conversation covers the practical differences between Gemini 3 Pro and Flash, when to use the open-source Gemma models, and how tools like Anti-Gravity IDE, Jules, and Gemini CLI fit into developer workflows. Paige also demonstrates Space Math Academy—a gamified NASA curriculum she built using AI Studio, Colab, and Anti-Gravity—showing how modern AI tools enable rapid prototyping.

    The discussion then ventures into AI's physical frontier: robotics powered by Gemini on Raspberry Pi, Google's robotics trusted tester program, and the ambitious Project Suncatcher exploring data centers in space.
    00:00 Introduction
    01:30 Paige's Background & Connection to Modular
    02:29 Gemini Integration Across Google Products
    03:04 Jules, Gemini CLI & Anti-Gravity IDE Overview
    03:48 Gemini 3 Flash vs Pro: Live Demo & Pricing
    06:10 Choosing the Right Gemini Model
    09:42 Google's Hardware Advantage: TPUs & JAX
    10:16 TensorFlow History & Evolution to JAX
    11:45 NeurIPS 2025 & Google's Research Culture
    14:40 Google Brain to DeepMind: The Merger Story
15:24 PaLM 2 to Gemini: Scaling from 40 People
    18:42 Gemma Open Source Models
    20:46 Anti-Gravity IDE Deep Dive
    23:53 MCP Protocol & Chrome DevTools Integration
    26:57 Gemini CLI in Google Colab
    28:00 Image Generation & AI Studio Traffic Spikes
    28:46 Space Math Academy: Gamified NASA Curriculum
    31:31 Vibe Coding: Building with AI Studio & Anti-Gravity
    36:02 AI From Bits to Atoms: The Robotics Frontier
    36:40 Stanford Puppers: Gemini on Raspberry Pi Robots
    38:35 Google's Robotics Trusted Tester Program
    40:59 AI in Scientific Research & Automation
    42:25 Project Suncatcher: Data Centers in Space
    45:00 Sustainable AI Infrastructure
    47:14 Non-Dystopian Sci-Fi Futures
    47:48 Closing Thoughts & Resources

    - Connect with Paige on LinkedIn: https://www.linkedin.com/in/dynamicwebpaige/
    - Follow Paige on X: https://x.com/DynamicWebPaige
    - Paige's Website: https://webpaige.dev/
    - Google DeepMind: https://deepmind.google/
    - AI Studio: https://ai.google.dev

    Connect with our host Conor Bronsdon:
    - Substack – https://conorbronsdon.substack.com/
    - LinkedIn https://www.linkedin.com/in/conorbronsdon/

    Presented By: Galileo.ai
Download Galileo's Mastering Multi-Agent Systems for free here: https://galileo.ai/mastering-multi-agent-systems

    Topics Covered:
    - Gemini 3 Pro vs Flash comparison (pricing, speed, capabilities)
    - When to use Gemma open-source models
    - Anti-Gravity IDE, Jules, and Gemini CLI workflows
    - Google's TPU hardware advantage
    - History of TensorFlow, JAX, and Google Brain
    - Space Math Academy demo (gamified education)
    - AI-powered robotics (Stanford Puppers on Raspberry Pi)
    - Project Suncatcher (orbital data centers)


About Chain of Thought

AI is reshaping infrastructure, strategy, and entire industries. Host Conor Bronsdon talks to the engineers, founders, and researchers building breakthrough AI systems about what it actually takes to ship AI in production, where the opportunities lie, and how leaders should think about the strategic bets ahead. Chain of Thought translates technical depth into actionable insights for builders and decision-makers. New episodes bi-weekly. Conor Bronsdon is an angel investor in AI and dev tools, Head of Technical Ecosystem at Modular, and previously led growth at AI startups Galileo and LinearB.
