GAEA Talks

69 episodes

  • GAEA Talks

    #070 - Building General-Purpose Robot Brains with Field AI CEO Dr Ali Agha

    27/04/2026 | 41 mins.
    This week on GAEA Talks Live from HumanX, Graeme Scott sits down with Dr Ali Agha - co-founder and CEO of Field AI, former NASA JPL principal investigator on two of the most ambitious DARPA robotics challenges in history, and one of the world's leading researchers on risk-aware autonomy.

    Ali has spent almost two decades building AI for robots. He started with rescue robots and robotics competitions, met his co-founder at MIT, and went on to work at Qualcomm and then NASA JPL, where for seven to eight years he was a principal investigator on two DARPA grand challenges that the global robotics community treats as a holy grail. He and his co-founder realised that deployable robotics and foundation models had become two separate worlds, and that putting them together was the only path to a robot brain that could generalise across environments while staying safe. That insight became Field AI, now running in production on three continents across humanoid, legged, wheeled, drone and heavy-duty platforms.

    In this episode, recorded live at HumanX 2026 in San Francisco, Ali explains why data alone cannot produce safe physical AI, why architectural innovation and risk awareness are the non-negotiable second half of the equation, and why his team intentionally decoupled the dynamics of the robot body from the world model.

    What you'll take away from this conversation:
    - Why the commoditisation of robot hardware is the hidden unlock behind the physical AI boom
    - The real difference between conversational AI and physical AI - and why "ninety-nine percent" is not good enough for a flying machine
    - Why Field AI separates world model from embodiment - and how that lets one brain run on tens of different platforms
    - The belief world model - what it is, why it is probabilistic, and why it is physics-aware
    - Why end-to-end neural network robotics is a debugging nightmare - and why Field AI refused to take that path
    - How adding a new robot to a fleet creates "ninety-nine new links" of shared learning, not just one extra unit
    - Why the risk-aware architecture is the reason Field AI can deploy on live construction sites changing minute to minute
    - Why edge compute, thermal cameras, lidar and event cameras all matter when the lights go out in an industrial setting
    - The labour shortage, aging population and climate-driven migration numbers reshaping robotics demand
    - The real construction job statistic - forty thousand injuries and a thousand deaths per year in the US alone
    - Why the future of robotics is less "Terminator" and more "capacity multiplier for humans"

    About Dr Ali Agha:
    Dr Ali Agha is the co-founder and CEO of Field AI, which builds the world's first field-deployable, general-purpose robot brain. He spent seven to eight years at NASA Jet Propulsion Laboratory (JPL), where he was a principal investigator on two of the most recent DARPA robotics challenges, and previously held research roles at Qualcomm after completing his PhD in electrical and computer engineering. Field AI is now live in production across three continents.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    Dr Ali Agha on LinkedIn: https://www.linkedin.com/in/aliagha
    Field AI: https://fieldai.com
    HumanX: https://www.humanx.co
    GAEA AI: https://gaealgm.ai

    #AI #ArtificialIntelligence #GAEATalks #GAEATalksLive #HumanX #HumanX2026 #PhysicalAI #Robotics #FieldAI #AIRobots #RobotBrain #Autonomy #EdgeAI #WorldModels #Humanoid #NASAJPL #DARPA #AIPodcast #GAEAAI
  • GAEA Talks

    #069 - The Multimodal Road to AGI with Luma AI COO Caroline Ingeborn

    24/04/2026 | 35 mins.
    This week on GAEA Talks Live from HumanX, Graeme Scott sits down with Caroline Ingeborn - COO of Luma AI, former CEO and co-founder of Leap, former CEO, President and COO of Toca Boca, and one of the most experienced operators in the world of creative technology.

    Caroline's career has been spent at the crossroads of technology, creativity and product leadership. She helped build Toca Boca into one of the world's most loved kids' creative software companies, co-founded Leap, and is now COO of Luma AI, the foundational AI research lab building multimodal general intelligence. Luma's thesis is that LLMs alone will not reach AGI - intelligence that can reason, operate and create alongside humans has to be unified across language, image, video, 3D and audio. Luma recently launched Uni 1, its first unified model trained jointly on image and language, and has built a product suite - Luma Agents and the Forward Deployed Creatives team - that turns those models into daily tools for the world's top creative professionals.

    In this episode, recorded live at HumanX 2026 in San Francisco, Caroline explains why the research community's approach of plumbing modalities together is now giving way to truly unified models, what is really happening inside the Dream Brief collaboration with Diane that submitted twenty-one AI-generated finalists to Cannes Lions, and why the real story of 2026 is not that AI is replacing creatives - it is that creatives twenty or thirty years into their careers are now using AI as a creative collaborator.

    What you'll take away from this conversation:
    Why LLMs alone cannot get us to AGI - and what a unified model really looks like
    Inside Uni 1 - Luma's first jointly trained image and language model - and why it matters for the path to AGI
    The two shifts happening right now in creative AI - and why they are compounding
    Why no one needs to become a prompt engineer any more - and what takes its place
    Why the next decade belongs to people who have spent twenty or thirty years in the creative industries
    The Dream Brief story - seven hundred AI-generated ads, a million-dollar Cannes Lions prize, and what it proved
    The "creative process is non-linear now" realisation - and what that does to agency economics
    Why Luma's researchers work shoulder-to-shoulder with in-house creatives - and the feedback loop that creates
    How the local car dealership example explains where brand marketing is really heading
    Why the "Back to the Future with a different lead actor" example is the perfect lens on AI and risk
    The cultural humility problem with foundation models - and why Luma takes it seriously
    The dreaming across modalities analogy - and why it is the simplest explanation of why multimodal matters
    About Caroline Ingeborn:
    Caroline Ingeborn is the COO of Luma AI, the foundational research lab and product company building multimodal generative intelligence for creative work. She was previously co-founder and CEO of Leap, and before that CEO, President and COO of Toca Boca, one of the most successful kids' creative technology companies ever built. She is a board member, advisor, investor and entrepreneur-in-residence at several leading technology companies.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    Caroline Ingeborn on LinkedIn: /ingeborn
    Luma AI: https://lumalabs.ai
    HumanX: https://www.humanx.co
    GAEA AI: https://gaealgm.ai

    #AI #ArtificialIntelligence #GAEATalks #GAEATalksLive #HumanX #HumanX2026 #LumaAI #Multimodal #AGI #CreativeAI #AIVideo #AIAgents #DreamMachine #CannesLions #AIPodcast #GAEAAI
  • GAEA Talks

    #068 - The Open Source Engine Powering AI with Anyscale's Robert Nishihara

    21/04/2026 | 55 mins.
    This week on GAEA Talks, Graeme Scott sits down with Robert Nishihara - co-founder of Anyscale, creator of the open source Ray project, UC Berkeley PhD in machine learning and distributed systems, Harvard mathematics graduate, and one of the architects of the software infrastructure powering AI at OpenAI, Amazon, Cohere, Hugging Face, NVIDIA, Uber, Spotify and Visa.

    Robert's journey is the story of how modern AI is actually built. As a PhD student at UC Berkeley working with Michael Jordan and Ion Stoica, he and his co-founders kept hitting the same wall: they wanted to do research on algorithms but ended up spending all their time on distributed systems just to run their experiments. That frustration became Ray, the open source compute framework they built to make distributed AI accessible. In 2019 they founded Anyscale to commercialise Ray, and today it powers mission-critical AI workloads at many of the largest AI companies on earth.

    In this episode, recorded live at HumanX 2026 in San Francisco, Robert takes us inside the real engineering reality behind the AI boom - from the mindset shift that "the code is not the artifact" to the quiet revolution in data curation that has replaced architecture innovation as the frontier of model quality. He explains why the thirty-year lag from demo to production still haunts robotics and AI, why every serious AI company now runs across hyperscalers and neoclouds to scrounge for capacity, how teams manage rack-level GPU failures with "bad GPU" lists and suspected-bad lists, and why learning outside the model - through context engineering - may matter as much as training itself.

    This is essential listening for anyone building, funding, or betting on the infrastructure that will decide the next phase of AI.

    What you'll take away from this conversation:
    - The "code is not the artifact" mindset shift - why AI research code can be throwaway because the model, not the software, is the real deliverable
    - Why the thirty-year gap from demo to production is the defining challenge of AI reliability - and why autonomous driving is the canonical example
    - How data curation and synthetic data generation have quietly replaced architectures and optimisers as the true frontier of model quality
    - Why reinforcement learning is the next scaling frontier - data efficient, compute hungry, and a way to keep scaling when labelled data plateaus
    - Why the next leap in intelligence will come from learning outside the model - context engineering, mental models, and closing the reasoning-to-learning loop
    - The hardware reality no one talks about - 72-GPU racks, long-tail failure rates, and the scheduling gymnastics required to run unreliable hardware reliably
    - The "bad GPU" and "suspected-bad GPU" lists production teams actually maintain to keep training jobs alive
    - Why every serious AI team now runs across a hyperscaler and one or more neoclouds - and why advertised cloud capacity is effectively fiction
    - Why training and inference must share compute - statically partitioning your cluster is a cost trap that hits you at peak inference demand
    - Why text is a minuscule fraction of the world's data - and the shift from SQL on tabular data to inference on arbitrary data types will happen fast
    - Why the infrastructure team has to optimise for performance, cost AND researcher productivity - and why velocity is often what separates winners from losers
    - Robert's two biggest bets for the next wave of AI - compute-driven data generation, and systems that learn outside the model weights
  • GAEA Talks

    #067 - How AMD Plans to Win The AI Era with AMD CTO Mark Papermaster

    19/04/2026 | 46 mins.
    This week on GAEA Talks, Graeme Scott sits down with Mark Papermaster - Chief Technology Officer and Executive Vice President of AMD, former Apple Senior Vice President of iPhone and iPod Hardware Engineering, four-decade semiconductor industry veteran, and newly elected member of the National Academy of Engineering.

    Mark's career reads like a history of modern computing itself. Beginning at IBM in 1982, he spent twenty-six years driving microprocessor and server technology development before being hired by Steve Jobs to lead iPhone and iPod hardware engineering at Apple. He went on to lead silicon engineering at Cisco before joining AMD in 2011, where he and CEO Lisa Su have transformed the company into one of the world's most formidable forces in high-performance and AI computing. A graduate of the University of Texas at Austin and the University of Vermont in electrical engineering, Mark was elected to the National Academy of Engineering in February 2025.

    In this episode, recorded live at HumanX 2026 in San Francisco, Mark takes the audience inside four decades of computing revolutions - from the birth of the PC era through the iPhone moment with Steve Jobs, to the AI infrastructure race reshaping every industry today. He reveals what it was like going back and forth with Steve Jobs on the angle of the FaceTime camera, why AMD's open ecosystem approach is essential for the security challenges ahead, and why the democratisation of AI compute is a societal necessity.

    This is essential listening for anyone making decisions about AI infrastructure, edge computing, or the future of distributed intelligence.

    What you'll take away from this conversation:
    - The full arc of computing revolutions - from mainframes to PCs to mobile to AI - told by someone who built the hardware behind each one
    - What Steve Jobs taught Mark about maniacal focus on experience - and how that drives AMD's chip design culture
    - The FaceTime story - why Jobs obsessed over the camera angle and what that reveals about trust in new technology
    - Why AI compute will be aggregated, not centralised - running in the cloud, on your PC, your phone, and embedded all around us
    - AMD's confidential compute - how businesses can run AI on the cloud while controlling the encryption keys
    - Why the lack of security standards for agentic AI processes is a critical gap the industry must address
    - How AMD's open software stack runs from the world's top supercomputers down to consumer PCs
    - The Strix Halo revelation - AMD's PC chip running models with hundreds of billions of parameters at retail
    - AMD's target of a 20x improvement in AI compute efficiency in the data centre by 2030
    - Why democratising AI computation is a societal imperative - and how the divide is already forming
    - The culture of execution Mark and Lisa Su built at AMD
    - The collaboration imperative - why no single company can solve the AI security stack alone

    About Mark Papermaster:
    Mark has been CTO and EVP of Technology and Engineering at AMD since 2011. He leads development of the Zen CPU family, high-performance GPUs, and Infinity Architecture. He was previously Apple SVP of iPhone and iPod Hardware, a VP at Cisco, and spent 26 years at IBM. He holds a BSc from UT Austin and an MSc in electrical engineering from the University of Vermont, and was elected to the National Academy of Engineering in 2025.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    AMD: https://www.amd.com/en/corporate/leadership/mark-papermaster.html
    GAEA AI: https://gaealgm.ai

    #AI #ArtificialIntelligence #GAEATalks #EnterpriseAI #AMD #Semiconductors #AICompute #EdgeComputing #DistributedAI #SteveJobs #iPhone #FaceTime #HumanX #HumanX2026 #ConfidentialCompute #DemocratiseAI #FutureOfComputing #DataCentre #GPUs #CTO #Leadership #TechPodcast
  • GAEA Talks

    #064 - Four Empires. One Witness. With Dex Hunter-Torricke

    19/04/2026 | 1h 8 mins.
    This week on GAEA Talks, Graeme Scott sits down with Dex Hunter-Torricke - former speechwriter to the UN Secretary-General, fifteen-year Big Tech veteran who worked for Eric Schmidt, Mark Zuckerberg and Elon Musk, former Head of Global Communications at Google DeepMind, Cambridge Visiting Research Fellow, and founder of The Center for Tomorrow.

    Dex began his career as a speechwriter in the Executive Office of UN Secretary-General Ban Ki-moon before spending fifteen years at the heart of the tech industry. He served as Google's first executive speechwriter for Larry Page and Eric Schmidt, managed communications for Zuckerberg at Facebook and Musk at SpaceX, and led global communications for Google DeepMind. A graduate of University College London and the University of Oxford, he is now a Cambridge Visiting Research Fellow. In 2026 he launched The Center for Tomorrow, a nonprofit focused on the systemic risks of advanced AI that does not accept Big Tech funding.

    In this episode, Dex delivers one of the most powerful and deeply human conversations GAEA Talks has ever recorded. Drawing on a childhood shaped by a refugee father and an immigrant mother, he challenges the idea that AI is a technology problem and reframes it as a civilisational choice about who we want to become. He argues that the world's institutions are failing, that most leaders have no vision beyond an incrementally updated past, and that the gap between winners and losers in the AI transition is becoming an abyss. But he refuses to accept hopelessness - making the case that these technologies could liberate all of us if we choose to harness them deliberately.

    This is essential listening for anyone who believes the future is not a tidal wave but a choice.

    What you'll take away from this conversation:
    - Why Dex says the future is not a tidal wave or an asteroid - and why framing it that way is a failure of leadership and imagination
    - The civilisational choice - why AI will either amplify existing dysfunctions and injustices or allow us to build something profoundly hopeful
    - Why seven out of ten Americans and over half the UK population live paycheck to paycheck despite decades of technological transformation
    - The techno-colonialism warning - what happens when Washington and Beijing control AGI, quantum and fusion and say no to the rest of the world
    - Why the UK has had no real economic growth for fifteen years despite access to the same technologies as every other advanced economy
    - The digital divide is really a societal divide - and in the age of AI it is becoming an abyss
    - Why Dex left Big Tech after fifteen years to launch The Center for Tomorrow and why it refuses Big Tech funding
    - The liberation argument - what if AI could free people from settling and let them become who they were meant to be
    - Why every leader and organisation must now become an expert on a changing society, regardless of their field
    - The convenience debt - why society is accruing massive technical and societal debt that will soon come due
    - Why most political leaders have no vision at all and their version of the future is just something from the past slightly updated
    - How democratised, privacy-first, edge-based AI could return control to individuals and break the dependency on a handful of centralised providers
    - The Star Trek test - why any leader should be required to declare what kind of world they would build if given the chance
    - Why Dex got a room full of bankers to applaud the idea that AI should liberate people from jobs that never gave them meaning

About GAEA Talks

GAEA TALKS explores the transformative power of artificial intelligence. Featuring leading AI experts, industry leaders, professors, data scientists, policymakers, technologists, futurists, ethicists, and pioneers, the podcast dives into the latest AI trends, opportunities, and risks, examining AI's evolving role in business and society. As AI continues to reshape industries and redefine possibilities, GAEA TALKS delivers deep insights into the challenges and breakthroughs shaping the future. Each episode features candid discussions with thought leaders at the forefront of AI innovation.