GAEA Talks

Latest episode

71 episodes


    #071 - 40 Million Products Built Without Code with Lovable CEO Anton Osika

    30/04/2026 | 13 mins.
    This week on GAEA Talks Live from HumanX, Graeme Scott sits down with Anton Osika - co-founder and CEO of Lovable, the AI app builder that has powered over forty million products in sixteen months, with more than two hundred thousand new products being built on the platform every single day.

    Anton grew up obsessed with understanding how things work, studied physics, became CTO of a forty-person AI team, and in 2023 became convinced that large language models would fundamentally change how software was built. He biked over to his future co-founder's apartment, called him from the street, and the two of them started building what would become Lovable. Since launch, Lovable has grown at a rate very few products in software history have matched, and is now used by solo founders, freelancers, product managers inside Microsoft and Uber, and Fortune 500 companies looking to give every one of their employees the ability to go from idea to shipped software.

    In this episode, recorded live at HumanX 2026 in San Francisco, Anton explains why the build phase is only the beginning, why running software reliably, securely and at scale is the next frontier for AI platforms, and why reading Nick Bostrom ten years ago set him on a path to build tools that could empower the largest possible number of humans.

    What you'll take away from this conversation:
    - How Lovable went from idea to forty million products built in sixteen months
    - The "build phase is only the beginning" realisation - and why lifecycle management is the next great AI platform problem
    - Why Anton believes software creation is the single highest-leverage capability to democratise
    - The end-to-end penetration testing layer Lovable now runs before any AI-built app goes live on the internet
    - Why Fortune 500 adoption is happening faster than anyone expected - and what product managers at Microsoft are actually doing with Lovable
    - The Grammy-nominated freelancer story - and what it says about the future of small business in America
    - Why Anton believes this is the best time in history to start a company
    - The physics-trained instinct for breaking down systems - and how it shapes how Anton builds Lovable
    - Why empowering non-technical creators is the fastest path to solving more of the world's real problems
    - What the "messy operations" of shipping production-grade software actually look like
    - Why culture and team energy are Anton's single biggest focus inside a hyper-growth company
    - The Nick Bostrom influence that set Anton's ten-year trajectory into AI

    About Anton Osika:
    Anton Osika is the co-founder and CEO of Lovable, the AI app builder that has enabled more than forty million products to be created by users with no engineering background. Before Lovable, Anton was CTO of an AI company and has spent the last decade building AI products, teams and culture. He holds a background in physics and is one of the most recognised voices in Europe on AI-empowered software creation.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    Anton Osika on LinkedIn: https://www.linkedin.com/in/antonosika
    Lovable: https://lovable.dev
    HumanX: https://www.humanx.co
    GAEA AI: https://gaealgm.ai

    #AI #ArtificialIntelligence #GAEATalks #GAEATalksLive #HumanX #HumanX2026 #Lovable #AICoding #NoCode #VibeCoding #AIBuilder #AIApps #Founders #EuropeanTech #AIPodcast #GAEAAI

    #065 - Intelligence Is Becoming Infrastructure with Radiant President Mahdi Yahya

    28/04/2026 | 1h 14 mins.
    This week on GAEA Talks, Graeme Scott sits down with Mahdi Yahya - co-founder and president of Radiant, founder and former CEO of Ori, and one of the most original founder voices in the world on AI infrastructure, sovereign compute, and the backbone of the AI economy.

    Mahdi has spent twenty years building companies at the intersection of technology, infrastructure and the arts. He fled Lebanon during the 2006 war at nineteen, arrived in London with no degree, and built his first company in data centre networking. He then enrolled at the Drama Centre London for his BA, founded an experimental arts and technology gallery called Room One that produced theatre and virtual reality work with the National Theatre and Damon Albarn, and partnered with Ericsson on the breakthroughs that helped lay the foundations for edge computing. He spent eight years building Ori into a global AI cloud platform, which earlier this year merged with Brookfield's Radiant in a deal valuing the combined business at one point three billion dollars. Radiant is now the first vertically integrated sovereign AI infrastructure company in the world, backed by Brookfield's ten billion dollar AI Infrastructure Fund, with plans to build and acquire up to one hundred billion dollars of AI infrastructure worldwide.

    In this episode, Mahdi argues that intelligence is becoming infrastructure - the next civilisational utility after fire, steam, electricity and oil. He explains why every serious country is now treating sovereign AI as critical national infrastructure, why the world is currently spinning up something equivalent to a new supercomputer almost every week, and why the data your AI generates is more valuable, and more dangerous, than the data you feed it. He warns that shadow AI is already inside almost every enterprise, that the unified output of AI risks flattening human individuality, and that agency is the one trait that will distinguish the people who thrive in the AI era from those who do not.

    What you'll take away from this conversation:
    • The "intelligence is infrastructure" thesis - why AI joins fire, steam, electricity and oil as the next civilisational utility
    • Why we are now spinning up a new supercomputer almost every week globally
    • The Brookfield, Ori and Radiant story - how an eight-year founder bet became a one point three billion dollar combined company
    • The case for sovereign AI - why countries cannot afford to give the keys to their intelligence infrastructure to other nations
    • Why the data AI generates inside your business is more valuable, and more dangerous, than the data you give it
    • Shadow AI inside enterprises - and what business leaders should prioritise in the next twelve to eighteen months
    • Why most existing private cloud and on-prem data centres physically cannot run modern AI workloads
    • Liquid cooling, power density and gigawatt data centres - the unglamorous reality that will decide which countries can host serious AI
    • Why the user interface of the digital world is about to shift from screens and apps to a sovereign AI layer in front of everything
    • The Lebanon to London story, and why drama school turned out to be the best founder training Mahdi could have chosen
    • The Shakespeare problem - how unified AI output threatens individuality, and why agency becomes the biggest differentiator between humans
    • Why "observational intelligence" is the next layer the AI stack will need
    • Why intelligence will become a metered utility, accessed by every person in the world, within our lifetime

    #070 - Building General-Purpose Robot Brains with Field AI CEO Dr Ali Agha

    27/04/2026 | 41 mins.
    This week on GAEA Talks Live from HumanX, Graeme Scott sits down with Dr Ali Agha - co-founder and CEO of Field AI, former NASA JPL principal investigator on two of the most ambitious DARPA robotics challenges in history, and one of the leading researchers in the world on risk-aware autonomy.

    Ali has spent almost two decades building AI for robots. He started with rescue robots and robotics competitions, met his co-founder at MIT, and went on to work at Qualcomm and then NASA JPL, where for seven to eight years he was a principal investigator on two DARPA grand challenges that the global robotics community treats as a holy grail. He and his co-founder realised that deployable robotics and foundation models had become two separate worlds, and that putting them together was the only path to a robot brain that could generalise across environments while staying safe. That insight became Field AI, now running in production on three continents across humanoid, legged, wheeled, drone and heavy-duty platforms.

    In this episode, recorded live at HumanX 2026 in San Francisco, Ali explains why data alone cannot produce safe physical AI, why architectural innovation and risk awareness are the non-negotiable second half of the equation, and why his team intentionally decoupled the dynamics of the robot body from the world model.

    What you'll take away from this conversation:
    - Why the commoditisation of robot hardware is the hidden unlock behind the physical AI boom
    - The real difference between conversational AI and physical AI - and why "ninety nine percent" is not good enough for a flying machine
    - Why Field AI separates world model from embodiment - and how that lets one brain run on tens of different platforms
    - The belief world model - what it is, why it is probabilistic, and why it is physics-aware
    - Why end-to-end neural network robotics is a debugging nightmare - and why Field AI refused to take that path
    - How adding a new robot to a fleet creates "ninety-nine new links" of shared learning, not just one extra unit
    - Why the risk-aware architecture is the reason Field AI can deploy on live construction sites changing minute to minute
    - Why edge compute, thermal cameras, lidar and event cameras all matter when the lights go out in an industrial setting
    - The labour shortage, aging population and climate-driven migration numbers reshaping robotics demand
    - The real construction job statistic - forty thousand injuries and a thousand deaths per year in the US alone
    - Why the future of robotics is less "Terminator" and more "capacity multiplier for humans"

    About Dr Ali Agha:
    Dr Ali Agha is the co-founder and CEO of Field AI, which builds the world's first field-deployable, general-purpose robot brain. He spent seven to eight years at NASA Jet Propulsion Laboratory (JPL), where he was a principal investigator on two of the most recent DARPA robotics challenges, and previously held research roles at Qualcomm after completing his PhD in electrical and computer engineering. Field AI is now live in production across three continents.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    Dr Ali Agha on LinkedIn: https://www.linkedin.com/in/aliagha
    Field AI: https://fieldai.com
    HumanX: https://www.humanx.co
    GAEA AI: https://gaealgm.ai

    #AI #ArtificialIntelligence #GAEATalks #GAEATalksLive #HumanX #HumanX2026 #PhysicalAI #Robotics #FieldAI #AIRobots #RobotBrain #Autonomy #EdgeAI #WorldModels #Humanoid #NASAJPL #DARPA #AIPodcast #GAEAAI

    #069 - The Multimodal Road to AGI with Luma AI COO Caroline Ingeborn

    24/04/2026 | 35 mins.
    This week on GAEA Talks Live from HumanX, Graeme Scott sits down with Caroline Ingeborn - COO of Luma AI, former CEO and co-founder of Leap, former CEO, President and COO of Toca Boca, and one of the most experienced operators in the world of creative technology.

    Caroline's career has been spent at the crossroads of technology, creativity and product leadership. She helped build Toca Boca into one of the world's most loved kids' creative software companies, co-founded Leap, and is now COO of Luma AI, the foundational AI research lab building multimodal general intelligence. Luma's thesis is that LLMs alone will not reach AGI - intelligence that can reason, operate and create alongside humans has to be unified across language, image, video, 3D and audio. Luma recently launched Uni 1, its first unified model trained jointly on image and language, and has built a product suite - Luma Agents and the Forward Deployed Creatives team - that turns those models into daily tools for the world's top creative professionals.

    In this episode, recorded live at HumanX 2026 in San Francisco, Caroline explains why the research community's approach of plumbing modalities together is now being replaced with truly unified models, what is really happening inside the Dream Brief collaboration with Diane that submitted twenty-one AI-generated finalists to Cannes Lions, and why the real story of 2026 is not that AI is replacing creatives - it is that twenty and thirty-year career creatives are now using AI as a creative collaborator.

    What you'll take away from this conversation:
    Why LLMs alone cannot get us to AGI - and what a unified model really looks like
    Inside Uni 1 - Luma's first jointly trained image and language model - and why it matters for the path to AGI
    The two shifts happening right now in creative AI - and why they are compounding
    Why no one needs to become a prompt engineer any more - and what takes its place
    Why the next decade belongs to people who have spent twenty or thirty years in the creative industries
    The Dream Brief story - seven hundred AI-generated ads, a million-dollar Cannes Lions prize, and what it proved
    The "creative process is non-linear now" realisation - and what that does to agency economics
    Why Luma's researchers work shoulder-to-shoulder with in-house creatives - and the feedback loop that creates
    How the local car dealership example explains where brand marketing is really heading
    Why the "Back to the Future with a different lead actor" example is the perfect lens on AI and risk
    The cultural humility problem with foundation models - and why Luma takes it seriously
    The dreaming across modalities analogy - and why it is the simplest explanation of why multimodal matters
    About Caroline Ingeborn:
    Caroline Ingeborn is the COO of Luma AI, the foundational research lab and product company building multimodal generative intelligence for creative work. She was previously co-founder and CEO of Leap, and before that CEO, President and COO of Toca Boca, one of the most successful kids' creative technology companies ever built. She is a board member, advisor, investor and entrepreneur-in-residence at several leading technology companies.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    Caroline Ingeborn on LinkedIn: /ingeborn
    Luma AI: https://lumalabs.ai
    HumanX: https://www.humanx.co
    GAEA AI: https://gaealgm.ai

    #AI #ArtificialIntelligence #GAEATalks #GAEATalksLive #HumanX #HumanX2026 #LumaAI #Multimodal #AGI #CreativeAI #AIVideo #AIAgents #DreamMachine #CannesLions #AIPodcast #GAEAAI

    #068 - The Open Source Engine Powering AI with Anyscale's Robert Nishihara

    21/04/2026 | 55 mins.
    This week on GAEA Talks, Graeme Scott sits down with Robert Nishihara - co-founder of Anyscale, creator of the open source Ray project, UC Berkeley PhD in machine learning and distributed systems, Harvard mathematics graduate, and one of the architects of the software infrastructure powering AI at OpenAI, Amazon, Cohere, Hugging Face, NVIDIA, Uber, Spotify and Visa.

    Robert's journey is the story of how modern AI is actually built. As a PhD student at UC Berkeley working with Michael Jordan and Ion Stoica, he and his co-founders kept hitting the same wall - they wanted to do research on algorithms but ended up spending all their time on distributed systems just to run their experiments. That frustration became Ray, the open source compute framework they built to make distributed AI accessible. In 2019 they founded Anyscale to commercialise Ray, and today it powers mission-critical AI workloads at many of the largest AI companies on earth.

    In this episode, recorded live at HumanX 2026 in San Francisco, Robert takes us inside the real engineering reality behind the AI boom - from the mindset shift that "the code is not the artifact" to the quiet revolution in data curation that has replaced architecture innovation as the frontier of model quality. He explains why the thirty-year lag from demo to production still haunts robotics and AI, why every serious AI company now runs across hyperscalers and neoclouds to scrounge for capacity, how teams manage rack-level GPU failures with "bad GPU" lists and suspected-bad lists, and why learning outside the model - through context engineering - may matter as much as training itself. This is essential listening for anyone building, funding, or betting on the infrastructure that will decide the next phase of AI.

    What you'll take away from this conversation:
    - The "code is not the artifact" mindset shift - why AI research code can be throwaway because the model, not the software, is the real deliverable
    - Why the thirty-year gap from demo to production is the defining challenge of AI reliability - and why autonomous driving is the canonical example
    - How data curation and synthetic data generation have quietly replaced architectures and optimisers as the true frontier of model quality
    - Why reinforcement learning is the next scaling frontier - data efficient, compute hungry, and a way to keep scaling when labelled data plateaus
    - Why the next leap in intelligence will come from learning outside the model - context engineering, mental models, and closing the reasoning-to-learning loop
    - The hardware reality no one talks about - 72-GPU racks, long-tail failure rates, and the scheduling gymnastics required to run unreliable hardware reliably
    - The "bad GPU" and "suspected-bad GPU" lists production teams actually maintain to keep training jobs alive
    - Why every serious AI team now runs across a hyperscaler and one or more neoclouds - and why advertised cloud capacity is effectively fiction
    - Why training and inference must share compute - statically partitioning your cluster is a cost trap that hits you at peak inference demand
    - Why text is a minuscule fraction of the world's data - and why the shift from SQL on tabular data to inference on arbitrary data types will happen fast
    - Why the infrastructure team has to optimise for performance, cost AND researcher productivity - and why velocity is often what separates winners from losers
    - Robert's two biggest bets for the next wave of AI - compute-driven data generation, and systems that learn outside the model weights


About GAEA Talks

GAEA TALKS explores the transformative power of artificial intelligence. Featuring leading AI experts, industry leaders, professors, data scientists, policymakers, technologists, futurists, ethicists, and pioneers, the podcast dives into the latest AI trends, opportunities, and risks, examining AI's evolving role in business and society. As AI continues to reshape industries and redefine possibilities, GAEA TALKS delivers deep insights into the challenges and breakthroughs shaping the future. Each episode features candid discussions with thought leaders at the forefront of AI innovation.