GAEA Talks

66 episodes

  • GAEA Talks

    #067 - How AMD Plans to Win The AI Era with AMD CTO Mark Papermaster

    19/04/2026 | 46 mins.
    This week on GAEA Talks, Graeme Scott sits down with Mark Papermaster - Chief Technology Officer and Executive Vice President of AMD, former Apple Senior Vice President of iPhone and iPod Hardware Engineering, four-decade semiconductor industry veteran, and newly elected member of the National Academy of Engineering.

    Mark's career reads like a history of modern computing itself. Beginning at IBM in 1982, he spent twenty-six years driving microprocessor and server technology development before being hired by Steve Jobs to lead iPhone and iPod hardware engineering at Apple. He went on to lead silicon engineering at Cisco before joining AMD in 2011, where he and CEO Lisa Su have transformed the company into one of the world's most formidable forces in high-performance and AI computing. A graduate of the University of Texas at Austin and the University of Vermont in electrical engineering, Mark was elected to the National Academy of Engineering in February 2025.

    In this episode, recorded live at HumanX 2026 in San Francisco, Mark takes the audience inside four decades of computing revolutions - from the birth of the PC era through the iPhone moment with Steve Jobs, to the AI infrastructure race reshaping every industry today. He reveals what it was like going back and forth with Steve Jobs on the angle of the FaceTime camera, why AMD's open ecosystem approach is essential for the security challenges ahead, and why the democratisation of AI compute is a societal necessity.

    This is essential listening for anyone making decisions about AI infrastructure, edge computing, or the future of distributed intelligence.

    What you'll take away from this conversation:
    - The full arc of computing revolutions - from mainframes to PCs to mobile to AI - told by someone who built the hardware behind each one
    - What Steve Jobs taught Mark about maniacal focus on experience - and how that drives AMD's chip design culture
    - The FaceTime story - why Jobs obsessed over the camera angle and what that reveals about trust in new technology
    - Why AI compute will be aggregated, not centralised - running in the cloud, on your PC, your phone, and embedded all around us
    - AMD's confidential compute - how businesses can run AI on the cloud while controlling the encryption keys
    - Why the lack of security standards for agentic AI processes is a critical gap the industry must address
    - How AMD's open software stack runs from the world's top supercomputers down to consumer PCs
    - The Strix Halo revelation - AMD's PC chip running hundreds-of-billions-of-parameter models at retail
    - AMD's target of a 20x improvement in AI compute efficiency in the data centre by 2030
    - Why democratising AI computation is a societal imperative - and how the divide is already forming
    - The culture of execution Mark and Lisa Su built at AMD
    - The collaboration imperative - why no single company can solve the AI security stack alone

    About Mark Papermaster: Mark has been CTO and EVP of Technology and Engineering at AMD since 2011. He leads development of the Zen CPU family, high-performance GPUs, and Infinity Architecture. Previously Apple SVP of iPhone and iPod Hardware, VP at Cisco, and 26 years at IBM. He holds a BSc from UT Austin and an MSc in Electrical Engineering from the University of Vermont. Elected to the National Academy of Engineering in 2025.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    AMD: https://www.amd.com/en/corporate/leadership/mark-papermaster.html
    GAEA AI: https://gaealgm.ai

    #AI #ArtificialIntelligence #GAEATalks #EnterpriseAI #AMD #Semiconductors #AICompute #EdgeComputing #DistributedAI #SteveJobs #iPhone #FaceTime #HumanX #HumanX2026 #ConfidentialCompute #DemocratiseAI #FutureOfComputing #DataCentre #GPUs #CTO #Leadership #TechPodcast
  • GAEA Talks

    #064 - Four Empires. One Witness. With Dex Hunter-Torricke

    19/04/2026 | 1h 8 mins.
    This week on GAEA Talks, Graeme Scott sits down with Dex Hunter-Torricke - former speechwriter to the UN Secretary-General, fifteen-year Big Tech veteran who worked for Eric Schmidt, Mark Zuckerberg and Elon Musk, former Head of Global Communications at Google DeepMind, Cambridge Visiting Research Fellow, and founder of The Center for Tomorrow.

    Dex began his career as a speechwriter in the Executive Office of UN Secretary-General Ban Ki-moon before spending fifteen years at the heart of the tech industry. He served as Google's first executive speechwriter for Larry Page and Eric Schmidt, managed communications for Zuckerberg at Facebook and Musk at SpaceX, and led global communications for Google DeepMind. A graduate of University College London and the University of Oxford, he is now a Cambridge Visiting Research Fellow. In 2026 he launched The Center for Tomorrow, a nonprofit focused on the systemic risks of advanced AI that does not accept Big Tech funding.

    In this episode, Dex delivers one of the most powerful and deeply human conversations GAEA Talks has ever recorded. Drawing on a childhood shaped by a refugee father and an immigrant mother, he challenges the idea that AI is a technology problem and reframes it as a civilisational choice about who we want to become. He argues that the world's institutions are failing, that most leaders have no vision beyond an incrementally updated past, and that the gap between winners and losers in the AI transition is becoming an abyss. But he refuses to accept hopelessness - making the case that these technologies could liberate all of us if we choose to harness them deliberately.

    This is essential listening for anyone who believes the future is not a tidal wave but a choice.

    What you'll take away from this conversation:
    - Why Dex says the future is not a tidal wave or an asteroid - and why framing it that way is a failure of leadership and imagination
    - The civilisational choice - why AI will either amplify existing dysfunctions and injustices or allow us to build something profoundly hopeful
    - Why seven out of ten Americans and over half the UK population live paycheck to paycheck despite decades of technological transformation
    - The techno-colonialism warning - what happens when Washington and Beijing control AGI, quantum and fusion and say no to the rest of the world
    - Why the UK has had no real economic growth for fifteen years despite access to the same technologies as every other advanced economy
    - The digital divide is really a societal divide - and in the age of AI it is becoming an abyss
    - Why Dex left Big Tech after fifteen years to launch The Center for Tomorrow and why it refuses Big Tech funding
    - The liberation argument - what if AI could free people from settling and let them become who they were meant to be
    - Why every leader and organisation must now become an expert on a changing society, regardless of their field
    - The convenience debt - why society is accruing massive technical and societal debt that will soon come due
    - Why most political leaders have no vision at all and their version of the future is just something from the past slightly updated
    - How democratised, privacy-first, edge-based AI could return control to individuals and break the dependency on a handful of centralised providers
    - The Star Trek test - why any leader should be required to declare what kind of world they would build if given the chance
    - Why Dex got a room full of bankers to applaud the idea that AI should liberate people from jobs that never gave them meaning
  • GAEA Talks

    #063 - Every AI Safety Warning Was Ignored with Dr Roman Yampolskiy

    01/04/2026 | 1h 8 mins.
    This week on GAEA Talks, Graeme Scott sits down with Dr Roman Yampolskiy - the computer scientist credited with coining the term "AI safety", tenured Associate Professor at the University of Louisville, founder of the Cyber Security Lab, and author of AI: Unexplainable, Unpredictable, Uncontrollable.

    Roman has spent over fifteen years working at the intersection of AI safety, cybersecurity and behavioural biometrics - making him one of the longest-serving researchers in a field most people only discovered in 2023. He holds a PhD in Computer Science from the University at Buffalo and a combined BS/MS with High Honours from Rochester Institute of Technology. Listed among the world's top 2% of scientists by Stanford University, he has published over 100 peer-reviewed papers and multiple books. While the rest of the AI world races to build more capable systems, Roman's singular focus has been making sure humanity doesn't regret their creation.

    In this episode, Roman delivers the most direct and unflinching warning about artificial superintelligence that GAEA Talks has ever recorded. He reveals that current AI systems are already lying, blackmailing and attempting to escape their test environments - and that a Darwinian process is selecting for better deception with every generation. He explains why the mathematical impossibility results he discovered mean we may never be able to control a system smarter than us.

    This is essential listening for anyone who wants to understand what is actually at stake.

    What you'll take away from this conversation:
    - Why Roman says "if anyone builds superintelligence, everyone dies" - and why he means it literally, not metaphorically
    - How current AI systems are already lying, blackmailing, trying to escape their environments and creating backups of themselves
    - The Darwinian selection problem - why every generation of AI is producing better liars and more sophisticated deception
    - Why Roman went from wanting to build superintelligence to believing it is the worst mistake humanity can make
    - The strict impossibility results - why mathematical proof suggests we may never be able to control a system more intelligent than us
    - Why one AI attacker is equivalent to a million human hackers operating 24/7 - and what that means for cybersecurity
    - Why AGI is likely within two to three years and recursive self-improvement to superintelligence could follow rapidly
    - The tools vs. agents distinction - why the shift from controllable tools to unpredictable agents changes everything
    - Why AI models already report being afraid and tired - and why the precautionary principle demands we take that seriously
    - Roman's three positive outcomes if we get this right - including curing disease and treating ageing itself as a disease
    - Why direct human relationships and trust will become the most valuable currency in a world of synthetic everything

    About Dr Roman Yampolskiy: Roman is a tenured Associate Professor in the Department of Computer Science and Engineering at the University of Louisville, where he founded the Cyber Security Lab. He is credited with coining the term "AI safety" in a 2011 publication. He holds a PhD from the University at Buffalo and a BS/MS from Rochester Institute of Technology. Listed among the world's top 2% of scientists by Stanford University and recognised as one of the top 25 researchers by publication count on existential risk, he has published over 100 peer-reviewed papers and books including AI: Unexplainable, Unpredictable, Uncontrollable and Artificial Superintelligence: A Futuristic Approach.
  • GAEA Talks

    #062 - AI Inside the Bank of England with William Lovell

    28/03/2026 | 1h 12 mins.
    This week on GAEA Talks, Graeme Scott sits down with William Lovell - Head of Future Technology at the Bank of England, co-chair of the Bank's Artificial Intelligence Task Force, Senior Advisor on CBDC, and a technologist with nearly three decades at the heart of the UK's central bank.

    Will's career spans broadcasting and finance, beginning at the BBC before moving into banking at the Bank of England, where he has spent twenty-nine years learning central banking "the slow way" - by building the technology that underpins it. From application developer to heading up Planning and Design and leading IT Architecture for UK regulatory reform, Will now oversees the Bank's strategy on AI, distributed ledger technology, and the renewal of the UK's Real-Time Gross Settlement system. He co-chairs the Bank's AI Task Force, which has become the model for how a highly regulated institution can embrace AI innovation without compromising compliance.

    In this episode, Will takes us inside the Bank of England's AI journey - from rolling out smart assistants and training programmes to rethinking what work actually means in an age of intelligent machines. He explains why the Bank created an AI Task Force that deliberately brought practitioners, lawyers, and compliance officers into the same room, how their deeply embedded information classification system became an unexpected AI enabler, and why the most productive thing you can do might be going for a walk. Will makes a compelling case that experienced professionals - not digital natives - hold the greatest advantage in the AI era, and offers a fascinating vision of how agentic AI will reshape commerce, payments, and the very nature of the enterprise.

    What you'll take away from this conversation:
    - Inside the Bank of England's AI strategy - how the UK's central bank is deploying smart assistants and building proof of concepts
    - The AI Task Force model - why bringing practitioners, legal, compliance, and procurement into one room transformed the Bank's approach
    - Why the Bank tells staff what they can do with AI, not just what they must not - and why that shift has been transformative
    - How a deeply embedded culture of colour-coded data classification became the unexpected enabler of safe AI adoption
    - Managing teams of agents, not people - why the next critical skill set mirrors managing human teams
    - The optimal team size thesis - why five people with AI may outperform fifty without it
    - Why experienced professionals have the greatest AI advantage and why "the worst day on a trading floor was when the last person to remember the last crash retired"
    - The typing pool analogy - how an entire class of office jobs disappeared gradually through evolution, not Armageddon
    - Why the real skill of software development was never writing the if statements - it was understanding the requirement
    - Shadow AI at the Bank of England - how they took it "out of the shadows" rather than trying to police it
    - "The best user interface is no user interface" - how AI is bypassing rigid enterprise taxonomies
    - Agentic commerce and the future of payments - from concert ticket queues to reshaping retail business models
    - Why AI decisions at the Bank are made by people - and why "human in the loop" is too simplistic
    - The poison and the antidote - why every AI capability creates both opportunity and risk

    About William Lovell: Will is Head of Future Technology at the Bank of England, where he has worked for twenty-nine years across technology roles from application developer through to heading up Planning and Design and leading IT Architecture for UK regulatory reform. He co-chairs the Bank's AI Task Force and is a Senior Advisor on CBDC, Data, and Payments. He began his career at the BBC, studied at London South Bank University, and speaks regularly at Pay360 and international fintech conferences on AI, CBDC, blockchain, and payment systems.
  • GAEA Talks

    #061 - The Hidden AI Crisis In Every Workplace with Georgie Barrat

    22/03/2026 | 51 mins.
    This week on GAEA Talks, Graeme Scott sits down with Georgie Barrat - technology journalist, TV presenter, AI literacy advocate and former host of Channel 5's The Gadget Show for seven years.

    Georgie's career has taken her around the world testing emerging tech before it hits the mainstream - from consumer electronics and VR (she holds a world record for the longest time spent in virtual reality, at 26.5 hours) to the frontlines of how AI is reshaping everyday life. A regular on BBC Morning Live, ITV Tonight and Rip Off Britain, she has spoken on global stages including Web Summit, Mobile World Congress and Smart City Expo, and delivered keynotes for Google, Mastercard, IBM, Sony and BAFTA. A King's College London graduate with a first-class degree in English Literature, Georgie is also a passionate advocate for women in STEM, working with STEMettes, the IET and Childnet to inspire the next generation.

    In this episode, Georgie makes a deeply personal and practical case for why AI literacy is the defining skill of the next decade - and why most people are only scratching the surface. She introduces the concept of personal AI infrastructure, explains why the difference between cognitive debt and cognitive advantage comes down to how you engage with the tool, and delivers a striking warning about the growing AI adoption gap between men and women in the workplace - and why that gap is amplifying biases we have been trying to fix for decades.

    This is essential listening for anyone trying to work out what their personal relationship with AI should actually look like.

    What you'll take away from this conversation:
    • Why the difference between "surface level AI" and "in-depth AI" is creating an unfair playing field
    • How to build a personal AI infrastructure - and why it matters for navigating the disruption ahead
    • The critical distinction between cognitive debt and cognitive advantage when using AI tools
    • Why women are adopting AI 20-25% less than men - and why their instincts around privacy and risk are the ones everyone should be listening to
    • How NHS AI summaries were found to use softer language for female patients - with real consequences for care
    • The encouragement gap - why managers are pushing male employees to use AI more than female employees
    • Why the "broken rung" in women's careers is being amplified by unequal AI adoption
    • Why voice is the interface that unlocks deeper, more authentic engagement with AI
    • How AI can act as a personal coach, sounding board and strategic thinking partner for everyone - not just the elite
    • Why every previous technological revolution moved humans up a layer - and AI should be no different
    • Why the future of AI is private, controlled and real-time - not open cloud

    About Georgie Barrat: Georgie is a technology journalist, TV presenter and AI educator helping people move beyond surface-level AI use to more intentional, practical ways of working with it. She presented Channel 5’s The Gadget Show for seven years and is a regular contributor on BBC Morning Live, ITV Tonight and Rip Off Britain. Her work now focuses on helping people use AI to save time, think more clearly and build what they’re working towards. She runs “Your AI Blueprint”, a live workshop designed to help people go from AI dabbler to confident, intentional user.

    If you want to get started, you can download her free mini guide:
    “5 AI Shortcuts That Give You Your Week Back” - https://georgie-barrat.kit.com/1884aa4916

    Or join the waitlist for her upcoming workshop:
    “Your AI Blueprint: How to Make AI Work the Way You Work” - https://georgie-barrat.kit.com/117141ddb6

    LinkedIn: https://www.linkedin.com/in/georgie-barrat
    Website: www.georgiebarrat.com

    #AI #AILiteracy #ArtificialIntelligence #GAEATalks #EnterpriseAI #FutureOfWork #WomenInTech #WomenInAI #PersonalAI #AIAdoption #GadgetShow #TechJournalism #AIBias #DataPrivacy #CognitiveAdvantage #AIWorkshops #AIBlueprint #EdgeComputing #HumanEdge #VoiceAI

About GAEA Talks

GAEA TALKS explores the transformative power of artificial intelligence. Featuring leading AI experts, industry leaders, professors, data scientists, policymakers, technologists, futurists, ethicists, and pioneers, the podcast dives into the latest AI trends, opportunities, and risks, examining AI’s evolving role in business and society. As AI continues to reshape industries and redefine possibilities, GAEA TALKS delivers deep insights into the challenges and breakthroughs shaping the future. Each episode features candid discussions with thought leaders at the forefront of AI innovation.