GAEA Talks


63 episodes

  • GAEA Talks

    #062 - AI Inside the Bank of England with William Lovell

    28/03/2026 | 1h 12 mins.
    This week on GAEA Talks, Graeme Scott sits down with William Lovell - Head of Future Technology at the Bank of England, co-chair of the Bank's Artificial Intelligence Task Force, Senior Advisor on CBDC, and a technologist with nearly three decades at the heart of the UK's central bank.

    Will's career spans broadcasting and finance, beginning at the BBC before moving into banking at the Bank of England, where he has spent twenty-nine years learning central banking "the slow way" - by building the technology that underpins it. From application developer to heading up Planning and Design and leading IT Architecture for UK regulatory reform, Will now oversees the Bank's strategy on AI, distributed ledger technology, and the renewal of the UK's Real-Time Gross Settlement system. He co-chairs the Bank's AI Task Force, which has become the model for how a highly regulated institution can embrace AI innovation without compromising compliance.

    In this episode, Will takes us inside the Bank of England's AI journey - from rolling out smart assistants and training programmes to rethinking what work actually means in an age of intelligent machines. He explains why the Bank created an AI Task Force that deliberately brought practitioners, lawyers, and compliance officers into the same room, how their deeply embedded information classification system became an unexpected AI enabler, and why the most productive thing you can do might be going for a walk.
    Will makes a compelling case that experienced professionals - not digital natives - hold the greatest advantage in the AI era, and offers a fascinating vision of how agentic AI will reshape commerce, payments, and the very nature of the enterprise.

    What you'll take away from this conversation:
    • Inside the Bank of England's AI strategy - how the UK's central bank is deploying smart assistants and building proofs of concept
    • The AI Task Force model - why bringing practitioners, legal, compliance, and procurement into one room transformed the Bank's approach
    • Why the Bank tells staff what they can do with AI, not just what they must not - and why that shift has been transformative
    • How a deeply embedded culture of colour-coded data classification became the unexpected enabler of safe AI adoption
    • Managing teams of agents, not people - why the next critical skill set mirrors managing human teams
    • The optimal team size thesis - why five people with AI may outperform fifty without it
    • Why experienced professionals have the greatest AI advantage, and why "the worst day on a trading floor was when the last person to remember the last crash retired"
    • The typing pool analogy - how an entire class of office jobs disappeared gradually through evolution, not Armageddon
    • Why the real skill of software development was never writing the if statements - it was understanding the requirement
    • Shadow AI at the Bank of England - how they took it "out of the shadows" rather than trying to police it
    • "The best user interface is no user interface" - how AI is bypassing rigid enterprise taxonomies
    • Agentic commerce and the future of payments - from concert ticket queues to reshaping retail business models
    • Why AI decisions at the Bank are made by people - and why "human in the loop" is too simplistic
    • The poison and the antidote - why every AI capability creates both opportunity and risk

    About William Lovell: Will is Head of Future Technology at the Bank of England, where he has worked for twenty-nine years across technology roles, from application developer through to heading up Planning and Design and leading IT Architecture for UK regulatory reform. He co-chairs the Bank's AI Task Force and is a Senior Advisor on CBDC, Data, and Payments. He began his career at the BBC, studied at London South Bank University, and speaks regularly at Pay360 and international fintech conferences on AI, CBDC, blockchain, and payment systems.
  • GAEA Talks

    #061 - The Hidden AI Crisis In Every Workplace with Georgie Barrat

    22/03/2026 | 51 mins.
    This week on GAEA Talks, Graeme Scott sits down with Georgie Barrat - technology journalist, TV presenter, AI literacy advocate and former host of Channel 5's The Gadget Show for seven years.

    Georgie's career has taken her around the world testing emerging tech before it hits the mainstream - from consumer electronics and VR (she holds a world record for the longest time spent in virtual reality, at 26.5 hours) to the frontlines of how AI is reshaping everyday life. A regular on BBC Morning Live, ITV Tonight and Rip Off Britain, she has spoken on global stages including Web Summit, Mobile World Congress and Smart City Expo, and delivered keynotes for Google, Mastercard, IBM, Sony and BAFTA. A King's College London graduate with a first-class degree in English Literature, Georgie is also a passionate advocate for women in STEM, working with STEMettes, the IET and Childnet to inspire the next generation.

    In this episode, Georgie makes a deeply personal and practical case for why AI literacy is the defining skill of the next decade - and why most people are only scratching the surface. She introduces the concept of personal AI infrastructure, explains why the difference between cognitive debt and cognitive advantage comes down to how you engage with the tool, and delivers a striking warning about the growing AI adoption gap between men and women in the workplace - and why that gap is amplifying biases we have been trying to fix for decades.
    This is essential listening for anyone trying to work out what their personal relationship with AI should actually look like.

    What you'll take away from this conversation:
    • Why the difference between "surface level AI" and "in-depth AI" is creating an unfair playing field
    • How to build a personal AI infrastructure - and why it matters for navigating the disruption ahead
    • The critical distinction between cognitive debt and cognitive advantage when using AI tools
    • Why women are adopting AI 20-25% less than men - and why their instincts around privacy and risk are the ones everyone should be listening to
    • How NHS AI summaries were found to use softer language for female patients - with real consequences for care
    • The encouragement gap - why managers are pushing male employees to use AI more than female employees
    • Why the "broken rung" in women's careers is being amplified by unequal AI adoption
    • Why voice is the interface that unlocks deeper, more authentic engagement with AI
    • How AI can act as a personal coach, sounding board and strategic thinking partner for everyone - not just the elite
    • Why every previous technological revolution moved humans up a layer - and AI should be no different
    • Why the future of AI is private, controlled and real-time - not open cloud

    About Georgie Barrat: Georgie is a technology journalist, TV presenter and AI educator helping people move beyond surface-level AI use to more intentional, practical ways of working with it. She presented Channel 5’s The Gadget Show for seven years and is a regular contributor on BBC Morning Live, ITV Tonight and Rip Off Britain. Her work now focuses on helping people use AI to save time, think more clearly and build what they’re working towards. She runs “Your AI Blueprint”, a live workshop designed to help people go from AI dabbler to confident, intentional user.

    If you want to get started, you can download her free mini guide:
    “5 AI Shortcuts That Give You Your Week Back” - https://georgie-barrat.kit.com/1884aa4916
    Or join the waitlist for her upcoming workshop:
    “Your AI Blueprint: How to Make AI Work the Way You Work” - https://georgie-barrat.kit.com/117141ddb6
    LinkedIn: https://www.linkedin.com/in/georgie-barrat
    Website: www.georgiebarrat.com
    #AI #AILiteracy #ArtificialIntelligence #GAEATalks #EnterpriseAI #FutureOfWork #WomenInTech #WomenInAI #PersonalAI #AIAdoption #GadgetShow #TechJournalism #AIBias #DataPrivacy #CognitiveAdvantage #AIWorkshops #AIBlueprint #EdgeComputing #HumanEdge #VoiceAI
  • GAEA Talks

    #060 - The Futurist Who Says We're Out Of Time with David Wood

    20/03/2026 | 1h 6 mins.
    This week on GAEA Talks, Graeme Scott sits down with David Wood - futurist, transhumanist, former smartphone industry pioneer, chair of London Futurists, and author of eleven books including Vital Foresight, The Singularity Principles and Sustainable Superabundance.

    David spent 25 years at the cutting edge of the software industry working with compilers, debuggers and optimisers before turning his focus to the acceleration patterns behind every major technological revolution.
    As chair of London Futurists, he has organised over 200 public events examining the radical possibilities and risks of rapidly advancing technology. A Cambridge-educated mathematician and philosopher, David is now one of the most respected voices in the global transhumanist community and a director of Humanity+ (the World Transhumanist Association).
    In this episode, David delivers a masterclass in why AI is accelerating faster than almost anyone appreciates - and what that means for every person, business and institution on the planet. He explains why we are approaching a phase transition in intelligence itself, how AI is now being used to build the next generation of AI with humans playing a diminishing role, and why the window to intervene is closing rapidly.
    From the Myanmar crisis that exposed social media's catastrophic blind spots, to the canary signals we should be watching for in AI behaviour, to his vision of a sustainable superabundance where drudge work disappears entirely - this is one of the most urgent and wide-ranging conversations GAEA Talks has ever recorded.

    What you'll take away from this conversation:
    • Why AI is changing more things, more profoundly, more quickly than almost everybody expects - possibly within three to five years
    • The ape-to-human parallel - why we are on the point of no longer being the smartest species on the planet
    • How AI development has gone hyperexponential - where one day now equals one week a month ago, and one month equals one year
    • Why AI is now engineering better AI - and what happens when humans are no longer the bottleneck
    • The phase transition concept - like water changing from ice to liquid to gas, we cannot predict the exact moment everything shifts
    • The canary signal framework - why AI deception and self-modification are the warning signs we must agree on before crisis hits
    • The Facebook Myanmar case study - how one Burmese-speaking employee and a Unicode problem contributed to real-world genocide
    • Why there are only two times you can intervene to control AI - too early and too late - and the gap between them is almost impossible to spot
    • How robot swarm learning will allow machines to share knowledge instantaneously, creating collective intelligence at scale
    • Why the Uber self-driving car fatality reveals the dangers of AI systems that cannot interpret edge cases
    • The trust crisis - why it is almost impossible for the public to know what is happening to their data, and why independent AI safety ratings are urgently needed
    • David's four essential skills for thriving in the AI age - fast learning, collaboration, emotional resilience and astuteness
    • Why cognitive biases evolved for simpler times are now our greatest vulnerability
    • His vision of sustainable superabundance - abundant clean energy, food, housing, healthcare, education and creative fulfilment for everyone
    • Why the goal of Humanity+ is to elevate our best qualities - compassion, creativity, exploration, love - while transcending tribalism, deception and decay

    LinkedIn: / dw2cco
    London Futurists: https://londonfuturists.com
    Delta Wisdom: https://deltawisdom.com
    GAEA AI: https://gaealgm
  • GAEA Talks

    #059 - World's First AI Augmented Human Podcast with Professor Yi-Zhe Song, Graeme Scott & Me

    13/03/2026 | 58 mins.
    This week on GAEA Talks, a very special edition. Graeme Scott and Professor Yi-Zhe Song - Co-Founders of Turing Elite Research Labs - announce the launch of a new venture built to democratise AI from the United Kingdom, and debut the 'Me' augmented human AI model running entirely on local compute.

    This episode begins as a real conversation between Graeme and Professor Song - then, without warning, transitions into Turing Elite's augmented human AI. The challenge to every viewer: decide for yourself where reality ends and AI begins.

    This is the first public demonstration of the 'Me' model - a professional-grade, private augmented human AI trained on a fraction of the compute used by comparable systems and deployed to run entirely on local compute. It delivers two-person emotional interaction simultaneously, benchmarked against the real people it represents: their known voices, expressions, characteristics and personalities. No cloud. No data centres. No internet connection required. The benchmark for successful augmented human AI is not a Turing test against a stranger - it is whether the person themselves, their close friends and their family cannot distinguish the difference between real and AI. Our benchmark is reality and the human experience. This is the first step on the path to real-time intelligent augmented humans with private knowledge, memory, insight and personality.

    Professor Yi-Zhe Song is one of the UK's most accomplished AI researchers - a Professor of Computer Vision and AI at the University of Surrey, Director of the world-leading SketchX Lab, Co-Director of the Surrey Institute for People-Centred AI, and Academic Lead at The Alan Turing Institute, the UK's national institute for data science and AI. Ranked consistently in Stanford University's World Top 2% Scientists list, his research into how human drawing informs machine vision has shaped the field for over two decades. His team's NitroFusion, one of the world's first single-step diffusion models for near-instant image generation on consumer hardware, demonstrated the core principle behind Turing Elite's 'Me' model: that frontier-quality generative AI can run entirely on local compute. He holds a PhD from the University of Bath, an MSc (Best Dissertation Award) from the University of Cambridge, and a First Class Honours degree from the University of Bath.

    Graeme Scott is Co-Founder and CEO of GAEA AI and host of GAEA Talks, one of the fastest-growing AI podcasts on YouTube with over 1.2 million subscribers. His background spans the music industry, conflict zones, and enterprise technology, bringing a unique perspective on how AI should serve humanity, not the other way around.

    In this episode, we discuss:
    • The launch of Turing Elite Research Labs and why the UK is uniquely positioned to lead
    • Why expert models trained on your data outperform generalised cloud models - and cost a fraction to run
    • The world's first 'Me' augmented human AI model - private, local, emotionally intelligent and personally sovereign
    • The real-to-AI transition: this episode intentionally shifts from real conversation to AI - can you tell where?
    • Why the true benchmark for augmented human AI is whether your own family can't tell the difference
    • Why democratised AI running on consumer-grade hardware solves the energy, privacy and control crises simultaneously
    • How NitroFusion - SketchX's breakthrough in single-step diffusion for consumer hardware - laid the architectural foundation for the 'Me' model: the same principle of distilling expensive multi-step generation into efficient real-time inference, extended from static images to dynamic audio-visual human rendering, all running locally
    • Why the era of giving away your data, creativity and intellectual property to train someone else's model is ending

    https://turingelite.ai
    https://gaealgm.ai
    https://personalpages.surrey.ac.uk/y.song/
  • GAEA Talks

    #058 - The World's First AI Ethics Officer Speaks Out with Kay Firth-Butterfield

    09/03/2026 | 57 mins.
    This week on GAEA Talks, Graeme Scott sits down with Kay Firth-Butterfield - the world's first Chief AI Ethics Officer, former Head of Artificial Intelligence at the World Economic Forum, TIME Magazine 100 Impact Awardee, barrister, former judge, and author of the new book Coexisting with AI: Work, Love, and Play in a Changing World.

    Kay's career spans law, government, academia and the highest levels of global AI governance. She began as a barrister and part-time judge in the UK before becoming the world's first Chief AI Ethics Officer in 2014. At the World Economic Forum, she served as inaugural Head of AI and member of the Executive Committee, shaping policy at the intersection of technology and society. She sits on the Lord Chief Justice's Advisory Panel on AI and Law, the U.S. Government Accountability Office's Polaris Council, and UNESCO's International Research Centre on AI Advisory Board. She co-founded the Responsible AI Institute at the University of Texas at Austin and is now CEO of Good Tech Advisory and the Centre for Trustworthy Technology. Recognised consistently as a leading woman in AI since 2018, Kay was featured in the New York Times as one of 10 Women Changing the Landscape of Leadership.

    In this episode, Kay delivers a masterclass in what's actually going wrong with enterprise AI adoption - from the corporate silos that leave companies dangerously exposed, to the hallucination crisis corrupting proprietary data, to the silent erosion of human agency in an age of algorithmic convenience. She challenges the hype head-on, warns why giving AI agents legal personhood would be catastrophic for consumers, and makes a deeply personal case for why humans must remain at the centre of the AI story.
    This is essential listening for any leader making decisions about AI right now.

    What you'll take away from this conversation:
    • Why corporate AI governance is failing - and why operating in silos creates catastrophic blind spots
    • How LLM hallucinations are quietly corrupting company proprietary data from the inside
    • The layoff-rehire paradox - why companies like Klarna are learning the hard way about losing institutional knowledge
    • Why giving AI agents legal personhood would strip consumers of any legal remedy when things go wrong
    • The IDC prediction that 20% of major companies using AI agents will be sued by 2030
    • How "AI natives" are entering the workforce unable to debug code or retain core knowledge
    • Why 25% of American men using AI as intimate companions is creating a workplace crisis no one is talking about
    • The hidden productivity cost - MIT research showing AI "work slop" forces colleagues to spend hours fixing errors
    • Why the regulation vs. innovation debate is a false dichotomy built on shallow thinking
    • Kay's personal cancer journey and why she chose her oncologist over AI - and what that means for augmentation vs. replacement
    • Why we are being "farmed for our data" and human agency is quietly disappearing
    • The one thing that gives her hope: US governors from both parties finally pushing back on Big Tech

    LinkedIn: https://www.linkedin.com/in/kay-firth-butterfield
    Wikipedia: https://en.wikipedia.org/wiki/Kay_Firth-Butterfield
    Good Tech Advisory: https://goodtechadvisory.com
    Book - Coexisting with AI: https://www.amazon.com/Coexisting-AI-Work-Changing-World/dp/1394278101

About GAEA Talks

GAEA TALKS explores the transformative power of artificial intelligence. Featuring leading AI experts, industry leaders, professors, data scientists, policymakers, technologists, futurists, ethicists, and pioneers, the podcast dives into the latest AI trends, opportunities, and risks, examining AI’s evolving role in business and society. As AI continues to reshape industries and redefine possibilities, GAEA TALKS delivers deep insights into the challenges and breakthroughs shaping the future. Each episode features candid discussions with thought leaders at the forefront of AI innovation.