
137. New Season. New Format. Same AI Mayhem?
18/10/2025 | 54 mins.
We’re back after a long summer break with a new format, live recording, and a whole lot to say about the state of AI, tech, and creativity.

In this episode, David, Jo, and Lena dig into:
- Google’s decision to limit search results to 10: What does this mean for small businesses, SEO, and information access?
- Is AI making us lazy, dumber, or just revealing how unoriginal we’ve always been?
- The looming AI bubble: Are we headed for a dot-com-style crash?
- Creativity vs. automation: Is originality dead, or just evolving?

Also in the mix:
- David’s ChatGPT agent that refuses to do its job
- Jo’s trust issues with AI trip planners
- Lena’s sharp take on Gen Z and critical thinking
- And a detour into quantum computing, fusion power, and alien tech

This one’s equal parts conversation, curiosity, and mild chaos, just how we like it. Subscribe, share, and stay tuned. New season. New format. Same mayhem.

136. Marco Ramilli: Understanding AI: The Importance of Detecting Fake Content
4/7/2025 | 33 mins.
Marco Ramilli joins us to discuss the urgent need for technology that can identify whether images and videos have been generated by artificial intelligence. He shares that the idea for his software arose from a viral image of the Pope in a designer jacket, which sparked widespread debate and confusion over its authenticity. As the digital landscape becomes increasingly cluttered with manipulated content, Marco emphasizes the critical importance of distinguishing reality from fabrication. He explains how his software leverages advanced AI models to analyze visual content and determine its origins. This conversation sheds light on the broader implications of AI-generated media and the challenges we face in maintaining trust in what we see online.

Takeaways:
- Marco Ramilli discusses the importance of distinguishing real images from AI-generated content, especially in today's digital world.
- He shares how the viral fake image of the Pope in a puffer jacket inspired him to develop software for identifying AI-generated media.
- The technology developed by Marco can analyze photos, videos, and sounds to determine their authenticity, which is crucial for preventing misinformation.
- Marco emphasizes that the responsibility lies with technology developers to incorporate safeguards against misuse of AI-generated content.
- He notes that the rise of fake content can dilute public trust and complicate issues surrounding information verification in society.
- Marco believes that collaboration among companies is essential to address the challenges posed by the proliferation of AI-generated media.

135. Tim Carter & Simon Mirren: AI’s Missing Soul: Who’s Really Telling the Story?
20/6/2025 | 1h 6 mins.
In this episode of Creatives WithAI, Lena Robinson and David Brown are joined by Tim Carter (CEO) and Simon Mirren (Creative Officer) of Karmanline, a newly launched company focused on integrating AI into the content production industry. Together, they dive into the provocations and potential of AI in storytelling and content creation, from the philosophical to the practical.

Simon brings his decades of experience as a showrunner and storyteller (Criminal Minds, Versailles, etc.) to interrogate whether machines can ever grasp the soul of a narrative. Tim, with a background in IP, tech, and ethics, unpacks how generative tools can (and should) be leveraged across the production pipeline without sidelining the deep craft and collaboration that makes filmmaking human.

From fake AI startups and the dangers of anthropomorphising machines, to the creative chaos worth protecting in an increasingly optimised world, this episode is a must-listen for anyone working in, adjacent to, or even worried about AI’s influence on the future of media.

Takeaways:
- AI Creativity: A machine might generate content, but it can’t understand tension, soul, or satire. That still belongs to humans, at least for now.
- Middle Ground Disruption: AI is widening the talent pool, but in doing so, it’s making life harder for average-skilled professionals.
- Human-Centric Storytelling Matters: Technology can support storytelling, but it shouldn’t overwrite the stories of marginalised voices.
- Collective Craft is Sacred: Every role on a film set, from grips to carpenters, holds meaning. Disregarding that in pursuit of “efficiency” is both arrogant and shortsighted.
- Let’s Talk Back: The episode challenges us to stay involved, speak up, and resist the sanitisation of creativity through algorithmic convenience.

(PS – We want to hear from you! Got a question for Lena and Dave to tackle in a future episode? Drop it in the comments on our socials and we might feature it on the show.)

Find Tim and Simon Online:
- Tim Carter (CEO, Karmanline) – LinkedIn
- Simon Mirren (Creative Officer, Karmanline; Showrunner, Criminal Minds, Versailles, etc.) – LinkedIn
- Karmanline
- News article

Links referenced in this episode:
- 1st News Article Tim Mentioned: I tested Google's VEO 3 Myself: Here's what they don't show you in the keynote
- 2nd News Article Tim Mentioned: Video of Emily M Bender & Sébastien Bubeck at The Computer History Museum
- Article Lena Mentioned: 'Nobody wants a robot to read them a story!' The creatives and academics rejecting AI - at work and at home.

134. Dr. Sonia Tiwari: Why AI Characters Need Empathy and Boundaries
10/6/2025 | 59 mins.
Dr. Sonia Tiwari joins Iyabo Oba on Relationships WithAI to explore how her work in design, education, and character creation intersects with AI, particularly in emotionally safe and ethical ways. Sonia shares how AI characters can foster learning, how her personal journey shaped her approach, and why foundational skills matter in AI collaboration. The conversation delves into topics like dual empathy, the dangers of parasocial AI relationships, and the mental health chatbot she created, Limona. Sonia calls for thoughtful design, cultural awareness, and clear guardrails to ensure AI supports rather than harms, especially in children’s lives.

Top Three Takeaways:
- Design and Empathy Matter in AI - AI characters that feel relatable and emotionally safe can support learning and mental health, but their design must include ethical safeguards and clear limits.
- Foundational Skills Are Crucial - AI tools amplify existing expertise; they don’t replace it. Educators and designers with real-world experience use AI more responsibly and creatively.
- Guardrails Must Be Built In - Effective AI literacy and child safety require action on three levels: law, design, and culture. Without all three, AI can become emotionally manipulative or unsafe.

Links and References:
- Limona chatbot – Sonia’s CBT-based AI support tool
- Daniel Tiger’s Neighborhood and Mr. Rogers’ Neighborhood – character-led emotional learning
- Buddy.ai – AI tutor for kids
- Everyone AI – nonprofit working on AI and child safety
- CBT overview – understanding cognitive behavioural therapy
- Red teaming – stress-testing AI for safety flaws

133. Rola Aina: Why Emotionally Intelligent Leaders Will Win With AI
27/5/2025 | 57 mins.
In this episode of Relationships WithAI, Iyabo Oba sits down with tech transformation consultant and TurnTroop founder Rola Aina for a wide-ranging conversation on leadership, purpose, and building with AI.

Rola shares how her faith and upbringing shape her mission to make AI adoption both ethical and inclusive. She explains how TurnTroop is using African talent to help businesses implement AI responsibly, creating social impact while solving real enterprise problems. They discuss why generosity is not a soft skill but a strategic one, and how emotionally intelligent leadership can slow things down to build faster, fairer systems.

Rola also reflects on her use of AI tools like ChatGPT and Claude, why nuance and judgment still belong to humans, and how real connection and kindness must remain at the heart of how we build and lead in an AI-driven world.

Takeaways:
- AI as a Tool for Equity and Empowerment: Rola sees AI as a powerful tool to level the global playing field, particularly through her startup TurnTroop, which helps businesses adopt AI responsibly while building talent pipelines in Africa. She believes Africa doesn’t need saviours or more charities; it needs CEOs and commerce rooted in dignity and purpose.
- Leadership Grounded in Purpose, Generosity, and Emotional Intelligence: Rola champions emotionally intelligent leadership, rejecting the “move fast and break things” culture. She promotes slowing down to reflect, empowering teams, and building systems that include everyone. Her values of generosity and purpose shape how she leads, builds tools, and envisions ethical AI.
- Human Connection Must Remain Central in an AI-Driven World: While Rola utilises AI tools like ChatGPT and Claude as “chiefs of staff”, she emphasises their limitations, particularly in terms of nuance, judgment, and emotional presence. She urges founders and leaders to stay human, stay kind, and stay emotionally connected, especially in distributed teams.

Links:
- https://www.turntroop.ai/
- https://www.linkedin.com/in/rola-aina/



WithAI FM™