NVIDIA’s Kari Briski on How to Use NVIDIA Nemotron Open-Source AI
Learn how to use NVIDIA's Nemotron open-source AI models with VP Kari Briski. We cover what Nemotron is, minimum hardware specs, the difference between Nano/Super/Ultra tiers, when to choose local vs cloud AI, and practical deployment patterns for businesses. Perfect for anyone wanting to run powerful AI locally with full control and privacy.

Resources mentioned:
NVIDIA Nemotron Models: https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/
Start prototyping for free: https://build.nvidia.com/explore/discover
Subscribe to The Neuron newsletter: https://theneuron.ai
Watch more AI interviews: https://www.youtube.com/@TheNeuronAI
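If you want to try Nemotron before committing to local hardware, here's a minimal sketch of calling a hosted Nemotron model through the free build.nvidia.com catalog using the OpenAI-compatible Python client. The endpoint and model ID below are assumptions based on NVIDIA's public catalog; check build.nvidia.com for the current model names and grab your own API key there.

```python
# Minimal sketch: query a hosted Nemotron model via NVIDIA's OpenAI-compatible API.
# Assumptions: endpoint and model ID are examples; verify both on build.nvidia.com.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # NVIDIA hosted inference endpoint (assumption)
    api_key="nvapi-...",  # replace with your key from build.nvidia.com
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # example Nemotron model ID (assumption)
    messages=[{"role": "user", "content": "Summarize the Nano/Super/Ultra Nemotron tiers."}],
    temperature=0.2,
    max_tokens=300,
)
print(response.choices[0].message.content)
```

The same OpenAI-compatible pattern makes it easy to point at a local deployment later: swap the base_url for wherever you host the model and keep the rest of your code unchanged.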
--------
38:25
--------
AI vs Google Search....behind the scenes
AI search is fundamentally changing how people find information online, but it's also creating a Wild West of spam, manipulation, and brand impersonation. SEO expert Mark Williams-Cook joins us to discuss why he calls AI a "leaky bucket," how expired domains are gaming LLMs, and what the death of the link graph means for the future of search. We'll explore practical strategies for making your site visible to AI, the risks brands face from AI phishing, and whether SEO is truly dead or just evolving. Perfect for anyone who owns a website or runs a business.

Subscribe to The Neuron newsletter: https://theneuron.ai
Guest: Mark Williams-Cook - Director at Candour, Founder of AlsoAsked
Find Mark on LinkedIn: https://www.linkedin.com/in/markseo
Search with Candour podcast: https://withcandour.co.uk/podcast
--------
1:17:45
--------
AI Inference: Why Speed Matters More Than You Think (with SambaNova's Kwasi Ankomah)
Everyone's talking about the AI datacenter boom right now. Billion-dollar deals here, hundred-billion-dollar deals there. So why do data centers matter? It turns out AI inference (actually calling the AI and running it) is the hidden bottleneck slowing down every AI application you use (and new stuff yet to be released). In this episode, Kwasi Ankomah from SambaNova Systems explains why running AI models efficiently matters more than you think, how their revolutionary chip architecture delivers 700+ tokens per second, and why AI agents are about to make this problem 10x worse.

💡 This episode is sponsored by Gladia's Solaria - the speech-to-text API built for real-world voice AI. With sub-270ms latency, 100+ languages supported, and 94% accuracy even in noisy environments, it's the backbone powering voice agents that actually work. Learn more at gladia.io/solaria

🔗 Key Links:
• SambaNova Cloud: https://cloud.sambanova.ai
• Check out Solaria speech-to-text API: https://www.gladia.io/solaria
• Subscribe to The Neuron newsletter: https://theneuron.ai

🎯 What You'll Learn:
• Why inference speed matters more than model size
• How SambaNova runs massive models on 90% less power
• Why AI agents use 10-20x more tokens
• The best open source models right now
• What to watch for in AI infrastructure

➤ CHAPTERS
Timecode - Chapter Title
0:00 - Intro
2:14 - What is AI Inference?
3:19 - Why Inference is the Real Challenge
9:18 - A message from our sponsor, Gladia Solaria
10:16 - The 95% ROI Problem Discussion
13:47 - SambaNova's Revolutionary Chip Architecture
15:19 - Running DeepSeek's 670B Parameter Models
18:11 - Developer Experience & Platform
21:26 - AI Agents and the Token Explosion
24:33 - Model Swapping and Cost Optimization
31:30 - Energy Efficiency: 10kW vs 100kW
36:13 - Future of AI Models: Bigger vs Smaller
39:24 - Best Open Source Models Right Now
46:01 - AI Infrastructure: Next 12 Months
47:09 - Agents as Infrastructure
50:28 - Human-in-the-Loop and Trust
52:55 - Closing and Resources

Article Written by: Grant Harvey
Hosted by: Corey Noles and Grant Harvey
Guest: Kwasi Ankomah
Published by: Manique Santos
Edited by: Adrian Vallinan
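If you want to feel the inference-speed difference the episode describes, here's a minimal sketch of measuring rough tokens per second against SambaNova Cloud's OpenAI-compatible chat API. The base URL and model name are assumptions; confirm both in the SambaNova Cloud docs at cloud.sambanova.ai.

```python
# Minimal sketch: time a chat completion and estimate tokens/sec.
# Assumptions: base_url and model name are examples; verify in the SambaNova Cloud docs.
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sambanova.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_SAMBANOVA_API_KEY",
)

start = time.time()
response = client.chat.completions.create(
    model="Meta-Llama-3.1-8B-Instruct",  # example model ID (assumption)
    messages=[{"role": "user", "content": "Explain why inference speed matters for AI agents."}],
)
elapsed = time.time() - start

completion_tokens = response.usage.completion_tokens
print(f"{completion_tokens} tokens in {elapsed:.2f}s "
      f"≈ {completion_tokens / elapsed:.1f} tokens/sec")
```

This only captures end-to-end wall-clock throughput for a single request, but it's enough to compare providers or models the way the episode frames the problem: agents that chain many calls multiply whatever latency you measure here.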
--------
53:19
--------
First 48 Hours With Sora 2: The Good, The Bizarre, and Sam Altman
In this special hands-on episode, Corey Noles and Grant Harvey dive into OpenAI's Sora 2 - the AI video platform that's part TikTok, part meme generator, and 100% chaos. Watch as they navigate the new social media-style interface, create ridiculous videos featuring Sam Altman at a Berlin techno rave filled with clowns, and discover why Sam has become the "Tom from MySpace" of AI-generated content.

The hosts explore Sora 2's key features, including the viral "cameo" system that lets you loan your likeness to other creators, the remix functionality, and the surprisingly robust prompt editing capabilities. They demonstrate the platform's strengths (incredibly fast generation, social features, creative possibilities) and weaknesses (no timeline editor for scrubbing through footage, occasional voice mismatches, server delays during peak times).

Key takeaways include practical prompting tips for better results, how to set up and optimize your cameo preferences, and why being descriptive in your prompts makes all the difference. Grant and Corey also discuss the broader implications: Is this OpenAI's answer to TikTok? How does this fit into an AI landscape where every major player now has a social platform? And most importantly - why is everyone making Sam Altman breakdance?

Whether you're AI-curious or a seasoned prompt engineer, you'll learn how to navigate Sora 2's interface, avoid common pitfalls, and maybe even create your own viral AI video. Plus, find out why Corey's "realistic physique was not okay on Sora" and how he had to optimize his cameo settings with ChatGPT's help.

➤ CHAPTERS
Timecode - Chapter Title
0:00 - Introduction: What is Sora 2
1:03 - Sam Altman is the Tom from MySpace of AI
1:57 - Mobile App Tour & Social Features
3:42 - Remix Feature: Editing Sam's Bedtime
4:12 - The Secret to Better Prompting
6:40 - Profile Features & Your Drafts
8:44 - Understanding Cameos
10:40 - How to Set Up Your Cameo
13:00 - Optimizing Cameo Preferences with ChatGPT
15:05 - Live Demo of Creating a Video
18:25 - Using the Edit Feature
20:09 - First Video Results
23:32 - Fixing a Bad Video
26:49 - Finding & Following People
30:33 - Exploring Trending Videos
32:50 - Why OpenAI Built a Social Platform
35:34 - Training Data Implications
38:00 - Voice Input and Pro Prompting Tips
40:02 - The First AI-Native Social Media
45:43 - Final Thoughts

Resources:
- Sora 2 launch: https://openai.com/index/sora-2/
- Download the app: https://apps.apple.com/us/app/sora-by-openai/id6744034028
- Sora app on the web: https://sora.chatgpt.com/explore

P.S.: First comment gets an invite code. Grant has 4 atm :)
--------
46:34
--------
How OpenAI Beat Every Human Team at the World's Hardest Coding Competition
In this episode, we're joined by Ahmed El-Kishky, research lead at OpenAI, to discuss their historic victory at the International Collegiate Programming Contest (ICPC), where their AI system solved all 12 problems, beating every human team in the world finals. We dive into how they combined GPT-5 with experimental reasoning models, the dramatic last-minute solve, and what this means for the future of programming and AI-assisted science. Ahmed shares behind-the-scenes stories from Azerbaijan, explains how AI learns to test its own code, and discusses OpenAI's path from this win to automating scientific discovery over the coming months and years.

Subscribe to The Neuron: https://theneuron.ai
WisprFlow: https://wisprflow.ai/neuron
OpenAI: https://openai.com
The Neuron covers the latest AI developments, trends and research, hosted by Grant Harvey and Corey Noles. Digestible, informative and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available every Tuesday on all podcasting platforms and YouTube.
Subscribe to our newsletter: https://www.theneurondaily.com/subscribe