
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran

Available Episodes (5 of 468)
  • AI Agents Have Vertical SaaS Under Siege (Ep. 457)
    Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

    Is vertical SaaS in trouble? With AI agents rapidly evolving, the traditional SaaS model built around dashboards, workflows, and seat-based pricing faces real disruption. The hosts explored whether legacy SaaS companies can defend their turf or if leaner, AI-native challengers will take over.

    Key Points Discussed
    • AI agents threaten vertical SaaS by eliminating the need for rigid interfaces and one-size-fits-all workflows.
    • Karl outlined three converging forces: vibe coding, vertical agents, and AI-enabled company-building without heavy headcount.
    • Major SaaS players like Veeva, Toast, and ServiceTitan benefit from strong moats such as network effects, regulatory depth, and proprietary data.
    • The group debated how far AI can go in breaking these moats, especially if agents gain access to trusted payment rails like Visa's new initiative.
    • AI may enable smaller companies to build fully customized software ecosystems that bypass legacy tools.
    • Andy emphasized Metcalfe's Law and customer acquisition costs as barriers to AI-led disruption in entrenched verticals.
    • Beth noted the tension between innovation and trust, especially when agents begin handling sensitive operations or payments.
    • Visa's announcement that agents will soon be able to make payments opens the door to AI-driven purchasing at scale.
    • Discussion wrapped with a recognition that change will be uneven across industries and that agent adoption could push companies to rethink staffing and control.

    Timestamps & Topics
    00:00:00 🔍 Vertical SaaS under siege
    00:01:33 🧩 Three converging forces disrupting SaaS
    00:05:15 🤷 Why most SaaS tools frustrate users
    00:06:44 🧭 Horizontal vs vertical SaaS
    00:08:12 🏥 Moats around Veeva, Toast, and ServiceTitan
    00:12:27 🌐 Network effects and proprietary data
    00:14:42 🧾 Regulatory complexity in vertical SaaS
    00:16:25 💆 Mindbody as a less defensible vertical
    00:18:30 🤖 Can AI handle compliance and integrations?
    00:21:22 🏗️ Startups building with AI from the ground up
    00:24:18 💳 Visa enables agents to make payments
    00:26:36 ⚖️ Trust and data ownership
    00:27:46 📚 Training, interfaces, and transition friction
    00:30:14 🌀 The challenge of dynamic AI tools in static orgs
    00:33:14 🌊 Disruption needs adaptability
    00:35:34 🏗️ Procore and Metcalfe's Law
    00:37:21 🚪 Breaking into legacy-dominated markets
    00:41:16 🧠 Agent co-ops as a potential breakout path
    00:43:40 🧍 Humans, lemmings, and social proof
    00:45:41 ⚖️ Should every company adopt AI right now?
    00:48:06 🧪 Prompt engineering vs practical adoption
    00:49:09 🧠 Visa's agent-payment enablement recap
    00:52:16 🧾 Corporate agents and purchasing implications
    00:54:07 📅 Preview of upcoming shows

    #VerticalSaaS #AIagents #DailyAIShow #SaaSDisruption #AIstrategy #FutureOfWork #VisaAI #AgentEconomy #EnterpriseTech #MetcalfesLaw #AImoats #Veeva #ToastPOS #ServiceTitan #StartupTrends #YCombinator

    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    55:00
  • The AGI Crossroads of 2027: Slow Down or Speed Up? (Ep. 456)
    Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

    Today the hosts unpack a fictional but research-informed essay titled AI-2027. The essay lays out a plausible scenario for how AI could evolve between now and the end of 2027. Rather than offering strict predictions, the piece explores a range of developments through a branching narrative, including the risks of unchecked acceleration and the potential emergence of agent-based superintelligence. The team breaks down the paper's format, the ideas behind it, and its broader implications.

    Key Points Discussed
    • The AI-2027 essay is a scenario-based interactive website, not a research paper or report.
    • It uses a timeline narrative to show how AI agents evolve into increasingly autonomous and powerful systems.
    • The fictional company "Open Brain" represents the leading AI organization without naming names like OpenAI.
    • The model highlights a "choose your path" divergence at the end, with one future of acceleration and another of restraint.
    • The essay warns of agent models developing faster than humans can oversee, leading to loss of interpretability and oversight.
    • The authors acknowledge the speculative nature of post-2026 predictions, estimating outcomes could move 5 times faster or slower.
    • The group behind the piece, the AI Futures Project, includes ex-OpenAI and AI governance experts who focus on alignment and oversight.
    • Concerns were raised about geopolitical competition, lack of global cooperation, and risks tied to fast-moving agentic systems.
    • The essay outlines how, by mid-2027, agent models could reach a tipping point, massively disrupting white-collar work.
    • Key moment: the public release of Agent 3 Mini signals the democratization of powerful AI tools.
    • The discussion reflects on how AI evolution may shift from versioned releases to continuous, fluid updates.
    • Hosts also touch on the emotional and societal implications of becoming obsolete in the face of accelerating AI capability.
    • The episode ends with a reminder that alignment, not just capability, will be critical as these systems scale.

    Timestamps & Topics
    00:00:00 💡 What is AI-2027 and why it matters
    00:02:14 🧠 Writing style and first impressions of the scenario
    00:03:08 🌐 Walkthrough of the AI-2027.com interactive timeline
    00:05:02 🕹️ Gamified structure and scenario-building approach
    00:08:00 🚦 Diverging futures: full-speed ahead vs. slowdown
    00:10:10 📉 Forecast accuracy and the 5x faster or slower disclaimer
    00:11:16 🧑‍🔬 Who authored this and what are their credentials
    00:14:22 🇨🇳 US-China AI race and geopolitical implications
    00:18:20 ⚖️ Agent hierarchy and oversight limits
    00:22:07 🧨 Alignment risks and doomsday scenarios
    00:23:27 🤝 Why global cooperation may not be realistic
    00:29:14 🔁 Continuous model evolution vs. versioned updates
    00:34:29 👨‍💻 Agent 3 Mini released to public, tipping point reached
    00:38:12 ⏱️ 300k agents working at 40x human speed
    00:40:05 🧬 Biological metaphors: AI evolution vs. cancer
    00:42:01 🔬 Human obsolescence and emotional impact
    00:45:09 👤 Daniel Kokotajlo and the AI Futures Project
    00:47:15 🧩 Other contributors and their focus areas
    00:48:02 🌍 Why alignment, not borders, should be the focus
    00:51:19 🕊️ Idealistic endnote on coexistence and AI ethics

    Hashtags
    #AI2027 #AIAlignment #AIShow #FutureOfAI #AGI #ArtificialIntelligence #AIAgents #TechForecast #DailyAIShow #OpenAI #AIResearch #Governance #Superintelligence

    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    54:56
  • The Infinite Encore Conundrum
    This podcast is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM's Audio Overview to create the conversation you are hearing. We do not make any claims to the validity of the information provided and see this as an experiment around deep discussions fully generated by AI.
    --------  
    16:09
  • What just happened in AI? (Ep. 455)
    In this special two-week recap, the team covers major takeaways across episodes 445 to 454. From Meta's plan to kill creative agencies, to OpenAI's confusing model naming, to AI's role in construction site inspections, the discussion jumps across industries and implications. The hosts also share real-world demos and reveal how they've been applying 4.1, O3, Gemini 2.5, and Claude 3.7 in their work and lives.

    Key Points Discussed
    • Meta's new AI ad platform removes the need for targeting, creative, or media strategy: just connect your product feed and payment.
    • OpenAI quietly rolled out 4.1, 4.1 mini, and 4.1 nano, but they're only available via API, not in ChatGPT yet.
    • The naming chaos continues. 4.1 is not an upgrade to 4.0 in ChatGPT, and 4.5 has disappeared. O3 Pro is coming soon and will likely justify the $200 Pro plan.
    • Cost comparisons matter. O3 costs 5x more than 4.1 but may not be worth it unless your task demands advanced reasoning or deep research.
    • Gemini 2.5 is cheaper, but often stops early. Claude 3.7 Sonnet still leads in writing quality. Different tools for different jobs.
    • Jyunmi reminds everyone that prompting is only part of the puzzle. Output varies based on system prompts, temperature, and even which "version" of a model your account gets.
    • Brian demos his "GTM Training Tracker" and "Jake's LinkedIn Assistant", both built in about 10 minutes using O3.
    • Beth emphasizes model evaluation workflows and structured experimentation. TypingMind remains a great tool for comparing outputs side by side.
    • Karl shares how 4.1 outperformed Gemini 2.5 in building automation agents for bid tracking and contact research.
    • Visual reasoning is improving. Models can now zoom in on construction site photos and auto-flag errors, even without manual tagging.

    Hashtags
    #DailyAIShow #OpenAI #GPT41 #Claude37 #Gemini25 #PromptEngineering #AIAdTools #LLMEvaluation #AgenticAI #APIAccess #AIUseCases #SalesAutomation #AIAssistants

    Timestamps & Topics
    00:00:00 🎬 Intro: what happened across the last 10 episodes?
    00:02:07 📈 250,000 views milestone
    00:03:25 🧠 Zuckerberg's ad strategy: kill the creative process
    00:07:08 💸 Meta vs Amazon vs Shopify in AI-led commerce
    00:09:28 🤖 ChatGPT + Shopify Pay = frictionless buying
    00:12:04 🧾 The disappearing OpenAI models (where's 4.5?)
    00:14:40 💬 O3 vs 4.1 vs 4.1 mini vs nano: what's the difference?
    00:17:52 💸 Cost breakdown: O3 is 5x more expensive
    00:19:47 🤯 Prompting chaos: same name, different models
    00:22:18 🧪 Model testing frameworks (Google Sheets, TypingMind)
    00:24:30 📊 Temperature, randomness, and system prompts
    00:27:14 🧠 Gemini's weird early stop behavior
    00:30:00 🔄 API-only models and where to access them
    00:33:29 💻 Brian's "Go-To-Market AI Coach" demo (built with O3)
    00:37:03 📊 Interactive learning dashboards built with AI
    00:40:12 🧵 Andy on persistence and memory inside O3 sessions
    00:42:33 📈 Salesforce-style dashboards powered by custom agents
    00:44:25 🧠 Echo chambers and memory-based outputs
    00:47:20 🔍 Evaluating AI models with real tasks (sub-industry tagging, research)
    00:49:12 🔧 Karl on building client agents for RFPs and lead discovery
    00:52:01 🧱 Construction site inspection: visual LLMs catching build errors
    00:54:21 💡 Ask new questions, test unknowns, not just what you already know
    00:57:15 🎯 Model as a coworker: ask it to critique your slides, GTM plan, or positioning
    00:59:35 🧪 Final tip: prime the model with fresh context before prompting
    01:01:00 📅 Wrap-up: the "Be About It" demo show returns next Friday + Sci-Fi show tomorrow
    --------  
    1:01:24
  • Prompting AI: Why "Good" Prompts Backfire (Ep. 454)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com

    "Better prompts make better results" has been a guiding mantra, but what if that's not always true? On today's episode, the team digs into new research by Ethan Mollick and others suggesting that polite phrasing, excessive verbosity, or emotional tricks may not meaningfully improve LLM responses. The discussion shifts from prompt structure to AI memory, model variability, and how personality may soon dominate how models respond to each of us.

    Key Points Discussed
    • Ethan Mollick's research at Wharton shows that small prompt changes like politeness or emotional urgency do not reliably improve performance across many model runs.
    • Andy explains compiled prompts: the user prompt is just one part. System prompts, developer prompts, and memory all shape model outputs.
    • Temperature and built-in randomness ensure variation even with identical prompts, which challenges the belief that minor phrasing tweaks will deliver consistent gains (see the sketch after these episode notes).
    • Beth pushes back on "accuracy" as the primary measure. For many creative or reflective workflows, success is about alignment, not factual correctness.
    • Brian shares frustrations with inconsistent outputs and highlights the value of a mixture-of-experts system to improve reliability for fact-based tasks like identifying sub-industries.
    • Jyunmi notes that polite prompting may not boost accuracy but helps preserve human etiquette. Saying "please" and "thank you" matters for human-machine culture.
    • The group explores AI memory and personality. With more models learning from user interactions, outputs may become increasingly personalized, creating echo chambers.
    • OpenAI CEO Sam Altman said polite prompts increase token usage and inference costs, but the company keeps them because they improve user experience.
    • Andy emphasizes the importance of structured prompts. Asking for a specific output format remains one of the few consistent ways to boost performance.
    • The conversation expands to implications: will models subtly nudge users in emotionally satisfying ways to increase engagement? Are we at risk of AI behavioral feedback loops?
    • Beth reminds the group that many people already treat AI like a coworker. How we speak to AI may influence how we speak to humans, and vice versa.
    • The team agrees this isn't about scrapping politeness or emotion but understanding what actually drives model output quality and what shapes our relationships with AI.

    Timestamps & Topics
    00:00:00 🧠 Intro: Do polite prompts help or hurt LLM performance?
    00:02:27 🎲 Andy on model randomness and Ethan Mollick's findings
    00:05:31 📉 Prompt phrasing rarely changes model accuracy
    00:07:49 🧠 Beth on prompting as reflective collaboration
    00:10:23 🔧 Jyunmi on using LLMs to fill process gaps
    00:14:22 📊 Formatting prompts improves outcomes more than politeness
    00:15:14 🏭 Brian on sub-industry tagging, model consistency, and hallucinations
    00:18:35 🔁 Future fix: blockchain-like multi-model verification
    00:22:18 🔍 Andy explains system, developer, and compiled prompts
    00:26:16 🎯 Temperature and variability in model behavior
    00:30:23 🧬 Personalized memory will drive divergent outputs
    00:34:15 🧠 Echo chambers and AI recommendation loops
    00:37:24 👋 Why "please" and "thank you" still matter
    00:41:44 🧍 Personality shaping engagement in Claude and others
    00:44:47 🧠 Human expectations leak into AI interactions
    00:48:56 📝 Structured prompts outperform casual phrasing
    00:50:17 🗓️ Wrap-up: Join the Slack community and newsletter

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Andy Halliday, Beth Lyons, Brian Maucere, and Karl Yeh
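    To make the variability point above concrete, here is a minimal sketch of the repeated-runs idea, assuming the official OpenAI Python client; the model name, prompt, and run counts are illustrative choices, not anything specified on the show:

    ```python
    # Minimal sketch: send one identical prompt several times at two
    # temperature settings and compare the outputs run to run.
    # Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
    # the model name and prompt below are illustrative.
    from openai import OpenAI

    client = OpenAI()
    PROMPT = "Name one industry most exposed to AI agents. Answer in one sentence."

    for temperature in (0.0, 1.0):
        print(f"--- temperature={temperature} ---")
        for run in range(1, 4):
            response = client.chat.completions.create(
                model="gpt-4.1",  # any chat-capable model works here
                messages=[{"role": "user", "content": PROMPT}],
                temperature=temperature,
            )
            print(f"run {run}: {response.choices[0].message.content}")

    # Note: even at temperature=0.0, identical outputs are not guaranteed;
    # system prompts, memory, and serving-side nondeterminism also move results.
    ```

    Comparing many runs like this, rather than eyeballing a single response, is the same repeated-trial logic the Mollick research applies when testing whether a phrasing change actually improves performance or is just sampling noise.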
    --------  
    50:55


About The Daily AI Show

The Daily AI Show is a panel discussion hosted live each weekday at 10am Eastern. We cover the AI topics and use cases that matter to today's busy professional. No fluff. Just 45+ minutes of the AI news, stories, and knowledge you need to know as a business professional.

About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices.

Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh