EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines
Guest: Alan Braithwaite, Co-founder and CTO @ RunReveal
Topics:
SIEM is hard, and many vendors have discovered this over the years. You need to get storage, security, and integration complexity just right. You also need to be better than the incumbents. How would you approach this now?
Decoupled SIEM vs. the SIEM/EDR/XDR combo: these point in opposite directions, so which side do you think will win?
In a world where data volumes are exploding, especially in cloud environments, you're building a SIEM with ClickHouse as its backend, focusing on both parsed and raw logs. What's the core advantage of this approach, and how does it address the limitations of traditional SIEMs in handling scale?
Cribl, Bindplane, and other "security pipeline vendors" are all the rage. Wouldn't it be logical to just build this into a modern SIEM?
You're envisioning a "Pipeline QL" that compiles to SQL, enabling "detection in SQL." This sounds like a significant shift, and perhaps not for the better? (Anton is horrified, for once.) How does this approach affect detection engineering?
With Sigma HQ support out of the box, and the ability to convert SPL to Sigma, you're clearly aiming for interoperability. How crucial is this to your vision, and how do you see it benefiting the security community?
What is SIEM in 2025 and beyond? What's the endgame for security telemetry data? Is this truly SIEM 3.0, 4.0, or whatever-point-oh?
Resources:
EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
EP123 The Good, the Bad, and the Epic of Threat Detection at Scale with Panther
EP190 Unraveling the Security Data Fabric: Need, Benefits, and Futures
“20 Years of SIEM: Celebrating My Dubious Anniversary” blog
“RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check” blog
tl;dr security newsletter
Introducing a RunReveal Model Context Protocol Server!
MCP: Building Your SecOps AI Ecosystem
AI Runbooks for Google SecOps: Security Operations with Model Context Protocol
--------
27:09
EP227 AI-Native MDR: Betting on the Future of Security Operations?
Guests:
Eric Foster, CEO of Tenex.AI
Venkata Koppaka, CTO of Tenex.AI
Topics:
Why is your AI-powered MDR special? Why start an MDR from scratch using AI?
Why should users bet on an “AI-native” MDR instead of an MDR that already has its act together and is now applying AI to an existing set of practices?
What’s the current breakdown in labor between your human SOC analysts and your AI SOC agents? How do you expect this to evolve, and how will that change your unit economics?
What tasks are humans uniquely good at in today’s SOC? How do you expect that to change in the next 5 years?
We hear concerns about SOC AI missing things, but we know humans miss things all the time too. So how do you manage buyer concerns about AI agents missing things?
Let’s talk about how you’re helping customers measure your efficacy overall. What metrics should organizations prioritize when evaluating an MDR?
Resources:
Video
EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025 (quote from Eric in the title!)
EP10 SIEM Modernization? Is That a Thing?
Tenex.AI blog
“RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check” blog
The original ASO 10X SOC paper that started it all (2021)
“Baby ASO: A Minimal Viable Transformation for Your SOC” blog
“The Return of the Baby ASO: Why SOCs Still Suck?” blog
"Learn Modern SOC and D&R Practices Using Autonomic Security Operations (ASO) Principles" blog
--------
23:58
EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams
Guest: Christine Sizemore, Cloud Security Architect, Google Cloud
Topics:
Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain?
I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain?
We like to say that history might not repeat itself, but it does rhyme. What are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains?
We’ve talked a lot about technology and process. What are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development?
We are all hearing about agentic security, so can we just ask the AI to secure itself?
What are the top 3 things a typical org should do to secure its AI software supply chain?
Resources:
Video
“Securing AI Supply Chain: Like Software, Only Not” blog (and paper)
“Securing the AI software supply chain” webcast
EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments
Protect AI issue database
“Staying on top of AI Developments”
“Office of the CISO 2024 Year in Review: AI Trust and Security”
“Your Roadmap to Secure AI: A Recap” (2024)
"RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check" (references our "data as code" presentation)
--------
24:39
EP225 Cross-promotion: The Cyber-Savvy Boardroom Podcast: EP2 Christian Karam on the Use of AI
Hosts:
David Homovich, Customer Advocacy Lead, Office of the CISO, Google Cloud
Alicja Cade, Director, Office of the CISO, Google Cloud
Guest: Christian Karam, Strategic Advisor and Investor
Resources:
EP2 Christian Karam on the Use of AI (as aired originally)
The Cyber-Savvy Boardroom podcast site
The Cyber-Savvy Boardroom podcast on Spotify
The Cyber-Savvy Boardroom podcast on Apple Podcasts
The Cyber-Savvy Boardroom podcast on YouTube
Now hear this: A new podcast to help boards get cyber savvy (without the jargon)
Board of Directors Insights Hub
Guidance for Boards of Directors on How to Address AI Risk
--------
24:46
EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps
Guest: Diana Kelley, CSO at Protect AI
Topics:
Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with "Dev" replaced by "ML"? This has nothing to do with SecOps, right?
What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
What are the top differences between LLM/chatbot AI security and AI agent security?
Resources:
“Airline held liable for its chatbot giving passenger bad advice - what this means for travellers”
“ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem’ Forever”
Secure by Design for AI by Protect AI
“Securing AI Supply Chain: Like Software, Only Not”
OWASP Top 10 for Large Language Model Applications
OWASP Top 10 for AI Agents (draft)
MITRE ATLAS
“Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper)
LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes
Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure.
We’re going to do our best to avoid security theater and cut to the heart of real security questions and issues. Expect us to question threat models and ask whether something is done for the data subject’s benefit or just for organizational benefit.
We hope you’ll join us if you’re interested in where technology overlaps with process and bumps up against organizational design. We’re hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can’t keep as the world moves from on-premises computing to cloud computing.