
EP257 Beyond the 'Kaboom': What Actually Breaks When OT Meets the Cloud?
05/01/2026 | 27 mins.
Guest: Chris Sistrunk, Technical Leader, OT Consulting, Mandiant
Topics:
When we hear "attacks on Operational Technology (OT)," some think of Stuxnet targeting PLCs, or even the backdoored pipeline control software plot from the 1980s. Is this space always so spectacular, or are there less "kaboom"-style attacks we are more concerned about in practice?
Given the old "air-gapped" mindset of many OT environments, what are the most common security gaps or blind spots you see when organizations start to integrate cloud services for things like data analytics or remote monitoring?
How is the shift to cloud connectivity - for things like data analytics, centralized management, and remote access - changing the security posture of these systems? What's a real-world example of a positive security outcome you've seen as a direct result of this cloud adoption?
How do the Tactics, Techniques, and Procedures outlined in the MITRE ATT&CK for ICS framework change or evolve when attackers can leverage cloud-based reconnaissance and command-and-control infrastructure to target OT networks? Can you provide an example?
OT environments are generating vast amounts of operational data. What in that data is interesting for OT Detection and Response (D&R)?
Resources:
Video version
Cybersecurity Forecast 2026 report by Google
"Complex, hybrid manufacturing needs strong security. Here's how CISOs can get it done" blog
"Security Guidance for Cloud-Enabled Hybrid Operational Technology Networks" paper by Google Cloud Office of the CISO
DEF CON 23 - Chris Sistrunk - NSM 101 for ICS
MITRE ATT&CK for ICS

EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance
15/12/2025 | 32 mins.
Guest: Bruce Schneier
Topics:
Do you believe that AI is going to end up being a net improvement for defenders or attackers? Is the answer different in the short term vs. the long term?
We're excited about the new book you have coming out with your co-author Nathan Sanders, "Rewiring Democracy". We want to ask the same question, but for society: do you think AI is going to end up helping the forces of liberal democracy, or the forces of corruption, illiberalism, and authoritarianism?
If exploitation is always cheaper than patching (and attackers don't follow as many rules and procedures), do we have a chance here? If this requires pervasive and fast "humanless" automatic patching (kind of like what Chrome has done for years), will this ever work for most organizations?
Do defenders have to do the same and just discover and fix issues faster? Or can we use AI somehow differently? Does this make defense in depth more important?
How do you see AI changing how society develops and maintains trust?
Resources:
"Rewiring Democracy" book
"Infomocracy" trilogy
Agentic AI's OODA Loop Problem
EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking
AI and Trust
AI and Data Integrity
EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
RSA 2025: AI's Promise vs. Security's Past — A Reality Check

EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking
08/12/2025 | 29 mins.
Guest: Heather Adkins, VP of Security Engineering, Google
Topics:
The term "AI Hacking Singularity" sounds like pure sci-fi, yet you and some other very credible folks are using it to describe an imminent threat. How much of this is hyperbole to shock the complacent, and how much is based on actual, observed capabilities today? Can autonomous AI agents really achieve "exploit at machine velocity" without human intervention for the zero-day discovery phase? On the other hand, why may it actually not happen?
When we talk about autonomous AI attack platforms, are we talking about highly resourced nation-states and top-tier criminal groups, or will this capability truly be accessible to the average threat actor within the next 6-12 months? What's the "Metasploit" equivalent for AI-powered exploitation that will be ubiquitous?
Can you paint a realistic picture of the worst-case scenario that autonomous AI hacking enables? Is it a complete breakdown of patch cycles, a global infrastructure collapse, or something worse?
If attackers are operating at "machine speed," the human defender is fundamentally outmatched. Is there a genuine "AI-to-AI" counter-tactic that doesn't just devolve into an infinite arms race? Or can we counter without AI at all?
Given that AI can expedite vulnerability discovery, how does this amplified threat vector impact the software supply chain? If a dependency is compromised within minutes of a new vulnerability being introduced, does this force the industry to completely abandon the open-source model, or does it demand a radical, real-time security scanning and patching system that only a handful of tech giants can afford?
Are current proposed regulations, like those focusing on model safety or disclosure, even targeting the right problem? If the real danger is the combinatorial speed of autonomous attack agents, what simple, impactful policy change should world governments prioritize right now?
Resources:
"Autonomous AI hacking and the future of cybersecurity" article
EP20 Security Operations, Reliability, and Securing Google with Heather Adkins
Introducing CodeMender: an AI agent for code security
EP251 Beyond Fancy Scripts: Can AI Red Teaming Find Truly Novel Attacks?
Daniel Miessler site and podcast
"How SAIF can accelerate secure AI experiments" blog
"Staying on top of AI Developments" blog

EP254 Escaping 1990s Vulnerability Management: From Unauthenticated Scans to AI-Driven Mitigation
01/12/2025 | 31 mins.
Guest: Caleb Hoch, Consulting Manager on the Security Transformation Team, Mandiant, Google Cloud
Topics:
How has vulnerability management (VM) evolved beyond basic scanning and reporting, and what are the biggest gaps between modern practices and what organizations are actually doing?
Why are so many organizations stuck with 1990s VM practices? Why is mitigation planning still hard for so many?
Why do many organizations, including large ones, still rely on unauthenticated scans despite the known importance of authenticated scanning for accurate results?
What constitutes a "gold standard" vulnerability prioritization process in 2025 that moves beyond CVSS scores to incorporate threat intelligence, asset criticality, and other contextual factors?
What are the primary human and organizational challenges in vulnerability management, and how can issues like unclear governance, lack of accountability, and fear of system crashes be overcome?
How is AI impacting vulnerability management, and does the shift to cloud environments fundamentally change VM practices?
Resources:
EP109 How Google Does Vulnerability Management: The Not So Secret Secrets!
EP246 From Scanners to AI: 25 Years of Vulnerability Management with Qualys CEO Sumedh Thakar
EP248 Cloud IR Tabletop Wins: How to Stop Playing Security Theater and Start Practicing
How Low Can You Go? An Analysis of 2023 Time-to-Exploit Trends
Mandiant M-Trends 2025
EP204 Beyond PCAST: Phil Venables on the Future of Resilience and Leading Indicators
Mandiant Vulnerability Management

EP253 The Craft of Cloud Bug Hunting: Writing Winning Reports and Secrets from a VRP Champion
24/11/2025 | 28 mins.
Guests: Sivanesh Ashok, bug bounty hunter; Sreeram KL, bug bounty hunter
Topics:
We hear from the Cloud VRP team that you write excellent bug bounty reports - is there any advice you'd give to other researchers when they write reports?
You are one of Cloud VRP's top researchers and won the MVH (Most Valuable Hacker) award at their event in June - what do you think makes you so successful at finding issues?
What is a BugSWAT?
What do you find most enjoyable and least enjoyable about the VRP?
What is the single best piece of advice you'd give an aspiring cloud bug hunter today?
Resources:
EP220 Big Rewards for Cloud Security: Exploring the Google VRP
Cloud Vulnerability Reward Program Rules
Insights from BugSWAT
Google Cloud's Vulnerability Reward Program
Critical Thinking Podcast

Cloud Security Podcast by Google