
The Secure Developer

Snyk

Available Episodes

5 of 168
  • Securing The Future Of AI With Dr. Peter Garraghan
    Episode Summary
    Machine learning has been around for decades, but as it evolves rapidly, the need for robust security grows even more urgent. Today on The Secure Developer, co-founder and CEO of Mindgard, Dr. Peter Garraghan, joins us to discuss his take on the future of AI. Tuning in, you’ll hear all about Peter’s background and career, his thoughts on deep neural networks, where we stand in the evolution of machine learning, and much more! We delve into why he chooses to focus on security in deep neural networks before he shares how he performs security testing. We also discuss large language model attacks and why security is the responsibility of all parties within an AI organisation. Finally, our guest shares what excites and scares him about the future of AI.
    Show Notes
    In this episode of The Secure Developer, host Danny Allan welcomes Dr. Peter Garraghan, CEO and CTO of Mindgard, a company specializing in AI red teaming. He is also a chair professor in computer science at Lancaster University, where he specializes in the security of AI systems.
    Dr. Garraghan discusses the unique challenges of securing AI systems, which he began researching over a decade ago, even before the popularization of the transformer architecture. He explains that traditional security tools often fail against deep neural networks because they are inherently random and opaque, with no code to unravel for semantic meaning. He notes that AI, like any other software, has risks: technical, economic, and societal.
    The conversation delves into the evolution of AI, from early concepts of artificial neural networks to the transformer architecture that underpins large language models (LLMs) today. Dr. Garraghan likens the current state of AI adoption to a "great sieve theory," in which many use cases are explored, but only a few, highly valuable ones will remain and become ubiquitous. He identifies useful applications such as coding assistance, document summarization, and translation.
    The discussion also explores how attacks on AI are analogous to traditional cybersecurity attacks, with prompt injection being similar to SQL injection. He emphasizes that a key difference is that AI can be socially engineered to reveal information, which is a new attack vector. The episode concludes with a look at the future of AI security, including the emergence of AI security engineers and the importance of everyone in an organization being responsible for security. Dr. Garraghan shares his biggest fear, the anthropomorphization of AI, and his greatest optimism, the emergence of exciting and useful new applications.
    Links
    Mindgard - Automated AI Red Teaming & Security Testing
    Snyk - The Developer Security Company
    Follow Us
    Our Website
    Our LinkedIn
    --------  
    38:19
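The prompt-injection / SQL-injection analogy above can be made concrete. The sketch below (a hypothetical two-row table, using only Python's stdlib sqlite3) shows why SQL injection works — untrusted input concatenated into a query changes the query's structure — and why parameterized queries fix it by keeping input as pure data:

```python
import sqlite3

# Hypothetical users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the attacker's quote characters escape the string literal
# and rewrite the WHERE clause, leaking every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'"
).fetchall()
print(len(leaked))  # 2 - both secrets leak

# Fixed: a parameterized query treats the input as data, never as syntax.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print(len(safe))  # 0 - no user is literally named "nobody' OR '1'='1"
```

The parallel to LLMs is that a prompt mixes instructions and data in one channel, just like the concatenated query — but, as the episode notes, there is no direct equivalent of the parameterized query for prompts, which is what makes prompt injection hard.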
  • The Future is Now with Michael Grinich (WorkOS)
    Episode Summary
    Will AI replace developers? In this episode, Snyk CTO Danny Allan chats with Michael Grinich, the founder and CEO of WorkOS, about the evolving landscape of software development in the age of AI. Michael shares a fascinating analogy, comparing the shift in software engineering to the historical evolution of music, from every family having a piano to the modern era of digital creation with tools like GarageBand. They explore the concept of "vibe coding," the future of development frameworks, and how lessons from the browser wars, specifically the advent of sandboxing, can inform how we build secure AI-driven applications.
    Show Notes
    In this episode, Danny Allan, CTO at Snyk, is joined by Michael Grinich, founder and CEO of WorkOS, to explore the profound impact of AI on the world of software development. Michael discusses WorkOS's mission to enhance developer joy by providing robust, enterprise-ready features like authentication, user management, and security, allowing developers to remain in a creative flow state.
    The conversation kicks off with the provocative question of whether AI will replace developers. Michael offers a compelling analogy, comparing the current shift to the historical evolution of music, from a time when a piano was a household staple to the modern era in which tools like GarageBand and Ableton have democratized music creation. He argues that while the role of a software engineer will fundamentally change, it won't disappear; rather, it will enable more people to create software in entirely new ways.
    The discussion then moves into the practical and security implications of this new paradigm, including the concept of "vibe coding," where applications are generated on the fly from a user's description. Michael cautions that you can't "vibe code" your security infrastructure, drawing a parallel to the early, vulnerable days of web browsers before sandboxing became standard. He predicts that a similar evolution is necessary for the AI world, requiring new frameworks with tightly defined security boundaries to contain potentially buggy, AI-generated code.
    Looking to the future, Michael shares his optimism for the emergence of open standards in the AI space, highlighting the collaborative development around the Model Context Protocol (MCP) by companies like Anthropic, OpenAI, Cloudflare, and Microsoft. He believes this trend toward openness, much like the open standards of the web (HTML, HTTP), will prevent a winner-take-all scenario and foster a more innovative and accessible ecosystem. The episode wraps up with a look at the incredible energy in the developer community and how the challenge of the next decade will be distributing this powerful new technology to every industry in a safe, secure, and trustworthy manner.
    Links
    WorkOS - Your app, enterprise ready
    WorkOS on YouTube
    MIT
    MCP Night 2025
    Snyk - The Developer Security Company
    Follow Us
    Our Website
    Our LinkedIn
    --------  
    33:11
  • Open Authorization In The World Of AI With Aaron Parecki
    Episode Summary
    How do we apply the battle-tested principles of authentication and authorization to the rapidly evolving world of AI and large language models (LLMs)? In this episode, we're joined by Aaron Parecki, Director of Identity Standards at Okta, to explore the past, present, and future of OAuth. We dive into the lessons learned from the evolution of OAuth 1.0 to 2.1, discuss the critical role of standards in securing new technologies, and unpack how identity frameworks can be extended to provide secure, manageable access for AI agents in enterprise environments.
    Show Notes
    In this episode, host Danny Allan is joined by a very special guest, Aaron Parecki, Director of Identity Standards at Okta, to discuss the critical intersection of identity, authorization, and the rise of artificial intelligence. Aaron begins by explaining the history of OAuth, which was created to solve the problem of third-party applications needing access to user data without the user having to share their actual credentials. This foundational concept of delegated access has become ubiquitous, but as technology evolves, so do the challenges.
    Aaron walks us through the evolution of the OAuth standard, from the limitations of OAuth 1 to the flexibility and challenges of OAuth 2, such as the introduction of bearer tokens. He explains how the protocol was intentionally designed to be extensible, allowing for later additions like OpenID Connect to handle identity and DPoP to enhance security by proving possession of a token. This modular design is why he is now working on OAuth 2.1, a consolidation of best practices, rather than a complete rewrite.
    The conversation then shifts to the most pressing modern challenge: securing AI agents and LLMs that need to interact with multiple services on a user's behalf. Aaron details the new "cross-app access" pattern he is working on, which places the enterprise Identity Provider (IdP) at the center of these interactions. This approach gives enterprise administrators crucial visibility and control over how data is shared between applications, solving a major security and management headache. For developers building in this space today, Aaron offers practical advice: leverage individual user permissions through standard OAuth flows rather than creating over-privileged service accounts.
    Links
    Okta
    OpenID Foundation
    IETF
    The House Files PDX (YouTube Channel)
    WIMSE
    AuthZEN Working Group
    aaronpk on GitHub
    Snyk - The Developer Security Company
    Follow Us
    Our Website
    Our LinkedIn
    --------  
    36:07
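One of the OAuth 2.1 best-practice consolidations the episode alludes to is making PKCE (RFC 7636) a required part of the authorization code flow. As a minimal illustration of the client-side step only, assuming nothing beyond the RFC itself: the client generates a random code_verifier, sends its S256 code_challenge in the authorization request, and later presents the verifier in the token request so the server can recompute and compare the hash:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> a 43-character base64url verifier, which sits
    # within the RFC's allowed 43..128 character range.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), no padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier))  # 43
```

Because the verifier never appears in the front-channel redirect, an attacker who intercepts the authorization code cannot redeem it — one example of the kind of hardening OAuth 2.1 folds in by default.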
  • The Evolution Of Platform Engineering With Massdriver CEO Cory O’Daniel
    Episode Summary
    Dive into the ever-evolving world of platform engineering with Cory O’Daniel, CEO and co-founder of Massdriver. This episode explores the journey of DevOps, the challenges of building and scaling infrastructure, and the crucial role of creating effective abstractions to empower developers. Cory shares his insights on the shift towards platform engineering as a means to build more secure and efficient software by default.
    Show Notes
    In this episode of The Secure Developer, host Danny Allan sits down with Cory O’Daniel, CEO and co-founder of Massdriver, to discuss the dynamic landscape of platform engineering. Cory, a seasoned software engineer and first-time CEO, shares his extensive experience in the Infrastructure as Code (IaC) space, tracing his journey from early encounters with EC2 to founding Massdriver. He offers candid advice for developers aspiring to become CEOs, emphasizing the importance of passion and early customer engagement.
    The conversation delves into the evolution of DevOps over the past two decades, highlighting the constant changes in how software is run, from mainframes to serverless containers and now AI. Cory argues that the true spirit of DevOps lies in operations teams producing products that developers can easily use. He points out the challenge of scaling operations expertise, suggesting that IT and cloud practices need to mature in software development to create better abstractions for developers, rather than expecting developers to become infrastructure experts.
    A significant portion of the discussion focuses on the current state of abstractions in IaC. Cory contends that existing public abstractions, like open-source Terraform modules, are often too generic and don't account for specific business logic, security, or compliance requirements. He advocates for operations teams building their own prescriptive modules that embed organizational standards, effectively shifting security left by design rather than by burdening developers.
    The episode also touches on the potential and limitations of AI in the operations space, with Cory expressing skepticism about AI's current ability to handle the contextual complexities of infrastructure without significant, organization-specific training data. Finally, Cory shares his optimism for the future of platform engineering, viewing it as a return to the original intentions of DevOps, where operations teams ship software with ingrained security and compliance, leading to more secure systems by default.
    Links
    Massdriver
    Ansible
    Chef
    Terraform
    DevOps is Bullshit
    Elephant in the Cloud
    Docker
    Postgres
    OpenTofu
    Helm
    Redis
    Elixir
    Snyk - The Developer Security Company
    Follow Us
    Our Website
    Our LinkedIn
    --------  
    40:01
  • The Future Of API Security With FireTail’s Jeremy Snyder
    Episode Summary
    Jeremy Snyder is the co-founder and CEO of FireTail, a company that enables organizations to adopt AI safely without sacrificing speed or innovation. In this conversation, Jeremy shares his deep expertise in API and AI security, highlighting the second wave of cloud adoption and his pivotal experiences at AWS during key moments in its growth from startup onwards.
    Show Notes
    In this episode of The Secure Developer, host Danny Allan sits down with Jeremy Snyder, co-founder and CEO of FireTail, to unravel the complexities of API security and explore its critical intersection with the burgeoning field of artificial intelligence. Jeremy brings a wealth of experience, tracing his journey from early days in computational linguistics and IT infrastructure, through a pivotal period at AWS during its startup phase, to eventually co-founding FireTail to address the escalating challenges in API security driven by modern, decoupled software architectures.
    The conversation dives deep into the common pitfalls and crucial best practices for securing APIs. Jeremy clearly distinguishes between authentication (verifying identity) and authorization (defining permissions), emphasizing that failures in authorization are a leading cause of API-related data breaches. He sheds light on vulnerabilities like Broken Object-Level Authorization (BOLA), explaining how seemingly innocuous practices like using sequential integer IDs can expose entire datasets if server-side checks are missed. The discussion also touches on the discoverability of backend APIs and the persistent challenges surrounding multi-factor authentication, including the human element in security weaknesses like SIM swapping.
    Looking at current trends, Jeremy shares insights from FireTail's ongoing research, including their annual "State of API Security" report, which has uncovered novel attack vectors such as attempts to deploy malware via API calls. A significant portion of the discussion focuses on the new frontier of AI security, where APIs serve as the primary conduit for interaction, and for potential exploitation. Jeremy details how AI systems and LLM integrations introduce new risks, citing a real-world example of how a vulnerability in an AI's web crawler API could be leveraged for DDoS attacks. He speculates on the future evolution of APIs, suggesting that technologies like GraphQL might become more prevalent to accommodate the non-deterministic and data-hungry nature of AI agents.
    Despite the evolving threats, Jeremy concludes with an optimistic view, noting that the gap between business adoption of new technologies and security teams' responses is encouragingly shrinking, leading to more proactive and integrated security practices.
    Links
    FireTail
    Rapid7
    Snyk - The Developer Security Company
    Follow Us
    Our Website
    Our LinkedIn
    --------  
    38:00


About The Secure Developer

Securing the future of DevOps and AI: real talk with industry leaders.
Podcast website



v7.23.7 | © 2007-2025 radio.de GmbH
Generated: 9/15/2025 - 8:52:39 AM