
The MLSecOps Podcast

MLSecOps.com

Available Episodes

Showing 5 of 41
  • AI Security: Vulnerability Detection and Hidden Model File Risks
    In this episode of the MLSecOps Podcast, the team dives into the transformative potential of Vulnhuntr: zero-shot vulnerability discovery using LLMs. Madison Vorbrich hosts Dan McInerney and Marcello Salvati to discuss Vulnhuntr’s ability to autonomously identify vulnerabilities, including zero-days, using large language models (LLMs) like Claude. They explore the evolution of AI tools for security, the gap between traditional and AI-based static code analysis, and how Vulnhuntr enables both developers and security teams to proactively safeguard their projects. The conversation also highlights Protect AI’s bug bounty platform, huntr.com, and its expansion into model file vulnerabilities (MFVs), emphasizing the critical need to secure AI supply chains and systems.
    Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out:
      • Protect AI Guardian: Zero Trust for ML Models
      • Recon: Automated Red Teaming for GenAI
      • Protect AI’s ML Security-Focused Open Source Tools
      • LLM Guard: Open Source Security Toolkit for LLM Interactions
      • Huntr: The World's First AI/Machine Learning Bug Bounty Platform
    Duration: 38:19
  • AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk
    Full transcript with links to resources available at https://mlsecops.com/podcast/ai-governance-essentials-empowering-procurement-teams-to-navigate-ai-risk.
    In this episode of the MLSecOps Podcast, Charlie McCarthy from Protect AI sits down with Dr. Cari Miller to discuss the evolving landscape of AI procurement and governance. Dr. Miller shares insights from her work with the AI Procurement Lab and ForHumanity, delving into the essential frameworks and strategies needed to mitigate risks in AI acquisitions. They cover the AI Procurement Risk Management Framework, practical ways to ensure transparency and accountability, and how the September 2024 OMB Memo M-24-18 is guiding AI acquisition in government. Dr. Miller also emphasizes the importance of cross-functional collaboration and AI literacy to support responsible AI procurement and deployment in organizations of all types.
    Duration: 37:41
  • Crossroads: AI, Cybersecurity, and How to Prepare for What's Next
    In this episode of the MLSecOps Podcast, Distinguished Engineer Nicole Nichols from Palo Alto Networks joins host and Machine Learning Scientist Mehrin Kiani to explore critical challenges in AI and cybersecurity. Nicole shares her unique journey from mechanical engineering to AI security, her thoughts on the importance of clear AI vocabularies, and the significance of bridging disciplines in securing complex systems. They dive into the nuanced definitions of AI fairness and safety, examine emerging threats like LLM backdoors, and discuss the rapidly evolving impact of autonomous AI agents on cybersecurity defense. Nicole’s insights offer a fresh perspective on the future of AI-driven security, teamwork, and the growth mindset essential for professionals in this field.
    Duration: 33:15
  • AI Beyond the Hype: Lessons from Cloud on Risk and Security
    On this episode of the MLSecOps Podcast, we’re bringing together two cybersecurity legends. Our guest is the inimitable Caleb Sima, who joins us to discuss security considerations for building and using AI, drawing on his 25+ years of cybersecurity experience. Caleb's impressive journey includes co-founding two security startups acquired by HP and Lookout, serving as Chief Security Officer at Robinhood, and currently leading cybersecurity venture studio WhiteRabbit and chairing the Cloud Security Alliance AI Safety Initiative.
    Hosting this episode is Diana Kelley (CISO, Protect AI), an industry powerhouse with a long career dedicated to cybersecurity and a longtime host on this show. Together, Caleb and Diana share a thoughtful discussion full of unique insights for the MLSecOps Community of learners.
    Duration: 41:06
  • Generative AI Prompt Hacking and Its Impact on AI Security & Safety
    Welcome to Season 3 of the MLSecOps Podcast, brought to you by Protect AI! In this episode, MLSecOps Community Manager Charlie McCarthy speaks with Sander Schulhoff, co-founder and CEO of Learn Prompting. Sander discusses his background in AI research, focusing on the rise of prompt engineering and its critical role in generative AI. He also shares insights into prompt security, the creation of LearnPrompting.org, and its mission to democratize prompt engineering knowledge. This episode also explores the intricacies of prompting techniques, "prompt hacking," and the impact of competitions like HackAPrompt on improving AI safety and security.
    Duration: 31:59


About The MLSecOps Podcast

Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today.
Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.

v7.1.1 | © 2007-2024 radio.de GmbH
Generated: 12/26/2024 - 12:49:02 PM